I have got a new doorbell. It’s brilliant. It should be; it cost £89. It’s a Ring video doorbell; you’ll have seen them around. There are others available, made by other companies, with other four-letter names such as Nest and Arlo. When someone rings my doorbell, I’m alerted on my smartphone. I can see who is there, and speak to them.
My phone is ringing! C major first inversion chord, arpeggiated, repeated, for the musically trained – you’ll recognise it if you’ve heard it. It’s a delivery. Amazon, as it happens; Amazon acquired Ring in 2018, reportedly for more than $1bn.
“Hi, Amazon guy, I’m not in… I mean, I’m upstairs.” I’m not, but I don’t want him – or anyone else – to know that. “Could you leave it behind the bins, please?”
Visitors don’t even have to ring the bell. I can set it to alert me when there is motion up to nine metres away from the door. Or I can just open the app on my phone and get a live feed of the street. “A lot happens at your front door,” says Ring in its marketing spiel.
Something happened at Luke Exelby’s front door. Luke, a lorry driver, was at home in Dunstable, Bedfordshire, watching telly in bed with his wife at about one in the morning (he works nights and keeps unconventional hours). A notification on his phone went off, alerting him that there was something moving at the front door.
“I looked at it, and I saw a man was trying to get into our porch,” he tells me. Was he scared? “I’m quite a big bloke – I know that sounds a bit knobbish,” he laughs. “And to be honest he looked really old.” So Luke went downstairs. But by the time he got there, the man had scarpered.
In the morning Luke contacted the police, who sent round a forensics team. They told him there had been a couple of burglaries in the neighbourhood. Luke, who is signed up to a Ring Protect plan (from £2.50 a month), which allows him to save footage captured by his doorbell, shared his with the police. “Because we got a picture of the person’s face, and exactly where he put his hands on the door, they had his fingerprints. They could link his face and his fingerprints to the burglaries around the corner. They caught him straight away.”
Look on YouTube and you can find hours of footage captured by video doorbell cameras: attempted burglaries, package thefts, as well as some more bizarre episodes – weirdos, doorbell-lickers, even bears poking about (that was in California). A friend of a friend has a clip of a man having a poo on his neighbour’s doorstep. In the eight years since the Ring doorbell was invented (originally as Doorbot in 2013; its founder Jamie Siminoff appeared on Shark Tank, the American version of Dragons’ Den), it has evolved from a doorbell that replicates the “caller ID” on your phone into a self-installed global CCTV network. The millions of cameras around the world have not only provided the internet with a new genre of viral video, but fuelled the message boards of Neighbourhood Watch-style apps and groups.
Perhaps most notably, it has even become a crime-solving tool: the last footage of Sarah Everard alive, before she was abducted while walking home in south London, was captured on a video doorbell. What seemed like a practical bit of kit has evolved far beyond its original scope. What next?
The police are certainly pleased about it. Det Supt Andy Smith of Suffolk constabulary first became aware of the benefits of this technology back in 2017. “One of Suffolk’s most prolific burglars was caught attempting to break into a residential property,” he tells me. “The occupier was away, but her doorbell system activated on her phone and she could see the individual trying to get in through the front door.”
She called the police, and they picked him up a couple of days later. The doorbell footage was instrumental, first in the police being alerted and, Smith says, “it actually recorded with some clarity the offence taking place. It was unequivocal evidence, very good facial capture.” The man pleaded guilty, and got a custodial sentence.
It inspired a collaboration: Ring gave Suffolk constabulary a number of doorbells to hand out in areas of higher crime. Smith says they have seen tangible results, and the scheme has been useful in tackling not just burglary, but also domestic violence, antisocial behaviour, car crime. He describes it as “a massive benefit in terms of fighting crime. I would encourage any member of the public to think about this or similar technology.” Ring have since handed out free or discounted doorbells to several other police forces, including Leicestershire, Humberside and Hertfordshire. In Wiltshire, residents with video doorbells are being asked to register on a police database.
Smith tells me about a couple of other incidents where a video doorbell camera has helped secure a conviction. A 45-year-old man from Lowestoft was caught on camera and subsequently jailed for attempted burglary. And a 40-year-old man, also from Lowestoft (is Lowestoft the crime capital of Britain?), was convicted of the same offence with Ring’s help.
Smith says his force is using doorbell footage more and more often. “It features heavily in terms of house-to-house inquiries. If we have a major crime, then we will scope a particular area out.” This is happening in high-profile cases, too – police appealing to the public to check the footage on their doorbell cameras, or their car dashcams, to help their investigations.
In January this year, Corey Rice, 19, pleaded guilty at Sheffield Crown Court to wounding, attempted robbery and possession of a blade. While trying to steal a gold bracelet, he stabbed its owner twice on his own doorstep in Rotherham. The man’s girlfriend managed to get him into the house, covered in blood and struggling to breathe. He was taken to hospital where his chest was drained and his lung re-expanded. He survived. The incident was captured on their Ring doorbell.
Prosecutor Conor Quinn thinks the footage, which was presented to Rice’s legal team, played a big part in Rice’s decision to plead guilty. “Without it he may well have had a trial,” Quinn tells me. And who knows how that would have gone, “where you’ve got one person’s word against another. The footage was instrumental in supporting the complainant’s version of the incident.” Had Rice pleaded not guilty, Quinn says he would have played the footage in court. Rice was sentenced to seven years in prison.
I am already feeling more secure since I got my new doorbell. It’s as though I’m always at home (forget the fact that, thanks to the pandemic, I basically am always at home). Phone alert, ding ding ding. Here we go again. Not a ring at the bell this time, just motion near the door. And it’s only my girlfriend, coming home. Wonder why, at this time. I’ll ask her. “Hey!”
She jumps. “Fuck off, creepy talking doorbell spy,” she says, and goes inside, slamming the door, before I get the chance to ask her. I love my girlfriend, she’s such a luddite when it comes to new technology. Apologies for her language. Actually, why is she home, I wonder? I’m sure she said she was going to be out all day today. Maybe I’ll just keep it on live view for a while, then give her another little surprise when she comes out again.
It’s fun, watching out from my own front door, when I’m not there. There goes the bus – driver not wearing a mask, maybe I’ll report him? And that black cat, on the scrounge for food… Oh, and now doing a poo, not on the doorstep, like the horrible man on my friend’s friend’s neighbour’s, but in our raised bed, right on the radishes. And Paul over the road, off to work. Late start today, Paul.
Who are these two, at my door, ringing the bell? Jehovah’s Witnesses, perhaps? I’m not sure I like the look of them, to be honest – it’s probably just because I’ve never seen them before. I could save the footage and share it with my neighbours. Have you seen these two, do you know who they are, or what they’re up to? Posts like these are rife on neighbourhood sites such as Nextdoor, or on local WhatsApp or Facebook groups, increasingly popular since we all started spending so much time at home.
In the US, Ring has an app of its own, called Neighbors, which lets people share, view and comment on crime and security information in their communities. It’s not available in the UK at the moment, and Ring won’t say whether it’s going to be. But the company has filed a patent for creating a “suspicious persons” database, using images taken by the doorbells. The machines currently don’t have facial recognition capabilities, unlike some rival products such as Google Nest.
More than 2,000 US police and fire departments have partnered with Ring. This allows them to contact users in a particular area and ask them to provide footage from the app to help with an investigation. In 2020, requests for footage were made relating to 22,335 incidents. Some police departments have offered discounted or free Ring doorbells in exchange for a promise to register them with law enforcement and submit requested footage.
But, in contrast to the experience of Suffolk constabulary’s Smith, US media reports have disputed Ring’s crime-busting effectiveness. In spite of some high-profile cases where a doorbell captured footage of a crime (the kidnapping of an eight-year-old girl in Fort Worth, for example), an investigation by NBC News found that there was little evidence of Ring leading to arrests or reducing crime overall. Rather, police were spending a lot of time reviewing footage of raccoons.
Ring says it doesn’t have any formal partnerships with police forces in the UK. “Police forces do not have access to Ring customers’ devices, recorded videos or live streams,” a spokesperson told me. “Police in the UK only have access to customers’ video recordings if a customer chooses to download and share them. Customers are in total control of the information they choose to share.”
They wouldn’t tell me how many Ring doorbells they’ve sold in the UK or in the world, but in various official communications they have referred to “millions”. In my road, roughly a quarter of doorbells are now video doorbells. In Luke Exelby’s street in Dunstable, it’s about half, he says.
Not everyone is thrilled about this. Silkie Carlo of civil liberties organisation Big Brother Watch has concerns about who else might be watching. She points towards reporting by The Intercept in 2019 which found Ring customer video feeds had been accessible, unencrypted, to the company’s Ukraine-based research and development team.
Carlo says it’s about data collection. “That’s the purpose of these devices; we’re really just on the precipice of this as an issue.” You buy the device, sign up to the plan, “then you’re in this data-sharing, cloud storage relationship with them, paying monthly fees. Their ability to be in your home, in your domestic environment, is hugely profitable, probably more so than the product.”
Mariano delli Santi, legal and policy officer at digital campaigning organisation Open Rights Group, says it’s part of a fundamental shift in the very nature of the internet. “The internet didn’t used to be a place where people were surveilled. Do you remember a cartoon of a dog surfing the internet, which says: on the internet, nobody knows you’re a dog? That’s what it used to be like.”
His example of how far it has come from that, and everyone (and his dog, presumably) knowing you’re a dog? “The United States surveillance programmes that were covered extensively by your newspaper.” He’s talking about the NSA files, as revealed by Edward Snowden in 2013. “The government realised that corporations had a huge pool of data about what people were conducting online. And they could just access that with data access requests.”
He’s not saying the same is going on with footage from video doorbells, only that it could. And that a network of cameras provided by the same company can be – and has been – abused. “It was abused, for example, during Black Lives Matter protests [in California in 2020]: police authorities in the US sent requests to owners of Ring doorbells to identify the people who were protesting.”
This kind of technology can promote racial profiling. In the US in 2019, Vice looked at more than 100 videos posted on the Neighbors app over a two‑month period, and found that the majority of people reported as “suspicious” were people of colour. In the same year, US Democratic senator Edward Markey wrote to Amazon chief executive Jeff Bezos raising concerns that collaborations between Ring and law enforcement could disproportionately affect minorities. He said sharing footage with police “could easily create a surveillance network that places dangerous burdens on people of colour” and fuel “racial anxieties”. More than 30 civil rights organisations wrote an open letter calling on US government officials to end Amazon Ring’s police partnerships.
Chris Gilliard, an expert in privacy and surveillance, as well as a professor of English at Macomb Community College, near Detroit, wasn’t surprised by the Vice reporting. “The problem with these technologies is that they exacerbate and allow people to amplify their existing prejudices,” he tells me on the phone from Michigan. “So if Ring didn’t exist, or Neighbors didn’t exist, and a racist person saw a black guy riding his bike down the street and they thought, ‘Oh, that guy doesn’t live in our neighbourhood,’ they had limited options of what they could do. They couldn’t take to a platform and broadcast it to dozens or hundreds of people.”
Ring has come under fire for a number of security breaches, with hackers able to access systems remotely. In 2019 an investigation by tech website Gizmodo found it could pinpoint the locations of tens of thousands of Ring users using data from posts on the Neighbors app. In January last year, four Ring employees were sacked for accessing customer video feeds in a manner that “exceeded what was necessary for their job functions”.
Ring says protecting customers’ privacy, security and control over their devices and personal information is paramount to them. In 2020, they launched an in-app dashboard that allows users to change privacy and security settings. They have also introduced a second layer of verification to help prevent unauthorised users gaining access to a Ring account, and will soon be rolling out end-to-end encryption to UK customers. Ring says that none of its employees have unrestricted access to customer data and all personal information is treated as highly confidential.
Gilliard, in Michigan, sees a sinister corporate plan. “A thing like Ring belongs on the entire spectrum of Amazon’s move towards surveillance and control – not only of workers, but also of consumers, and of space in general,” he says. “The intent is to create a massive web of surveillance in an attempt to try to shape the way people live their lives. It’s an attempt to replace a real sense of community with a notion of community that’s mediated by Amazon.”
Big Brother Watch’s Carlo has further concerns about what this kind of tech is doing to us. Is Silicon Valley enabling a generation of digital curtain-twitchers? “It effectively changes the nature of the world we live in,” she says. “The fact that when you walk down a street, your presence is being logged.”
Meet David from London – he’d rather not share his surname. He and his wife got a Ring doorbell after they moved into their new house, when their toddler was a baby. They were getting a lot of deliveries, and often weren’t in to receive them. “It’s very useful to be able to say: ‘Can you put it behind the bin,’” he says.
Plus they live in an area where there is some crime and antisocial behaviour. “It does make us feel a bit more secure.” Then there was an incident, a postman ringing the bell when neither of them was at home. “You can see him muttering something, I couldn’t quite make it out, but something like ‘for fuck’s sake’ or ‘fucking typical’. It was quite aggressive.”
David, who is signed up to the Ring Protect Plan, tweeted Royal Mail, attaching the footage. They said it wasn’t clear what the postman had said; as far as he knows, no action was taken. How would David have felt if the postie had been fired, I wonder, for swearing in frustration at work – something everyone has done – when he thought he was alone? Without the Ring doorbell, the incident wouldn’t have been an incident; David would never have known, and just come home to a note on the doormat. “It did make me think about that complaining culture and whether we are snooping,” he admits.
David says that his street’s WhatsApp group does sometimes share footage of people they think look suspicious, particularly after, say, someone’s car has been broken into. This, says Carlo, is a dangerous path to go down. “Neighbourhood citizen policing – we’re talking about a personal-tech-based surveillance state. I don’t think we’re there now, but in five, six, seven years we could create that kind of environment.”
David talks through the doorbell to his toddler, who calls it the ding-dong. Sometimes he uses it to check that their cleaner isn’t cutting hours; their previous cleaner was consistently leaving 20 minutes early. Babysitters, too. “I think it’s useful to have in the back of your mind that you know when people are coming or going.”
It is turning us all into spies, then. Carlo thinks so. “New technology lends itself to that. If you think, even 10 years ago, the lengths someone would have to go to, to get this kind of covert CCTV, with motion sensors, in the home. Now it’s the default, in a way.”
She thinks it is selling fear, because fear is almost as profitable as data – and that there are further dangers, even within the domestic environment. “You are recording the details of your life, and you can see how, when there is conflict, that could easily become part of the picture. Imagine what that would mean in the context of an abusive or controlling relationship: ‘You say you got back at 12 last night, but actually it was 12.30, or 1am.’ Or, ‘Why were you with that person?’”
Interesting that earlier, Det Supt Smith – who, incidentally, is fully aware of the civil liberties issues – was talking about how this technology is useful in fighting domestic violence; and now Carlo is talking about how it could also form part of the picture of domestic abuse or coercive control. Both right, I’m sure. Then there’s Luke Exelby, who says one of the reasons he got a Ring doorbell in the first place was to check up – in a worried dad way – on his four teenage daughters while he’s off working nights. “I keep telling them: text me when you get home. They never do, though. The notifications let me know when they get home. My kids know I’m not trying to spy on them.”
Ding ding ding, phone alert! It’s my girlfriend, leaving the house. She looks over at the doorbell, at me; she knows. Then she comes a bit closer, with a look that says don’t you bloody dare. Think I’ll leave it this time.
Startup QuSecure will this week introduce a service aimed at safeguarding data once quantum computing renders current public-key encryption technologies vulnerable.
It’s unclear when quantum computers will easily crack classical crypto – estimates range from three to five years to never – but conventional wisdom is that now’s the time to start preparing to ensure data remains encrypted.
A growing list of established vendors like IBM and Google and smaller startups – Quantum Xchange and Quantinuum, among others – have worked on this for several years. QuSecure, which is launching this week after three years in stealth mode, will offer a fully managed service with QuProtect, which is designed to secure data not only against conventional threats today but also against future attacks from nation-states and bad actors leveraging quantum systems.
“The current and near-term capability in quantum computing, which would allow for the decryption, is the big threat,” Mike Brown, a retired Navy rear admiral and former senior cybersecurity specialist with the Department of Defense (DoD) and Homeland Security (DHS), told The Register. “That’s what we’ve been talking about for years.”
Brown, founder and president of security consultancy Spinnaker Security, who now consults with QuSecure and other companies, said there has been steady progress in building up the capabilities of quantum computers in the US and abroad. He points out that nation-states with a checkered history in cyberspace, such as China, are spending huge sums and mounting massive efforts to develop such systems.
Steal now, decrypt later
A key worry is what is known as “steal now, decrypt later,” QuSecure co-founder and COO Skip Sanzeri told The Register.
“This is the biggest problem, where data gets exfiltrated and it sits on servers waiting to be decrypted. If that data has 50 or 75 years of life left in its value [and] you crack it in 10 years, that’s 40 to 65 years of value. This is the problem,” Sanzeri said.
“This is why things need to happen. We’re getting a lot of inbound inquiries from both federal and commercial [entities]. We’ve got pilots going across both sides of it. People are now starting to take it seriously.”
In addition, a bipartisan bill – dubbed the Endless Frontiers Act – calls for spending $100 billion on emerging technologies, including quantum computing and artificial intelligence, to close the innovation gap with China. The bill is moving through Congress.
Another bill, the Quantum Computing Cybersecurity Preparedness Act, is also finding bipartisan support to ensure that government systems adopt post-quantum cryptography by securing systems with algorithms and encryption that will be difficult for even quantum computers to break.
The USA’s National Institute of Standards and Technology (NIST) is partway through a multi-year process of setting such standards, with hopes of publishing them by 2024.
The promise of quantum
Quantum computers promise to solve problems that are out of reach of today’s supercomputers.
Classical computing elements are bits, which can be either 0 or 1. Quantum computing uses qubits, which can be 0, 1 or any combination of the two – what’s referred to as a superposition. The concern is that quantum systems will easily be able to break encryption methods that would take the most powerful machines today years to crack.
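The bit-versus-qubit distinction can be sketched numerically. The following is a toy single-qubit simulation for illustration only (it is not tied to any vendor's product): a qubit is represented by two amplitudes, and measurement probabilities come from squaring them.

```python
import math

# A qubit's state is a pair of amplitudes (alpha, beta); measuring it
# yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
def measure_probs(alpha, beta):
    return abs(alpha) ** 2, abs(beta) ** 2

# A classical bit corresponds to a fully determined state: always 0.
print(measure_probs(1.0, 0.0))

# A Hadamard gate applied to |0> produces an equal superposition:
# alpha = beta = 1/sqrt(2), so 0 and 1 are equally likely on measurement.
h = 1 / math.sqrt(2)
print(measure_probs(h, h))  # roughly (0.5, 0.5), up to float rounding
```

The superposition state is what lets quantum algorithms such as Shor's explore many candidate factors at once, which is the root of the threat to public-key encryption.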
Like other vendors, QuSecure is working to address these challenges. Its QuProtect as-a-service architecture includes a software suite that combines zero-trust, post-quantum cryptography, quantum-strength keys and active defense. It leverages Quantum Random Number Generation (QRNG) to create truer randomness in the encryption keys, which is central to secure encryption because patterns in keys can often be detected by cryptanalysts.
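Why randomness quality matters can be illustrated with a toy entropy check. This is purely illustrative – it is not how QRNGs or cryptographic keys are actually evaluated – but it shows how a patterned key scores far below the 8 bits per byte that uniformly random data approaches.

```python
import math
import os
from collections import Counter

def byte_entropy(data):
    """Shannon entropy in bits per byte; uniform random data approaches 8.0."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

patterned = b"ABCD" * 64      # a 256-byte "key" with obvious structure
random_key = os.urandom(256)  # OS randomness, standing in for a QRNG here

print(byte_entropy(patterned))   # exactly 2.0 - only four symbols, trivially modelled
print(byte_entropy(random_key))  # close to 8.0
```

A cryptanalyst does far more than count byte frequencies, but the gap between the two scores captures the basic point: structure in key material is exploitable, and high-quality randomness removes it.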
The architecture also relies on a proprietary technique that enables QuSecure to get this protection out to the various endpoints, from on-premises servers and web browsers to the Internet of Things and the edge, while also ensuring the security of the networks that data traverses.
“We now have a way to create a quantum channel without putting software out on all these devices,” Sanzeri said. “This method that we’ve discovered and are using … allows us to create quantum channels rapidly between any end devices. If you think of IoT and edge, a lot of time those little sensors don’t have any storage capacity, almost no compute capacity aside from doing the one job they do. But we can still secure those.”
That said, if an enterprise or government agency needed to keep its data behind a firewall, QuSecure will manage it on-premises or in a private cloud.
QuSecure also built software interfaces, a UI and protocol switch and developed the ability to send encryption keys. It also partners with companies like Quintessence Labs and ID Quantique for QRNG.
In addition, it has what Sanzeri called “crypto agility”. The architecture is optimized for all the algorithm finalists in the NIST program, so whichever ones an organization eventually chooses will be supported by the QuSecure service.
Webdoctor CEO David Crimmins offers up his insights into the growth of telehealth in Ireland and worldwide.
The pandemic has resulted in an unprecedented shift to healthcare being delivered outside of the traditional clinical settings. While businesses and industries in marketplaces across the world were forced to pivot their services or close their doors for a period of time over the last two years, the pandemic created an opportunity for the telehealth sector as patient demand for virtual healthcare soared rapidly.
Digital health offerings are not new services per se. In fact, Webdoctor was established in 2013. And whilst telemedicine was already on the rise before Covid-19, the pandemic put a magnified spotlight on the sector.
Recent reports show the global market is projected to grow to $185.6bn by 2026, with 83pc of patients saying they expect to use telemedicine post pandemic. We’ve already seen an indication of this in the Irish marketplace with the demand for Webdoctor consultations up 226pc in 2021 compared to 2019 – the last full year before the pandemic.
This trend is backed up by another recent report, which surveyed hundreds of clinicians around the world. More than half (56pc) of doctors surveyed predicted that they will make most of their clinical decisions using artificial intelligence tools within the next 10 years.
With the telehealth space evolving at a rapid pace both domestically and internationally, digital healthcare platforms and technologies are fast becoming much more than just a convenient alternative.
Mirroring global trends in the telehealth sector, results from the latest National Health Watch report conducted by Webdoctor illustrate that while demand for online GP services may have increased out of necessity due to Covid-19, it is now the preferred option for the majority.
For example, given the choice, 60pc of people would prefer to use an online GP or prescription service instead of going to an in-person consultation for general health concerns. This figure rises when it comes to specific concerns such as erectile dysfunction (85pc), hair loss (70pc) or sexual health checks (77pc).
This demand, combined with lengthy waiting times for physical in-person GP appointments, is driving mass growth for online GP and prescription services like Webdoctor and other health-tech platforms.
Telemedicine also offers employers a real opportunity to implement digital healthcare offerings as part of their employee benefits strategies. A recent study from Mercer revealed that 68pc of employers globally expect to increase their investment in digital health and wellbeing, while 40pc of employees say they would be more likely to stay with a company that offers digital health services. By looking after the wellbeing of your workforce through these benefits, you are contributing to the overall long-term success of your business.
In addition, employers in traditional healthcare businesses such as a GP practice or pharmacy, should seize the opportunity to expand and implement new telemedicine technology where possible. The sector is constantly evolving and by using digital tools to complement traditional care, it offers the opportunity to broaden their current offering, improve patient care and potentially increase profits.
Remote monitoring with wearables
So, given the swift pace of progress within the sector, what innovations are coming down the track?
Wearable technology has become a regular part of our everyday lives and is significantly changing how we collect and analyse health-related data. These devices range from smartwatches to virtual at-home health monitors such as Pulsewave, a modern alternative to the traditional arm cuff to measure blood pressure.
A key benefit of wearable sensors is that by providing real time data and enabling people to track their progress, they are encouraging patients to take a more active role in their health. This is something everyone could gain from.
As more digital healthcare platforms incorporate remote patient monitoring using wearable technology, clinicians could draw on a more diverse range of data, supporting more accurate diagnoses and, ultimately, better patient treatment and outcomes.
Increased patient autonomy
Digital healthcare platforms can give patients direct instant access to their medical records or provide them with self-tracking devices. This gives people the opportunity to take control of their health.
As the sector continues to evolve, patient autonomy is likely to continue to increase. While this is a positive outcome for patients, it will be important not to lose the personal interaction and relationship side of traditional medicine as it progresses.
Effective, integrated telehealth services are more than just GPs behind a computer screen. They essentially act as a virtual gateway to the healthcare system, providing easily accessible, affordable medical advice and a positive patient experience, which ultimately improves the patient and GP relationship.
At Webdoctor, our mantra is to “allow clinicians operate at the top of their licence” by reducing unnecessary administrative processes and freeing up their time to focus on patient outcomes. The future of this sector will see hybrid models emerge, and the key to success for health-tech platforms and medical practices alike will be to recognise this and integrate telemedicine into their patients’ care and journey.
What’s also evident is that there is much more growth and development still to come for the telehealth sector. We will see the continued integration of telemedicine and online GP services into everyday life.
Health professionals are excited to explore what the post-pandemic future of telehealth looks like and patients will ultimately benefit. Telehealth, with its flexibility, innovation and convenience, is most definitely here to stay.
In the aftermath of yet another racially motivated shooting that was live-streamed on social media, tech companies are facing fresh questions about their ability to effectively moderate their platforms.
Payton Gendron, the 18-year-old gunman who killed 10 people in a largely Black neighborhood in Buffalo, New York, on Saturday, broadcast his violent rampage on the video-game streaming service Twitch. Twitch says it took down the video stream in mere minutes, but that was still enough time for people to create edited copies of the video and share them on other platforms including Streamable, Facebook and Twitter.
So how do tech companies work to flag and take down videos of violence that have been altered and spread on other platforms in different forms – forms that may be unrecognizable from the original video in the eyes of automated systems?
On its face, the problem appears complicated. But according to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren’t financially motivated to invest resources into developing it.
Farid’s work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded. The Guardian spoke with Farid about the wider problem of barring unwanted content from online platforms, and whether tech companies are doing enough to fix the problem.
This interview has been edited for length and clarity. Twitch, Facebook and YouTube did not immediately respond to a request for comment.
Twitch says that it took the Buffalo shooter’s video down within minutes, but edited versions of the video still proliferated, not just on Twitch but on many other platforms. How do you stop the spread of an edited video on multiple platforms? Is there a solution?
It’s not as hard a problem as the technology sector would have you believe. There are two things at play here. One is the live video: how quickly could and should that have been found, and how do we limit the distribution of that material?
The core technology to stop redistribution is called “hashing” or “robust hashing” or “perceptual hashing”. The basic idea is quite simple: you have a piece of content that is not allowed on your service, either because it violates your terms of service, because it’s illegal, or for whatever other reason. You reach into that content and extract a digital signature, or a hash as it’s called.
This hash has some important properties. The first one is that it’s distinct. If I give you two different images or two different videos, they should have different signatures, a lot like human DNA. That’s actually pretty easy to do. We’ve been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.
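Those three properties can be illustrated with a toy “average hash”, one of the simplest perceptual hashes. This is only a sketch of the idea, not any production system (real perceptual hashes such as PhotoDNA or PDQ are far more elaborate); the 8×8 “frames” and pixel values below are made up for illustration.

```python
# A minimal sketch of a perceptual ("average") hash, illustrating the
# properties described above: distinct signatures for different content,
# stability under small edits, and cheap comparison. Illustrative only --
# production systems are far more sophisticated.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int:
    one bit per pixel, set when that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes -- very cheap to compute."""
    return bin(a ^ b).count("1")

# Three "frames": an original, a brightened copy (a small modification),
# and an unrelated image.
original   = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brightened = [[min(255, p + 20) for p in row] for row in original]
unrelated  = [[(r * c * 37) % 256 for c in range(8)] for r in range(8)]

h1 = average_hash(original)
h2 = average_hash(brightened)
h3 = average_hash(unrelated)

print(hamming(h1, h2))  # small: the edit barely changes the signature
print(hamming(h1, h3))  # large: a different image gets a different signature
```

Because hashes are just 64-bit integers, comparing an upload against millions of known-bad signatures is fast, which is what makes the scheme workable at platform scale.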
So if we had a technology that satisfied all of those criteria, Twitch would say, we’ve identified a terror attack that’s being live-streamed. We’re going to grab that video. We’re going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded, its signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.
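The sharing-and-matching flow described above might be sketched like this, assuming hashes are 64-bit integers and using a small Hamming-distance tolerance so lightly edited copies still match. The function names, threshold, and hash values are illustrative assumptions, not any platform’s actual API.

```python
# A sketch of the industry-sharing flow: one platform adds a signature to a
# shared blocklist; every platform checks new uploads against it with a small
# Hamming-distance tolerance so lightly edited copies still match.
# All names and values here are hypothetical.

BANNED_HASHES = set()
MATCH_THRESHOLD = 8  # bits of difference we tolerate before we miss a copy

def hamming(a, b):
    """Bits differing between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def report_banned(h):
    """Called by the platform that first identifies the video."""
    BANNED_HASHES.add(h)

def allow_upload(h):
    """Called on each new upload. Linear scan for clarity; a production
    system would index the hashes for fast nearest-neighbor lookup."""
    return all(hamming(h, banned) > MATCH_THRESHOLD for banned in BANNED_HASHES)

report_banned(0xDEADBEEFDEADBEEF)        # first platform shares the signature
print(allow_upload(0xDEADBEEFDEADBEEF))  # exact re-upload: blocked (False)
print(allow_upload(0xDEADBEEFDEADBEEB))  # lightly edited copy: still blocked
print(allow_upload(0x0123456789ABCDEF))  # unrelated video: allowed (True)
```

The tolerance is the key design choice: an exact-match check would be defeated by a one-pixel edit, while too loose a threshold would start blocking unrelated content.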
How do tech companies respond right now and why isn’t it sufficient?
It’s a problem of collaboration across the industry and it’s a problem of the underlying technology. And if this was the first time it happened, I’d understand. But this is not, this is not the 10th time. It’s not the 20th time. I want to emphasize: no technology’s going to be perfect. It’s battling an inherently adversarial system. But this is not a few things slipping through the cracks. Your main artery is bursting. Blood is gushing out a few liters a second. This is not a small problem. This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was with the one before that, it is inexcusable from a technological standpoint.
But the companies are not motivated to fix the problem. And we should stop pretending that these are companies that give a shit about anything other than making money.
Talk me through the existing issues with the tech that they are using. Why isn’t it sufficient?
I don’t know all the tech that’s being used. But the problem is the resilience to modification. We know that our adversary – the people who want this stuff online – are making modifications to the video. They’ve been doing this with copyright infringement for decades now. People modify the video to try to bypass these hashing algorithms. So [the companies’] hashing is just not resilient enough. They haven’t learned what the adversary is doing and adapted to that. And that is something they could do, by the way. It’s what virus filters do. It’s what malware filters do. [The] technology has to constantly be updated to new threat vectors. And the tech companies are simply not doing that.
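To see why resilience to modification is the hard part, consider how a trivial edit defeats the naive average hash sketched earlier: mirroring a frame left-to-right can flip every bit of its signature, so the copy no longer matches even approximately. (The gradient “frame” below is a contrived worst case, chosen for illustration.)

```python
# A sketch of the adversarial problem: against a naive average hash, even a
# trivial edit -- mirroring the frame left-to-right -- can change every bit
# of the signature, so the copy sails past an exact or near-match check.

def average_hash(pixels):
    """8x8 grayscale grid -> 64-bit hash (one bit per pixel vs. the mean)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# A left-to-right gradient "frame" and its mirror image.
frame    = [[c * 36 for c in range(8)] for _ in range(8)]
mirrored = [list(reversed(row)) for row in frame]

print(hamming(average_hash(frame), average_hash(mirrored)))  # 64: every bit flips
```

This is the cat-and-mouse dynamic the interview describes: a robust system has to anticipate transformations like this, for example by hashing a canonicalized version of the frame or by also hashing common variants, and keep being updated as new evasions appear.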
Why haven’t companies implemented better tech?
Because they’re not investing in technology that is sufficiently resilient. This is that second criterion that I described. It’s easy to have a crappy hashing algorithm that sort of works. But if somebody is clever enough, they’ll be able to work around it.
When you go on to YouTube and you click on a video and it says, sorry, this has been taken down because of copyright infringement, that’s a hashing technology. It’s called Content ID. And YouTube has had this technology forever because in the US, we passed the DMCA, the Digital Millennium Copyright Act, that says you can’t host copyrighted material. And so the company has gotten really good at taking it down. For you to still see copyrighted material, it has to be really radically edited.
So the fact that no small number of modified copies passed through is simply because the technology’s not good enough. And here’s the thing: these are now trillion-dollar companies we are talking about collectively. How is it that their hashing technology is so bad?
These are the same companies, by the way, that know just about everything about everybody. They’re trying to have it both ways. They turn to advertisers and tell them how sophisticated their data analytics are so that they’ll pay them to deliver ads. But then when it comes to us asking them, why is this stuff on your platform still? They’re like, well, this is a really hard problem.
The Facebook files showed us that companies like Facebook profit from getting people to go down rabbit holes. But a violent video spreading on your platform is not good for business. Why isn’t that enough of a financial motivation for these companies to do better?
I would argue that it comes down to a simple financial calculation that developing technology that is this effective takes money and it takes effort. And the motivation is not going to come from a principled position. This is the one thing we should understand about Silicon Valley. They’re like every other industry. They are doing a calculation. What’s the cost of fixing it? What’s the cost of not fixing it? And it turns out that the cost of not fixing is less. And so they don’t fix it.
Why is it that you think the pressure on companies to respond to and fix this issue doesn’t last?
We move on. They get bad press for a couple of days, they get slapped around in the press and people are angry and then we move on. If there was a hundred-billion-dollar lawsuit, I think that would get their attention. But the companies have phenomenal protection from the misuse and the harm from their platforms. They have that protection here. In other parts of the world, authorities are slowly chipping away at it. The EU announced the Digital Services Act that will put a duty of care [standard on tech companies]. That will start saying, if you do not start reining in the most horrific abuses on your platform, we are going to fine you billions and billions of dollars.
[The DSA] would impose pretty severe penalties on companies, up to 6% of global revenue, for failure to abide by the legislation, and there’s a long list of things that they have to abide by, from child safety issues to illegal material. The UK is working on its own online safety bill that would put in place a duty of care standard that says tech companies can’t hide behind the fact that it’s a big internet, it’s really complicated and they can’t do anything about it.
And look, we know this will work. Prior to the DMCA it was a free-for-all out there with copyright material. And the companies were like, look, this is not our problem. And when they passed the DMCA, everybody developed technology to find and remove copyright material.
It sounds like the auto industry as well. We didn’t have seat belts until we created regulation that required seat belts.
That’s right. I’ll also remind you that in the 1970s there was a car called the Ford Pinto where they put the gas tank in the wrong place. If somebody bumped into you, your car would explode and everybody would die. And what did Ford do? They said, OK, look, we can recall all the cars and fix the gas tank. It’s gonna cost this amount of dollars. Or we just leave it alone, let a bunch of people die, settle the lawsuits. It’ll cost less. That’s the calculation: it’s cheaper. The reason that calculation worked is that tort reform had put caps on these lawsuits that said, even when you knowingly allow people to die because of an unsafe product, we can only sue you for so much. And we changed that, and it worked: products are much, much safer. So why do we treat the offline world in a way that we don’t treat the online world?
For the first 20 years of the internet, people thought that the internet was like Las Vegas. What happens on the internet stays on the internet. It doesn’t matter. But it does. There is no online and offline world. What happens on the online world very, very much has an impact on our safety as individuals, as societies and as democracies.
There’s some conversation about duty of care in the context of section 230 here in the US – is that what you envision as one of the solutions to this?
I like the way the EU and the UK are thinking about this. We have a huge problem on Capitol Hill, which is, although everybody hates the tech sector, it’s for very different reasons. When we talk about tech reform, conservative voices say we should have less moderation because moderation is bad for conservatives. The left is saying the technology sector is an existential threat to society and democracy, which is closer to the truth.
So what that means is the regulation looks really different when you think the problem is something other than what it is. And that’s why I don’t think we’re going to get a lot of movement at the federal level. The hope is that between [regulatory moves in] Australia, the EU, UK and Canada, maybe there could be some movement that would put pressure on the tech companies to adopt some broader policies that satisfy the duty here.