When she first began talking to her peers in the House of Lords about the rights of children on the internet, Baroness Kidron says she looked like “a naysayer”, like someone who was “trying to talk about wooden toys” or, in her husband’s words, like “one middle-aged woman against Silicon Valley”. It was 2012 and the film-maker and recently appointed life peer was working on her documentary InRealLife, spending “hundreds of hours in the bedrooms of children” to discover how the internet affects young lives. What she saw disturbed her.
“I did what they were doing – gaming, falling in love, watching pornography, going to meet-ups, making music – you name it, it happened,” Beeban Kidron says. The film explored everything from children’s exposure to porn, to rampant online bullying, to the way privacy is compromised online. But Kidron noticed that one thing underpinned it all: on the internet, nobody knows you’re a kid. “Digital services and products were treating them as if they were equal,” she says. “The outcome of treating everyone equally is you treat a kid like an adult.”
Almost a decade later, Kidron has pushed through a Children’s Code that hopes to change this landscape for ever. The Age Appropriate Design Code, an amendment to the 2018 Data Protection Act, came into effect this month. It requires online services to “put the best interests of the child first” when designing apps, games, websites and internet-connected toys that are “likely” to be used by kids.
In total, there are 15 standards that companies need to adhere to in order to avoid being fined up to 4% of their global turnover. These include offering “bite-size” terms and conditions for children; giving them “high privacy” by default; turning off geolocation and profiling; and avoiding “nudge techniques” that encourage children to turn off privacy settings. The code, which will be enforced by the Information Commissioner’s Office (ICO), also advises against “using personal data in a way that incentivises children to stay engaged”, such as feeding children a long string of auto-playing videos one after the other.
The code was introduced in September 2020, but offered companies a 12-month transition period. In that time, the world’s tech giants have seemingly begun responding to the sting of Kidron’s sling. Instagram now prevents adults from messaging children who don’t follow them on the app, while anyone under 16 who creates an account will have it set to private by default. TikTok has implemented a bedtime for notifications; teens aged 13-15 will no longer be pinged after 9pm. Meanwhile, YouTube has turned off autoplay for users aged 13-17, while Google has blocked the targeted advertising of under-18s.
But hang on, why does TikTok’s bedtime only apply to those 13 and over? Are toddlers OK to use the app until 2am? You’ve just spotted the first flaw in the plan. While social media sites require users to be at least 13 to sign up for their services (in line with America’s 21-year-old Children’s Online Privacy Protection Act), a quick glance at reality shows that kids lie about their age in order to snap, share and status-update. Creating a system in which children can’t lie, by, for example, necessitating that they provide ID to access an online service, ironically risks compromising their privacy further.
“There is nothing that stops us having a very sophisticated age-check mechanism in which you don’t even know the identity of the person, you just know that they’re 12,” Kidron argues, pointing to a report on age verification that she recently worked on with her organisation 5Rights Foundation, entitled But how do they know it is a child?. Third-party providers, for example, could confirm someone’s age without passing on the data to tech giants, or capacity testing could allow websites to estimate someone’s age based on whether they can solve a puzzle (no prizes for figuring out the numerous ways that could go wrong).
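The third-party flow Kidron describes can be sketched in a few lines. This is purely illustrative – the verifier key, token format and bracket names are invented here, and a production scheme would use public-key signatures rather than a shared secret – but it shows how a service could learn an age bracket without ever seeing a name or birth date:

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: a real scheme would use public-key signatures,
# not a secret shared between the verifier and every service.
VERIFIER_KEY = b"demo-secret"

def issue_age_token(age):
    """Verifier side: sign an age bracket, never a name or birth date."""
    if age < 13:
        bracket = "under_13"
    elif age < 18:
        bracket = "13_to_17"
    else:
        bracket = "adult"
    claim = json.dumps({"bracket": bracket}).encode()
    sig = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest()
    return base64.b64encode(claim).decode() + "." + sig

def check_age_token(token):
    """Service side: verify the signature, and learn only the bracket."""
    payload, sig = token.rsplit(".", 1)
    claim = base64.b64decode(payload)
    expected = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    return json.loads(claim)["bracket"]

print(check_age_token(issue_age_token(12)))  # the service sees only "under_13"
```

The point of the design is in what the token omits: the verifier may have checked ID documents, but the service receiving the token can recover nothing beyond the signed bracket.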
Whatever the solution, Kidron is currently working on a private member’s bill that sets minimum standards of age assurance, thereby preventing companies from choosing their own “intrusive, heavy handed or just terrible, lousy, and ineffective” techniques.
How did Kidron go from looking like a “naysayer” to changing the landscape so drastically? Kidron began making documentaries in the 80s before working in Hollywood (most notably directing the Bridget Jones sequel The Edge of Reason). After becoming a baroness, she founded the 5Rights Foundation to fight for children’s digital rights. She says she had her “early adopters” in parliament, including the archbishop of York, Stephen Cottrell, Conservative peer Dido Harding and Liberal Democrat peer Timothy Clement-Jones. “That was my gang,” Kidron says, but others remained sceptical for years. “The final set of people only came on board this summer, once they saw what the tech companies were doing.”
The Children’s Code as a whole defines a child as anyone under 18, in line with the United Nations Convention on the Rights of the Child (UNCRC). For Kidron, it’s about much more than privacy – “a child’s right to unfettered access to different points of view is actually taken away by an algorithmic push for a particular point of view,” she argues, also noting that the right to the best possible health is removed when companies store and sell data about children’s mental health. “It’s nothing short of a generational injustice,” she says. “Here was this technology that was purporting to be progressive, but in relation to children it was regressive – it was taking away the existing rights and protections.”
How did these claims go down in Silicon Valley? Conversations with executives were surprisingly “very good and productive”, according to Kidron, but she ultimately realised that change would have to be forced upon tech companies. “They have an awful lot of money to have an awful lot of very clever people say an awful lot of things in an awful lot of spaces. And then nothing happens,” she says. “Anyone who thinks that the talk itself is going to make the change is simply wrong.”
And yet while companies must now comply with the code, even Kidron admits, “they have to comply in ways that they determine”. TikTok’s bedtime, for example, seems both arbitrary and easy to get around (children are well versed in changing the date and time on their devices to proceed in video games). Yet Kidron says the exact o’clock is irrelevant – the policy is about targeting sleeplessness in children, which in turn enables them to succeed at school. “These things seem tiny… but they’re not. They’re about the culture and they’re about how children live.”
As for children working their way around barriers, Kidron notes that transgression is part of childhood, but “you have to allow kids to transgress, you can’t just tell them it’s really normal”. “The problem we have is kids who are eight are looking at hardcore, violent, misogynistic porn and there’s no friction in the system to say, ‘Actually, that’s not yours.’”
Yet problems also arise when we allow tech companies, not parents, to set boundaries for our children. In 2017, YouTube came under fire after its parental controls blocked children from seeing content made by LGBTQ+ creators (YouTube initially apologised for the “confusion” and said only videos that “discuss more sensitive issues” would be restricted in the future). Kidron says she’s “not a big takedown freak” and is “committed to the idea that children have rights to participate”, but can the same be said of companies hoping to avoid fines? Numerous American websites remain inaccessible in Europe after the implementation of General Data Protection Regulation (GDPR) laws in 2018, with companies preferring to restrict access rather than adapt.
For now, it remains to be seen how the Children’s Code will be enforced in practice; Kidron says it’s “the biggest redesign of tech since GDPR”, but in December 2020 a freedom of information request revealed that more than half of GDPR fines issued by the ICO remain unpaid.
Still, Kidron is certain of one thing: that tech companies are “disordering the world” with their algorithms – “making differences of their terms for people who are popular and have a lot of followers versus those who are not” and “labelling things that get attention without really thinking about what that attention is about”. These are prescient remarks: the day after we spoke, the Wall Street Journal revealed that Facebook runs a program that exempts high-profile users from its rules, and published internal Facebook studies demonstrating that Instagram is harmful to teens. One internal presentation slide read: “We make body image issues worse for one in three teen girls.” Instagram’s head of public policy responded to the report in a blog post, writing: “The story focuses on a limited set of findings and casts them in a negative light.”
Whether or not Kidron was once “one middle-aged woman against Silicon Valley”, today she has global support. The recent changes implemented by social media companies are not just UK-based, but have been rolled out worldwide. Kidron says her code is a Trojan horse, “starting the conversation that says, you can regulate this environment”.
But this Trojan horse is only beginning to open up. “We had 14 Factory Acts in the 19th century on child labour alone,” Kidron says, adding that the code is likely to be the first of many more regulations to come. “I think today we air punch,” she says, when asked how it feels to have led the charge for change. “Tomorrow, we go back to work.”
Despite a record number of publicly disclosed security flaws in 2021, Microsoft managed to improve its stats, according to research from BeyondTrust.
Figures from the National Vulnerability Database (NVD) of the US National Institute of Standards and Technology (NIST) show last year broke all records for security vulnerabilities. By December, according to pentester Redscan, 18,439 were recorded. That’s an average of more than 50 flaws a day.
However, just 1,212 vulnerabilities were reported in Microsoft products last year, said BeyondTrust – a 5 percent drop on the previous year. In addition, critical vulnerabilities in the software (those with a CVSS score of 9 or more) plunged 47 percent, with Windows Server specifically down 50 percent. There was bad news for Internet Explorer and Edge vulnerabilities, though: they were up 280 percent on the prior year, with 349 flaws spotted in 2021.
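The “CVSS score of 9 or more” threshold is not BeyondTrust’s invention – it is the top band of the CVSS v3 qualitative severity scale. A minimal sketch of that classification (the CVE identifiers and scores below are made-up sample data):

```python
# CVSS v3.x qualitative severity bands, as defined in the specification.
# "Critical" is exactly the "9 or more" bucket that the report counts.
def cvss_severity(score):
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Tallying a feed of (identifier, base score) pairs the way the report does;
# these entries are illustrative, not real CVEs.
sample = [("CVE-A", 9.8), ("CVE-B", 7.5), ("CVE-C", 9.1), ("CVE-D", 5.3)]
critical = [cve for cve, score in sample if cvss_severity(score) == "Critical"]
print(critical)  # ['CVE-A', 'CVE-C']
```

Note that the bands are hard cut-offs: an 8.9 “High” and a 9.0 “Critical” differ by a single decimal point, which is worth remembering when headline counts of “critical” flaws rise or fall sharply.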
BeyondTrust commented that analysis had been simplified by Microsoft’s move to the Common Vulnerability Scoring System (CVSS), although an unfortunate side effect is that security gurus can no longer determine the impact of administrative rights on critical vulnerabilities.
“From 2015 to 2020,” said the report, “removing admin rights could have mitigated, on average, 75 percent of critical vulnerabilities.”
It’s a very good point: keeping permissions to the bare minimum is excellent practice, although difficult to enforce.
The decline in vulnerabilities marks a change for Microsoft. In 2016, the count of vulnerabilities stood at 451, according to the report. By 2020 it had leapt to 1,268, including a 48 percent year-on-year rise between 2019 and 2020 alone. A drop, even if only to 1,212, is therefore a first.
And the trendiest categories are…
The report also drilled into vulnerability categories. Elevation of Privilege flaws topped the table with 588 vulnerabilities, up from 559 in 2020, while Remote Code Execution came second with 326, down from 345 the prior year.
Explaining the apparent explosion in Edge and Internet Explorer numbers (349 vulnerabilities up from 92 in 2020), BeyondTrust pointed to a consolidation in the browser market and a renewed focus on browser attacks as exploited plugins (such as Flash) were dropped and bug bounties made reporting vulnerabilities more financially attractive. It also pointed out that only six were critical (a record low).
The decline in Windows vulnerabilities was attributed to Microsoft’s efforts to improve the security architecture of its supported products, as was the fall in Windows Server holes. The move from security as an afterthought to something front and center is also a factor, even if it has taken a few iterations of operating systems.
That said, there were some spectacular holes in the company’s products during 2021. Last year’s Exchange Server vulnerabilities, for example, left many administrators scrambling to patch systems. 2021’s stability, from the standpoint of Microsoft’s vulnerabilities, must be considered alongside the rapid rises of previous years.
As the report authors note, simply patching the problems might not deal with the underlying issues. Removing admin rights and privileges also plays a part in reducing the attack surface. ®
The new Ford Geofencing Speed Limit Control system alerts a driver when the car breaks a speed limit – then slows down the vehicle.
Speed limit signs may soon be a thing of the past as Ford is now trialling connected vehicle technology that can automatically reduce a car’s speed in certain zones to improve road safety.
Up to 29pc of all road fatalities in Europe, depending on the country, are pedestrians and cyclists, according to a 2020 report by the European Transport Safety Council. Setting up speed limits in certain areas is one of the frontline measures to minimise road accidents.
Now, US carmaker Ford is testing its new Geofencing Speed Limit Control system across two German cities, Cologne and Aachen, to see if the technology can help in making roads safer, preventing fines for drivers and improving the appearance of roadsides.
A geofence is a virtual perimeter around a real-world area. It is often used by mobility companies and start-ups, such as Ireland’s Zipp Mobility, to identify and enforce low-speed zones in cities.
How does it work?
Ford’s new system uses geofencing technology to alert a driver through the dashboard when the vehicle enters an area with a designated speed limit. It then lowers the vehicle speed to match the limit automatically.
However, the driver can override the automated system and deactivate speed limit control at any time. They can also use the technology to set their own geofencing zones at speeds as low as 20km/h.
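Ford has not published how its system is implemented, but the basic idea – check whether the car’s GPS fix falls inside a zone and cap the target speed accordingly – can be sketched with simple circular zones. The coordinates, radii and limits below are illustrative, not taken from the trial:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Each zone: (centre_lat, centre_lon, radius_m, limit_kmh) -- invented examples.
zones = [
    (50.9375, 6.9603, 500, 30),  # e.g. a low-speed zone in Cologne
    (50.7753, 6.0839, 300, 20),  # e.g. a pedestrianised area in Aachen
]

def capped_speed(lat, lon, requested_kmh):
    """Return the allowed speed given any geofences covering this position."""
    limit = requested_kmh
    for zlat, zlon, radius, zone_limit in zones:
        if haversine_m(lat, lon, zlat, zlon) <= radius:
            limit = min(limit, zone_limit)
    return limit

print(capped_speed(50.9376, 6.9604, 50))  # inside the first zone -> 30
```

In Ford’s trial the driver can always override the cap, so a system like this would surface the reduced limit on the dashboard rather than silently enforce it; real deployments would also use polygonal zones rather than circles.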
“Connected vehicle technology has the proven potential to help make everyday driving easier and safer to benefit everyone, not just the person behind the wheel,” said Michael Huynh, manager of City Engagement Germany at Ford Europe.
“Geofencing can ensure speeds are reduced where – and even when – necessary to help improve safety and create a more pleasant environment.”
Ford already has in-built assistance technologies that help drivers ensure they are abiding by speed limits. However, the new geofencing speed limit control system is the first that can automatically reduce a vehicle’s speed without the driver’s intervention.
Eyes on the road
The year-long trial, which runs until March 2023, is a collaboration between the Ford City Engagement team, city officials in Cologne and Aachen, and Ford software engineers in Palo Alto, California.
Together with colleagues in Aachen, the Palo Alto engineers developed technology that connects the vehicle to the geofencing system for GPS tracking and data exchange.
Germany has more than 1,000 types of road signs, which can often confuse drivers and distract them from the road ahead. Geofencing technologies such as the new Ford system can help drivers stay focused.
“Our drivers should benefit from the latest technical support, including geofencing based assistant systems that enable them to keep to the speed limits and fully concentrate on the road,” said Dr Bert Schröer of AWB, a Cologne waste disposal company involved in the trial.
Welcome to Pushing Buttons, the Guardian’s gaming newsletter.
Remember how, in the wake of yet more awful shootings in the US this month, Fox News decided to blame video games rather than, you know, the almost total absence of meaningful gun control? Remember how I said last week that the video-games-cause-violence “argument” was so mendacious and nakedly manipulative that I wasn’t going to dignify it with a response?
Well, here I am, responding, because the supposed link between video games and real-life violence is one of the most persistent myths that I’ve encountered over the course of my career, and it has an interesting (if also infuriating) history.
Many video games have violent content, just as many films and TV series have violent content (and of course many books, as anyone who has endured a Bret Easton Ellis novel will attest). And it makes intuitive sense that the interactivity of games – especially shooting games – might appear more troubling, from the outside, than passive media such as film. (I gotta say, though, that in 25 years of playing video games I have never seen a scene as violent or upsetting as, say, a Quentin Tarantino movie.)
But the idea that exposure to these violent games turns people into killers in real life is comprehensively false – and it deflects attention from the actual drivers of real-world violence, from inequality to access to firearms to online radicalisation. It is a very politically motivated argument, and one that makes me instantly suspicious of the person wielding it. The NRA, for instance, trots it out on the regular. Donald Trump, inciter of actual real-life violent riots, was fond of it too. Why might that be, I wonder?
First, the facts: there is no scientifically credible link between video games and real-life violence. A lot of the studies around this issue are, in a word, bad – small sample sizes, lab conditions that have no relation to how people engage with games in the real world – but the best we have show either no link at all between violent games and violent thoughts or behaviour, or a positive correlation so minuscule as to be meaningless. A review of the science in 2020, which looked at and re-evaluated 28 global studies of video games and violence, found no cumulative harm, no long-term effect, and barely even any short-term effect on aggression in the real world. It concluded that the “long-term impacts of violent games on youth aggression are near zero”.
This seems self-evident: video games have been a part of popular culture for at least 50 years, since Pong, and violent games have existed in some form since Space Invaders, though they’ve gotten more visually realistic over time. If video games were in some way dangerous – if they significantly affected our behaviour, our emotional responses – you would expect to have seen widespread, cross-cultural changes in how we act. That is demonstrably not the case. Indeed, overall, violent crime has been decreasing for more than 20 years, the exact period of time during which games have become ubiquitous. Though it would be unscientific to credit video games with that effect, you would think that if the generations of people who’ve now played Doom or Call of Duty or Grand Theft Auto were warped by it, we might be seeing some evidence of that by now.
It is true that some perpetrators of mass murders – such as the Columbine shooters – were fans of video games. But given that the great majority of teenagers are fans of video games, that doesn’t mean much. More often than a fixation on violent media – of all kinds – mass shooters display an obsession with weapons or explosives or real-life killers, an interest in extremist views, social ostracisation. These are not otherwise well-adjusted people suddenly compelled to real-world violence by a game, or a film, or a Marilyn Manson album.
The history of the “video games cause violence” argument goes back even further than video games themselves: it’s an extension of the panic that flares up whenever a new and supposedly morally abject form of youth culture emerges. In the 1940s, when New York’s mayor ordered 2,000 pinball machines to be seized so that he could performatively smash them up, it was arcades; during the satanic panic of the 1980s and beyond, it was metal music. Since the mid to late 90s, it’s been video games, and no amount of studies debunking any link between them and real-world violence seems to make a difference.
So why does this argument keep showing up? In short: because it’s an easy scapegoat that ties into older generations’ instinctive wariness of technology, screen time and youth culture, and it greatly benefits institutions like the NRA and pro-gun politicians to have a scapegoat. Whenever video games are implicated in a violent event, there is usually stunning hypocrisy on display. After the El Paso shooting in 2019, Walmart removed violent video game displays from its stores – but continued to sell actual guns. Fox News, the TV network that platforms Tucker Carlson and the great replacement theory with him, is happy to point out that the perpetrator of a mass shooting played video games, while remaining oddly quiet on the racist ideas that show up in these shooters’ manifestos.
I’m not saying that we shouldn’t examine video game violence at all, or question it. Does every game that involves sneaking up on enemies need a gratuitous neck-breaking animation when you succeed in overpowering a guard? Why do games so often resort to violence as the primary method of interaction with a virtual world? Do we really need more violent media – couldn’t we be playing something more interesting than another military shooter? These are valid and interesting questions. But they have nothing to do with real-world violence.
What to play
Back in 1994, video game magazine Edge ended its review of Doom with this infamous line: “If only you could talk to these creatures, then perhaps you could try and make friends with them, form alliances… Now that would be interesting.” Nearly 30 years later, “talk to the monsters” jokes and memes still crop up, even if nobody remembers where it originally came from.
Turns out that reviewer had a point, though, as proved by 2015’s Undertale, probably the most interesting anti-violent video game I’ve played. In this lo-fi role-playing game, you get into fights with plenty of monsters, but instead of battering them into submission you can win them over by talking them down and showing them mercy, which is often the more difficult option. In most games, there’s no question about what you do when a monster turns up in your path: this one makes you interrogate yourself. I interpreted it at the time as social commentary on pacifism and community, and looking back, I don’t think that was too much of an overreach.
Available on: PC, PlayStation 4, Xbox One, Nintendo Switch Approximate play time: 6-10 hours
What to read
I’m going to start with a book this time: Lost in a Good Game: Why We Play Video Games and What They Can Do For Us, by Pete Etchells. A researcher and lecturer in biological psychology, Etchells brings a perspective on video games that is both relatable and extremely well-informed. He looks at the evidence (or lack of evidence) behind all the most pervasive beliefs about video games, and in the end he makes the case that most of the effects that they have on individuals and society are actually positive. It’s a reassuring read that I often recommend to worried parents who don’t play games themselves.
Grand Theft Auto V, perhaps the poster child for morally bankrupt video games that supposedly corrupt the youth, has now sold 165 million copies, following its launch on PS5 and Xbox Series X earlier this year. This makes it one of the most popular entertainment products of all time in any medium, and yet strangely, in the nine years since it was released, we have not seen the emergence of roving gangs of teenagers looking to act out their chaotic GTA Online shootouts in real life. Funny that.