The last month has brought a flurry of changes to major tech platforms related to child safety online, and specifically to the use and protection of children’s personal data.
First, there was Instagram. In late July, Facebook announced some sweeping changes to the platform, billed as “giving young people a safer, more private experience”. The company began giving those under 16 private accounts by default, ensuring that kids only share content publicly if they actively dive into settings and change their privacy preferences accordingly.
It also introduced a new set of restrictions for people with “potentially suspicious accounts” – “accounts belonging to adults that may have recently been blocked or reported by a young person for example.” In other words, if you’re a creep who goes around messaging kids, you’ll soon find that young people don’t show up in your algorithmic recommendations; you won’t be able to add them as friends; you won’t be able to comment on their posts; and you won’t be able to read comments others have left.
Finally, the platform announced “changes to how advertisers can reach young people with ads”. People under 18 can now only be targeted on Instagram by “their age, gender and location”: the vast surveillance apparatus that Facebook has built will not be made available to advertisers. Instagram’s rationale for this is that, while the platform “already [gives] people ways to tell us that they would rather not see ads based on their interests or on their activities on other websites and apps … young people may not be well equipped to make these decisions.”
At the time, I found that last change the most interesting one by far, because of the implicit claim it was making: that it’s bad to target people with adverts if you’re not absolutely certain that’s what they want. Facebook would hardly accept that targeted advertising can be harmful, so why, I wondered, was it suddenly so keen to make sure that young people weren’t hit by it?
Along came Google
Then YouTube announced a surprisingly similar set of changes, and everything started to make a bit more sense. Again, the default privacy settings were updated for teen users: now, videos they upload will be private by default, with users under 18 having to manually dig into settings to publish their posts to the world.
Again, advertising is being limited, with the company stepping in to remove “overly commercial content” from YouTube Kids, an algorithmically curated selection of videos that are supposedly more child-friendly than the main YouTube catalogue. On YouTube proper, it has updated the disclosures that appear on “made for kids” content containing paid promotions. (Paid promotions are banned on YouTube Kids, so content that is officially “made for kids” yet contains them isn’t allowed on the platform explicitly for kids. Such is the way of YouTube.)
YouTube also introduced a third change, adding and updating its “digital wellbeing” features. “We’ll be turning take a break and bedtime reminders on by default for all users ages 13-17 on YouTube,” the company said. “We’ll also be turning autoplay off by default for these users.” Both settings can be overridden by users who want to change them, but they will provide a markedly different default experience for kids on the platform.
TikTok, meanwhile, will prevent teenagers from receiving notifications past their bedtime, the company said. It will no longer send push notifications after 9pm to users aged between 13 and 15; for 16- and 17-year-olds, notifications will not be sent after 10pm.
People aged 16 and 17 will now have direct messages disabled by default, while those under 16 will continue to have no access to them at all. And all users under 16 will now be prompted to choose who can see their videos the first time they post them, ensuring they do not accidentally broadcast to a wider audience than intended.
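The age-banded curfews described above amount to a simple rule table. A minimal sketch in Python (the exact cut-off hours, the assumption that the curfew starts on the hour, and the function name are my own, not TikTok's actual implementation):

```python
def push_allowed(age: int, hour: int) -> bool:
    """Return True if a push notification may be sent at `hour` (0-23).

    Models the reported policy: no pushes after 9pm for 13- to 15-year-olds,
    none after 10pm for 16- and 17-year-olds, no curfew for adults.
    When the curfew lifts in the morning isn't specified, so this only
    models the evening cut-off.
    """
    if 13 <= age <= 15:
        return hour < 21   # curfew from 9pm
    if 16 <= age <= 17:
        return hour < 22   # curfew from 10pm
    return True            # 18+: no curfew

print(push_allowed(14, 21))  # False
print(push_allowed(17, 21))  # True
```

The point of defaults like these is that the rule runs server-side for every teen account, rather than relying on each user to configure quiet hours themselves.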
It’s probably not a coincidence that three of the largest social networks in the world all announced a raft of child-safety features in the summer of 2021. So what could have prompted the changes?
Well, in just over two weeks’ time, the UK is going to begin enforcing the age appropriate design code, one of the world’s most wide-ranging regulations controlling the use of children’s data. We’ve talked about it before on the newsletter, in one of the B-stories in July, and I covered it in this Observer story:
The code, which was introduced as part of the same legislation that implemented GDPR in the UK, sees the Information Commissioner’s Office laying out a new standard for internet companies that are ‘likely to be accessed by children’. When it comes into force in September this year, the code will be comprehensive, covering everything from requirements for parental controls to restrictions on data collection and bans on “nudging” children to turn off privacy protections.
I asked the platforms whether the changes were indeed motivated by the age appropriate design code. A Facebook spokesperson said: “This update wasn’t based on any specific regulation, but rather on what’s best for the safety and privacy of our community. It’s the latest in a series of things we’ve introduced over recent months and years to keep young people safe on our platforms (which have been global changes, not just UK).”
TikTok declined to comment on whether the changes were prompted by the code, but I understand that they were – though the company is rolling them out globally because, once it had built the features, it felt doing so was the right thing to do. Google, meanwhile, indicated that the updates were core to its compliance with the AADC, while aiming beyond any single regulation – but it too wouldn’t comment on the record.
I also called up Andy Burrows, the head of child safety online policy at the NSPCC, who shared my scepticism at claims that the timing of these launches could be coincidental. “It is no coincidence that the flurry of announcements that we’ve seen comes just weeks before the age appropriate design code comes into effect,” he said, “and I think it’s a very clear demonstration that regulation works.”
The lack of public acknowledgment from the companies that regulation has influenced their actions is in stark contrast to the response to GDPR three years ago, when even Facebook had to acknowledge that it didn’t suddenly introduce a whole array of privacy options out of the goodness of its heart. The silence has correspondingly left an odd gap at the heart of coverage of these changes: they’ve been widely reported in the tech press, as well as in many mainstream American papers, with barely a whisper of acknowledgment that they are almost certainly down to a regulatory change in a mid-sized European market.
That, of course, is exactly how the tech companies would want it. Recognising that even a country as comparatively minor as the UK can still pass regulations that affect how platforms work globally is a shift in the power relationships between multinational companies and national governments, and one that might spark other nations to reassess their own ability to force changes upon tech companies.
Not that everyone is fully compliant with the age appropriate design code. The big unanswered question is around verification, Burrows points out: “The code is going to require age assurance, and so far we haven’t seen publicly many, or indeed any, of the big players set out how they’re going to comply with that, which clearly is a significant challenge.” In everything I’ve written above – every single restriction on teen accounts – the platforms are fundamentally relying on children to be honest as part of the sign-up process. It’s hard to verify someone’s age online, but very soon UK law isn’t going to take “it’s hard” as a sufficient excuse. The next few weeks are going to be interesting.
A non-fungible token (NFT) marketplace has introduced policies to ban insider trading, after an executive at the company was discovered to be buying artworks shortly before they were promoted on the site’s front page.
OpenSea, one of the leading sites for trading the digital assets, will now prevent team members buying or selling from featured collections and from using confidential information to trade NFTs. Neither practice was previously banned.
“Yesterday we learned that one of our employees purchased items that they knew were set to display on our front page before they appeared there publicly,” said Devin Finzer, the co-founder and chief executive of the site.
“This is incredibly disappointing. We want to be clear that this behaviour does not represent our values as a team. We are taking this very seriously and are conducting an immediate and thorough third-party review of this incident so that we have a full understanding of the facts and additional steps we need to take.”
NFTs are digital assets whose ownership is recorded and traced using a bitcoin-style blockchain. The NFT market boomed earlier this year as celebrities including Grimes, Andy Murray and Sir Tim Berners-Lee sold collectibles and artworks using the format. But the underlying technology has questionable utility, with some dismissing the field as a purely speculative bubble.
The insider trading came to light thanks to the public nature of the Ethereum blockchain, on which most NFT trades occur. Crypto traders noticed that an anonymous user was regularly buying items from the public marketplace shortly before they were promoted on the site’s front page, a prestigious slot that often brings significant interest from would-be buyers. The anonymous user would then sell the assets on, making vast sums in a matter of hours.
One trade, for instance, saw an artwork called Spectrum of a Ramenification Theory bought for about £600. It was then advertised on the front page and sold on for $4,000 a few hours later.
One Twitter user, ZuwuTV, linked the transactions to the public wallet of Nate Chastain, OpenSea’s head of product, demonstrating, using public records, that the profits from the trades were sent back to a wallet owned by Chastain.
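The sleuthing described above works because a public blockchain is, in effect, an append-only ledger that anyone can filter by address. A toy sketch of that idea (the addresses, items and amounts are invented for illustration; real analysis would query the Ethereum chain itself, not a Python list):

```python
# A public ledger: every transfer is visible to everyone, forever.
ledger = [
    # the anonymous wallet buys an artwork shortly before it is featured
    {"from": "0xANON", "to": "0xSELLER", "item": "artwork-1", "eth": 0.2},
    # after the front-page feature, someone buys it from that wallet at a markup
    {"from": "0xCOLLECTOR", "to": "0xANON", "item": "artwork-1", "eth": 1.3},
    # the profit is then moved on to another, identifiable wallet
    {"from": "0xANON", "to": "0xKNOWN", "item": None, "eth": 1.1},
]

def transfers_involving(address):
    """Every transfer touching `address` -- the basic filter on-chain
    sleuths apply to follow funds from wallet to wallet."""
    return [tx for tx in ledger if address in (tx["from"], tx["to"])]

for tx in transfers_involving("0xANON"):
    print(tx["from"], "->", tx["to"], tx["eth"], "ETH")
```

Chaining such filters is how observers linked the anonymous buyer's trades, and the onward flow of profits, to a known wallet: no subpoena required, just public records.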
While some, including ZuwuTV, described the process as “insider trading”, the loosely regulated market for NFTs has few restrictions on what participants can do. Some critics argue that even that terminology demonstrates that the sector is more about speculation than creativity.
“The fact that people are responding to this as insider trading shows that this is securities trading (or just gambling), not something designed to support artists,” said Anil Dash, the chief executive of the software company Glitch. “There are no similar public statements when artists get ripped off on the platform.
“If Etsy employees bought featured products from creators on their platform (or Patreon or Kickstarter workers backed new creators etc) that’d be great! Nobody would balk. Because they’d be supporting their goal,” Dash added.
Sir Clive Sinclair died on Thursday at home in London after a long illness, his family said today. He was 81.
The British entrepreneur is perhaps best known for launching the ZX range of 8-bit microcomputers, which helped bring computing, games, and programming into UK homes in the 1980s. This included the ZX80, said to be the UK’s first mass-market home computer for under £100, the ZX81, and the trusty ZX Spectrum. A whole generation in Britain grew up mastering coding on these kinds of systems in their bedrooms.
And before all that, Sir Clive founded Sinclair Radionics, which produced amplifiers, calculators, and watches, and was a forerunner to his Spectrum-making Sinclair Research. The tech pioneer, who eventually sold his computing biz to Amstrad, was knighted during his computing heyday, in 1983.
“He was a rather amazing person,” his daughter, Belinda Sinclair, 57, told The Guardian this evening. “Of course, he was so clever and he was always interested in everything. My daughter and her husband are engineers so he’d be chatting engineering with them.”
Sir Clive is survived by Belinda, his sons, Crispin and Bartholomew, aged 55 and 52 respectively, five grandchildren, and two great-grandchildren.
The UN’s human rights chief, Michelle Bachelet, has called for a moratorium on the sale and use of artificial intelligence technology until safeguards are put in place to prevent potential human rights violations.
Bachelet made the appeal on Wednesday (15 September) to accompany a report released by the UN’s Human Rights Office, which analysed how AI systems affect people’s right to privacy. Violations of privacy rights have knock-on effects on other rights, such as the rights to health, education and freedom of movement, the report found.
“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said.
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states,” Bachelet added.
The report was critical of justice systems which had made wrongful arrests because of flawed facial recognition tools. It appealed to countries to ban any AI tools which did not meet international human rights standards. A 2019 study from the UK found that 81pc of suspects flagged by the facial recognition technology used by London’s Metropolitan Police force were innocent.
Bachelet also highlighted the report’s concerns about the future use of data once it has been collected and stored, calling it “one of the most urgent human rights questions we face”.
The UN’s report echoes previous appeals made by European data protection regulators.
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) called in June for a ban on facial recognition in public places, urging EU lawmakers to prohibit the use of such technology in public spaces after the European Commission released its proposed regulations on the matter.
The EU’s proposed regulations did not recommend an outright ban. The commission instead emphasised the importance of creating “trustworthy AI.”