Google’s effort to build a “Privacy Sandbox” – a set of technologies for delivering personalized ads online without the tracking problems presented by cookie-based advertising – continues to struggle with its promise of privacy.
The Privacy Sandbox consists of a set of web technology proposals with bird-themed names, intended to target interest-based ads at groups of people rather than at individuals.
Much of this ad-related data processing is intended to occur within the browsers of internet users, to keep personal information from being spirited away to remote servers where it might be misused.
Simply put, the aim is to ensure that decisions about which ads you'll see, based on your interests, are made in your browser rather than in backend systems processing your data.
Google launched the initiative in 2019 after competing browser makers began blocking third-party cookies – the traditional way to deliver targeted ads and track internet users – and government regulators around the globe began tightening privacy rules.
The ad biz initially hoped that it would be able to develop a replacement for cookie-based ad targeting by the end of 2021.
But last month Google concluded the trial of its flawed FLoC – Federated Learning of Cohorts – sent the spec back for further refinement, and pushed back its timeline for replacing third-party cookies with Privacy Sandbox specs. Now it acknowledges that its purportedly privacy-protective remarketing proposal FLEDGE – First Locally-Executed Decision over Groups Experiment – also needs a tweak to prevent the technology from being used to track people online.
On Wednesday, John Mooring, senior software engineer at Microsoft, opened an issue in the GitHub repository for Turtledove (now known as FLEDGE) to describe a conceptual attack that would allow someone to craft code on webpages to use FLEDGE to track people across different websites.
That runs contrary to its very purpose. FLEDGE is supposed to enable remarketing – for example, a web store using a visitor’s interest in a book to present an ad for that book on a third-party website – without tracking the visitor through a personal identifier.
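The concern centers on what participants in the spec discussions call the one-bit leak: each ad auction reveals whether or not the browser belongs to a particular interest group, which is one bit of cross-site information. Repeated across enough auctions, those bits add up to an identifier. The sketch below is a simplified, hypothetical model of that idea – it does not use the real navigator.joinAdInterestGroup or navigator.runAdAuction APIs, just plain functions standing in for them – to show how 32 one-bit auction outcomes could reconstruct a 32-bit per-user ID on a different website.

```javascript
// Hypothetical model of the "one-bit leak" tracking attack.
// A script embedded on site A joins one interest group per set bit of a
// per-user ID; the same script on site B runs one auction per bit and
// observes which ads render, recovering the ID across the site boundary.

const ID_BITS = 32;

// Site A (the "writer"): join interest group "bit-i" only when bit i of
// the user's ID is set. Stand-in for navigator.joinAdInterestGroup().
function joinGroupsForUser(userId) {
  const groups = new Set();
  for (let i = 0; i < ID_BITS; i++) {
    if ((userId >>> i) & 1) groups.add(`bit-${i}`);
  }
  return groups; // in real FLEDGE, the browser persists these groups
}

// Site B (the "reader"): run one auction per bit. In this model an
// auction "wins" (an ad renders) exactly when the browser holds that
// group -- the single bit each uncontested auction can leak.
function runAuction(browserGroups, groupName) {
  return browserGroups.has(groupName); // true = ad rendered
}

function recoverUserId(browserGroups) {
  let id = 0;
  for (let i = 0; i < ID_BITS; i++) {
    if (runAuction(browserGroups, `bit-${i}`)) id |= (1 << i);
  }
  return id >>> 0; // force unsigned 32-bit result
}

// The user visits site A, then site B: the ID crosses the site boundary
// without any third-party cookie involved.
const userId = 0xCAFE1234;
const groups = joinGroupsForUser(userId);
console.log(recoverUserId(groups).toString(16)); // prints "cafe1234"
```

The mitigations Google has floated – rate-limiting, abuse detection, and noise in auction outcomes – all amount to making these individual bits unreliable or expensive enough that assembling them into a stable identifier stops being practical.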
Michael Kleber, the Google mathematician overseeing the construction of Privacy Sandbox specs, acknowledged that the sample code could be abused to create an identifier in situations where there’s no ad competition.
“This is indeed the natural fingerprinting concern associated with the one-bit leak, which FLEDGE will need to protect against in some way,” he said, suggesting technical interventions and abuse detection as possible paths to resolve the privacy leak. “We certainly need some approach to this problem before the removal of third-party cookies in Chrome.”
In an email to The Register, Dr Lukasz Olejnik, independent privacy researcher and consultant, emphasized the need to ensure that the Privacy Sandbox does not leak from the outset.
“Among the goals of Privacy Sandbox is to make advertising more civilized, specifically privacy-proofed,” said Olejnik. “To achieve this overarching goal, plenty of changes must be introduced. But it will all be futile if the candidates for replacements are not having an adequate privacy level on their own. This is why the APIs would need to be really well designed, and specifications crystal-clear, considering broad privacy threat models.”
The problem, as Olejnik sees it, is that the privacy characteristics of the proposed technology are not yet well understood. And given the timeline for this technology and the revenue that depends on it – global digital ad spend this year is expected to reach $455bn – he argues data privacy leaks need to be identified in advance so they can be adequately dealt with.
“This particular risk – the so-called one-bit leak issue – has been known since 2020,” Olejnik said. “I expect that a solution to this problem will be found in the fusion of API design (i.e. Turtledove and Fenced Frames), implementation level, and the auditing manner – active search for potential misuses.
“But this particular issue indeed looks serious – a new and claimed privacy-friendly solution should not be introduced while being aware of such a design issue. In this sense, it’s a show-stopper, but one that is hopefully possible to duly address in time.” ®