Apple has launched the latest version of its operating system, iOS 14.5, which includes the much-anticipated App Tracking Transparency feature, bolstering the tech giant’s privacy credentials.
But iOS 14.5 also introduced support for the new Apple AirTag, which risks doing the opposite.
For the uninitiated, an AirTag is a small device (similar to a Tile) that can be attached to personal items such as keys, wallets or luggage. The tag periodically sends messages that can be used to track its location, letting you find any lost or missing items with the help of an app.
While clearly useful, AirTags can also potentially be misused. Concerns have been raised they might facilitate stalking, for example.
And there’s also a more fundamental problem with this technology. Its euphemistic description as a “crowdsourced” way to recover lost items belies the reality of how these items are tracked.
What you won’t find highlighted in the polished marketing statements is the fact that AirTags can only work by tapping into an Apple-operated surveillance network in which millions of us are unwitting participants.
So, how exactly do AirTags work?
AirTags are small, circular metal discs, slightly larger and thicker than an Australian one-dollar coin. Once a tag is paired with your Apple ID, its location is shown in the Find My app whenever location data are available.
Each tag transmits a unique identifier using Bluetooth. Any compatible Apple device within range (up to 100 metres in ideal conditions) will then relay that identifier to Apple’s servers, along with its own location data. The tag’s owner can then log on to the Find My app and access those location details, and bingo – you now have a pretty good idea of where your lost bag is.
The AirTags themselves have no positional location capability – they do not contain GPS technology. Rather, they “ping” the nearest Bluetooth-enabled device and let that device’s location data do the rest.
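The crowdsourced relay described above can be sketched as a toy model. The names and structure here are illustrative only – Apple’s actual Find My network uses rotating, end-to-end encrypted identifiers rather than a plain tag ID – but the flow of information is the same: tags broadcast, bystanders’ phones report, the owner queries.

```python
# Toy model of a crowdsourced finding network: a tag broadcasts an
# identifier over Bluetooth, nearby phones relay (tag_id, phone_location)
# to a central server, and the owner queries the server for sightings.
# Illustrative sketch only -- the real Find My network encrypts reports
# so that Apple itself cannot read the locations.

class FindingServer:
    def __init__(self):
        self.sightings = {}  # tag_id -> list of (lat, lon) reports

    def report(self, tag_id, phone_location):
        """A bystander's phone heard a tag and uploads its own location."""
        self.sightings.setdefault(tag_id, []).append(phone_location)

    def locate(self, tag_id):
        """The owner asks for the most recently reported location."""
        reports = self.sightings.get(tag_id)
        return reports[-1] if reports else None

server = FindingServer()

# A stranger's phone passes within Bluetooth range of a lost tag
# and relays its own GPS fix; a second phone passes by later.
server.report("tag-1234", (51.5074, -0.1278))
server.report("tag-1234", (51.5080, -0.1290))

# The owner checks a Find My-style app for the latest sighting.
print(server.locate("tag-1234"))
```

The key point the sketch makes concrete: the tag contributes nothing but its identifier – every location fix comes from someone else’s phone.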
Besides Bluetooth, AirTags also use a relatively new technology called Ultra Wideband. This feature is supported only by later Apple devices such as iPhone 11 and 12, and allows for much more precise location tracking.
This precision extends to directional finding – now your phone can literally point you towards the missing tag.
While the actual nature of the data transmitted is not too concerning (tag ID and location), what makes it worrying is the sheer scale and number of devices involved. By using an AirTag, you are effectively availing yourself of a global monitoring network containing millions and millions of devices.
Everyone’s iPhone (assuming Bluetooth is enabled) is listening for AirTags. When it “hears” one, it uploads details of that tag’s identifier and the phone’s location to Apple’s servers.
Besides any privacy concerns, this is also likely to use small amounts of your data allowance. That’s probably fine most of the time, but if you are travelling internationally you might be hit with unexpected charges if you’ve forgotten to disable data roaming.
Apple says it has implemented a range of safeguards to detect and prevent attempts to use AirTags for stalking, including an alert triggered when an AirTag seems to be accompanying someone who is not its owner. The alert can appear on the victim’s phone (if they use an iPhone), and the tag itself can also sound an audible alarm. But these measures are relatively easy to circumvent.
One experiment showed a tag planted on a person would not trigger any of the safeguards, provided it reconnected to the stalker’s device regularly enough. This could happen simply because the victim returns home, or comes within range of their stalker, inside the three-day window.
More concerningly, the alerts can be turned off – which a victim of domestic violence may be coerced into doing by their aggressor. What’s more, as AirTags and similar devices become more common, we will inevitably encounter more warnings of tags appearing around us. Just like other commonly encountered alerts, many users will tire of seeing them and dismiss the prompts.
It is also presumably only a matter of time until these devices are hacked and put to other nefarious purposes.
Apple isn’t the only technology company drawing unwitting users into large networks. Amazon’s Sidewalk creates a network that allows a neighbour’s doorbell to connect through your Echo device (if their wifi doesn’t extend to their front door), effectively sharing your internet connection!
All of this functionality (and the inherent privacy risk) is covered in the standard terms and conditions. That lengthy, legalese document we never read allows tech companies to hide behind the claim that we have willingly opted into all this.
Can we opt out?
A simple option to avoid your device acting as a cog in Apple’s machine is to turn off Bluetooth and location services. With Bluetooth disabled, your device won’t “see” the beacons coming from AirTags, and without location services you can’t report the proximity of the tag.
Of course, turning off this functionality means losing useful capabilities such as hands-free kits, Bluetooth speakers and satellite navigation, and of course makes it harder to find your phone if you lose it.
Ultimately, if we want to benefit from the ability to locate missing keys, wallets and luggage through AirTags, we have to accept that this is only possible through a global network of sensors – even if those sensors are our own phones.
This article was first published on the Conversation. Paul Haskell-Dowland is associate dean in computing and security at Edith Cowan University
The UK’s Competition and Markets Authority (CMA) has unveiled compliance principles aimed at curbing some of the sharper auto-renewal practices of antivirus software firms.
The move follows the watchdog baring its teeth at McAfee and Norton over the issue of automatically renewing contracts.
The CMA took exception to auto-renewal contracts for antivirus software that customers in the UK signed up for and found difficult to cancel. Refunds and clearer pricing information (including making sure consumers were aware that year two could well end up considerably costlier than the first) were the order of the day.
Today’s principles build on that work, and are aimed at helping antivirus companies toe the line where UK consumer law is concerned. They are a bit more detailed than a simple “stop being horrid.”
The focus remains on auto-renewing contracts, where a customer signs up for a fixed period and is then charged again for subsequent periods. The CMA acknowledges that such arrangements are convenient, but they risk locking consumers into agreements they no longer want, or stinging them with higher fees at renewal time.
While the principles are intended to be helpful, lurking in the background is consumer law and the threat of a potential trip to court for vendors stepping out of line.
First up comes a requirement to make sure customers are informed about auto-renewal, rather than hiding the detail in an End User Licence Agreement (EULA) or burying it in hard-to-read text through which a user must scroll.
Price claims must be “accurate” and “not mislead your customers” – so discounts should only be shown against the normal price. It must also be possible to turn off auto-renewal easily; once turned off, it must stay off; and where it is on, customers must be reminded in good time that a renewal is about to happen.
Getting a refund must be easier and customers should be able to change their mind when auto-renewal happens. If the customer has stopped using the product, safeguards are needed around auto-renewal.
The last principle could pose a few challenges – how does a vendor become aware that a customer is not using its product? The suggestion from the CMA is to check if software updates are being received rather than simply charging users year after year.
The Register contacted McAfee and Norton for their thoughts on the principles, and will update should the companies respond. ®
Just a few months after hitting unicorn status, Gorillas has raised another major round of funding from big-name investors.
German start-up Gorillas has raised nearly $1bn to expand its on-demand grocery delivery business.
The Series C funding round was led by Delivery Hero, the German food and grocery delivery giant that recently took a stake in Deliveroo.
Gorillas also received backing from existing investors including Coatue Management, DST Global and Tencent, as well as new investors G Squared, Alanda Capital, Macquarie Capital, MSA Capital and Thrive Capital.
The fresh funding comes just a few months after the company’s $290m Series B, which brought its valuation to more than $1bn.
Gorillas was founded in Berlin in 2020 by Kağan Sümer and Jörg Kattner, promising grocery deliveries in as little as 10 minutes.
It now operates more than 180 warehouses and has expanded to more than 55 cities in nine countries, including Amsterdam, London, Paris, Madrid, New York and Munich.
The company plans to use the latest funding for its next phase of development. This includes reinforcing its footprint in existing markets and investing in operations, technology and marketing.
“The size of today’s funding round by an extraordinary investment consortium underscores the tremendous market potential that lies ahead of us,” said Sümer, who is CEO of the start-up.
“With Delivery Hero, we have chosen a strong strategic support that is deeply rooted in the global delivery market, and is renowned for having unique experience in sustainably scaling a German company internationally.”
On-demand grocery delivery is a growing area in Europe that’s attracting investor attention.
The Information Commissioner’s Office is to intervene over concerns about the use of facial recognition technology on pupils queueing for lunch in school canteens in the UK.
Nine schools in North Ayrshire began taking payments for school lunches this week by scanning the faces of their pupils, according to a report in the Financial Times. More schools are expected to follow.
The ICO, an independent body set up to uphold information rights in the UK, said it would be contacting North Ayrshire council about the move and urged a “less intrusive” approach where possible.
An ICO spokesperson said organisations using facial recognition technology must comply with data protection law before, during and after its use, adding: “Data protection law provides additional protections for children, and organisations need to carefully consider the necessity and proportionality of collecting biometric data before they do so.
“Organisations should consider using a different approach if the same goal can be achieved in a less intrusive manner. We are aware of the introduction, and will be making inquiries with North Ayrshire council.”
The company supplying the technology claimed it was more Covid-secure than other systems, as it was cashless and contactless, and sped up the lunch queue, cutting the time spent on each transaction to five seconds.
Other types of biometric systems, principally fingerprint scanners, have been used in schools in the UK for years, but campaigners say the use of facial recognition technology is unnecessary.
Silkie Carlo, the director of Big Brother Watch, told the Guardian the campaign group had written to schools using facial recognition systems, setting out their concerns and urging them to stop immediately.
“No child should have to go through border-style identity checks just to get a school meal,” she said. “We are supposed to live in a democracy, not a security state.
“This is highly sensitive, personal data that children should be taught to protect, not to give away on a whim. This biometrics company has refused to disclose who else children’s personal information could be shared with and there are some red flags here for us.”
The technology is being installed in schools in the UK by a company called CRB Cunninghams. David Swanston, its managing director, told the FT: “It’s the fastest way of recognising someone at the till. In a secondary school you have around about a 25-minute period to serve potentially 1,000 pupils. So we need fast throughput at the point of sale.”
Live facial recognition – technology that scans crowds to identify faces – has been challenged by civil rights campaigners because of concerns about consent. CRB Cunninghams said the system being installed in UK schools was different: parents had to give explicit consent, and the cameras checked faces against encrypted faceprint templates stored on school servers.
A spokesperson for North Ayrshire council said its catering system contracts were coming to a natural end, allowing the introduction of new IT “which makes our service more efficient and enhances the pupil experience using innovative technology”.
They added: “Given the ongoing risks associated with Covid-19, the council is keen to have contactless identification as this provides a safer environment for both pupils and staff. Facial recognition has been assessed as the optimal solution that will meet all our requirements.”
The council said 97% of children or their parents had given consent for the new system.
A Scottish government spokesperson said that local authorities, as data controllers, had a duty to comply with general data protection regulations and that schools must by law adhere to strict guidelines on how they collect, store, record and share personal data.
Hayley Dunn, a business leadership specialist at the Association of School and College Leaders, said: “There would need to be strict privacy and data protection controls on any companies offering this technology.
“Leaders would also have legitimate concerns about the potential for cyber ransomware attacks and the importance of storing information securely, which they would need reassurances around before implementing any new technology.”