Epic Games and Apple faced off in an Oakland, California, courtroom on Monday to resolve the gaming giant’s antitrust claim that Apple’s App Store represents an illegal monopoly.
Epic last year sued Apple after being denied the ability to sell digital goods for its Fortnite game using its own payment service rather than Apple’s in-app purchase system. If Epic prevails and Apple’s inevitable appeals are rebuffed, the power of digital platform owners to dictate the terms of market participation will be significantly diminished, in America at least.
On Friday, Alex Russell, a software engineer at Google – which Epic is also suing over Play Store restrictions – personally published a rebuttal to one of the central tenets of Apple’s defense: that developers can compete with iOS apps by making web apps.
Apple has argued as much to the Australian Competition and Consumer Commission [PDF], stating that Progressive Web Apps (PWAs) represent a viable distribution alternative to the App Store. Apple CEO Tim Cook made a similar claim last year in Congressional testimony, saying that Apple is not the sole decision maker for web apps and suggesting the web offers a viable alternative distribution channel to the iOS App Store.
And Apple put forth that same argument in its opening materials for Epic v. Apple, with a slide that declares, “Epic’s Theory is Based on a False Premise.”
It depicts The Financial Times’ web app and native iOS app side by side, the almost identical designs suggesting the two are interchangeable.
Apple’s slide defending its position
In fact, web apps and native apps are distinctly different in their technical capabilities, and Russell contends Apple’s glacial pace of integrating modern web APIs into Safari and its WebKit rendering engine has left web apps, on iOS devices at least, unable to compete with native iOS apps.
“Apple’s iOS browser (Safari) and engine (WebKit) are uniquely under-powered,” he writes. “Consistent delays in the delivery of important features ensure the web can never be a credible alternative to its proprietary tools and App Store.”
Certainly among web browsers there’s limited room for competition and differentiation on iOS – Apple requires all mobile web browsers on iDevices to use its WebKit rendering engine. This makes the iOS versions of Brave, Chrome, and Edge (elsewhere built on Chromium, with the Blink rendering engine) and Firefox (elsewhere based on the Gecko rendering engine) essentially clones of Safari under the hood.
But where web browsers face a level playing field upon which no competition is allowed, web apps risk being tripped up by Apple’s indifferent groundskeeping while their native app counterparts race in paved lanes.
For Russell, performance isn’t really an issue. He concedes that all modern browsers are fast because there’s not that much more speed to be eked out after two decades of web tech rivalry.
Rather, he points to Safari’s lack of compatibility with web standards and claims it’s holding the entire web ecosystem back. Not only does Safari fail when it comes to compatibility with numerous web standards – illustrated by this Web Platform Tests graph – but Russell contends Safari’s implementation of these features is often wrong.
“In almost every area, Apple’s low-quality implementation of features WebKit already supports requires workarounds,” he writes. “Developers would not need to find and fix these issues in Firefox (Gecko) or Chrome/Edge/Brave/Samsung Internet (Blink). This adds to the expense of developing for iOS.”
Apple’s web gap can also be measured in terms of APIs. By Russell’s count, Safari has been falling further and further behind in implementing web APIs, which make specific technical features available to developers. Safari is now something like 1,000 APIs behind Chrome, double the gap measured in 2016, and 300 or so behind Firefox.
Russell allows that in some instances, Safari has outpaced Chrome, like implementing the Storage Access API as a privacy measure. However, he takes the opportunity to skewer Apple for botching the job, noting that its initial implementation created a worse tracking vector before it was repaired.
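For context, the Storage Access API lets cross-site content embedded in an iframe explicitly ask the browser for access to its cookies, rather than receiving it silently. A minimal sketch of how an embedded widget might use it, assuming it runs in response to a user click (the ensureCookieAccess wrapper is a hypothetical helper, not part of the spec):

```typescript
// Runs inside a cross-site iframe, e.g. an embedded comments widget.
// Hypothetical helper sketching the Storage Access API: the request must
// be made from a user gesture (such as a click) or browsers will deny it.
async function ensureCookieAccess(): Promise<boolean> {
  // hasStorageAccess() resolves to true if the iframe can already
  // read its first-party cookies.
  if (await document.hasStorageAccess()) {
    return true;
  }
  try {
    // requestStorageAccess() prompts the browser (and possibly the user);
    // the promise rejects if access is denied.
    await document.requestStorageAccess();
    return true;
  } catch {
    return false; // Carry on without cookies, e.g. in a logged-out state.
  }
}
```

The design makes storage access an explicit, deniable request rather than a default, which is the privacy win Russell credits Apple with, and why a buggy first implementation mattered.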
Glossing over the privacy improvements driven by Apple (and Brave and Mozilla), he enumerates various other APIs where Safari’s lack of support has hindered web apps, among them getUserMedia(), WebRTC, the Gamepad API, Audio Worklets, IndexedDB, Pointer Lock, Media Recorder, Pointer Events, Service Workers, WebM and VP8/VP9, the CSS Typed Object Model, and CSS Containment.
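In practice, gaps like these surface in code as feature detection: before shipping a capability, a web app has to probe whether the engine exposes the API at all, and fall back when it doesn’t. A minimal, illustrative sketch (the support map and fallback strategy are illustrative, not taken from Russell’s post):

```typescript
// Illustrative capability checks against a few of the APIs Russell lists.
// On iOS Safari (and therefore every iOS browser), several of these have
// historically come back false or arrived years after other engines.
const support = {
  gamepad: 'getGamepads' in navigator,                     // Gamepad API
  mediaRecorder: typeof MediaRecorder !== 'undefined',     // Media Recorder
  pointerLock: 'requestPointerLock' in Element.prototype,  // Pointer Lock
  serviceWorker: 'serviceWorker' in navigator,             // Service Workers
  audioWorklet: typeof AudioWorkletNode !== 'undefined',   // Audio Worklets
};

if (!support.mediaRecorder) {
  // Hypothetical fallback: encode audio/video in WebAssembly instead,
  // or drop the recording feature on this engine entirely. Either way,
  // it is extra work that exists only because one engine lags.
}
```

Every branch like that final one is a per-engine workaround of exactly the kind Russell says adds to the expense of developing for iOS.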
“These omissions mean web developers cannot compete with their native app counterparts on iOS in critical categories like gaming, shopping, and creative tools,” Russell argues.
There are, Russell insists, multiple crucial features available on every other operating system that Apple doesn’t support in iOS. These include Push Notifications, PWA install prompts, the Media Session API, Navigation Preloads, and maybe 20 other technologies that have the potential to enable new classes of applications on the web, and new businesses.
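Install prompts are a concrete example of the divide. Chromium-based browsers fire a beforeinstallprompt event that lets a web app offer its own install button; iOS Safari has no equivalent, so the handler below simply never runs there. A sketch, with the UI helper as a hypothetical stand-in:

```typescript
// The BeforeInstallPromptEvent type is nonstandard (Chromium-only),
// so it is held here as `any` rather than a lib.dom type.
let deferredPrompt: any = null;

window.addEventListener('beforeinstallprompt', (event: Event) => {
  event.preventDefault();   // Suppress the browser's default mini-infobar.
  deferredPrompt = event;   // Stash the event so it can be re-triggered.
  showInstallButton();      // Hypothetical UI hook: reveal an install button.
});

async function onInstallClick(): Promise<void> {
  if (!deferredPrompt) return;  // On iOS the event never fires, so this bails.
  deferredPrompt.prompt();      // Show the browser's install dialog.
  const { outcome } = await deferredPrompt.userChoice; // 'accepted' | 'dismissed'
  console.log(`Install prompt ${outcome}`);
  deferredPrompt = null;
}

function showInstallButton(): void {
  // Placeholder: a real app would unhide an "Install" button here,
  // wired to onInstallClick().
}
```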
Russell concludes that Safari/WebKit lags the competition in terms of compatibility and features, “resulting in a large and persistent gap with Apple’s native platform.”
Epic has advanced a version of this argument in its case against Apple by citing a deposition from Scott Forstall, former Apple SVP of iOS Software, in which Forstall asserts that native apps provide a better experience than web apps.
And in his opening-day trial testimony, Epic Games CEO Tim Sweeney made a similar point. “Web apps are not nearly powerful enough to run a modern 3D experience such as Fortnite,” he said.
If District Court Judge Yvonne Gonzalez Rogers – who will decide the case instead of a jury – finds merit in this claim and concludes the App Store is an unlawful monopoly, Apple’s iOS walled garden and others like it could crumble.
The Register asked Apple for comment and also inquired to /dev/null. Both responded exactly the same way. ®
Google’s decision follows concerns that law enforcement could use personal data from certain apps against people who have sought abortions in places where doing so is now illegal.
Tech giant Google has said it will soon auto-delete records of users’ visits to abortion clinics and other medical sites from their location history.
This followed the US Supreme Court’s recent decision to overturn Roe v Wade, eliminating the constitutional right to an abortion in the country.
Other medical facilities that Google mentioned in its planned location changes include counselling centres, domestic violence shelters, fertility centres, addiction treatment facilities, weight loss clinics and cosmetic surgery clinics.
The tech giant also said location history is off by default, and that tools such as auto-delete let users easily remove some or all of their location data.
Google said the location data changes will take effect “in the coming weeks”. The tech giant also shared planned data changes around its fitness apps to protect the privacy of users.
“Fitbit users who have chosen to track their menstrual cycles in the app can currently delete menstruation logs one at a time, and we will be rolling out updates that let users delete multiple logs at once,” said Google senior VP of core systems and experiences Jen Fitzpatrick in a blog post.
Fitzpatrick said the tech giant considers the “privacy and security expectations” of people using its products and that it notifies users when it complies with legal demands for information.
“We remain committed to protecting our users against improper government demands for data, and we will continue to oppose demands that are overly broad or otherwise legally objectionable,” Fitzpatrick said.
Following the decision to overturn Roe v Wade, there have been concerns that law enforcement could use personal data from certain apps against people who have sought abortions in places where doing so is now illegal.
One type of app where this has been a concern is period-tracking apps. The Stardust app saw a recent surge in popularity after it claimed to implement end-to-end encryption.
Last week, I missed a real-life meeting because I hadn’t set a reminder on my smartphone, leaving someone I’d never met before alone in a café. But on the same day, I remembered the name of the actor who played Will Smith’s aunt in The Fresh Prince of Bel-Air in 1991 (Janet Hubert). Memory is weird, unpredictable and, neuroscientifically, not yet entirely understood.

When memory lapses like mine happen (which they do, a lot), it feels both easy and logical to blame the technology we’ve so recently adopted. Does having more memory in our pockets mean there’s less in our heads? Am I losing my ability to remember things – from appointments to what I was about to do next – because I expect my phone to do it for me? Before smartphones, our heads would have held a cache of phone numbers, and our memories would have contained a cognitive map, built up over time, allowing us to navigate; for smartphone users, that is no longer true.
Our brains and our smartphones form a complex web of interactions: the smartphonification of life has been rising since the mid 2000s, but was accelerated by the pandemic, as was internet use in general. Prolonged periods of stress, isolation and exhaustion – common themes since March 2020 – are well known for their impact on memory. Of those surveyed by memory researcher Catherine Loveday in 2021, 80% felt that their memories were worse than before the pandemic. We are – still – shattered, not just by Covid-19, but also by the miserable national and global news cycle. Many of us self-soothe with distractions like social media. Meanwhile, endless scrolling can, at times, create its own distress, and phone notifications, and the self-interruptions to check for them, also seem to affect what, how and whether we remember.
So what happens when we outsource part of our memory to an external device? Does it enable us to squeeze more and more out of life, because we aren’t as reliant on our fallible brains to cue things up for us? Are we so reliant on smartphones that they will ultimately change how our memories work (sometimes called digital amnesia)? Or do we just occasionally miss stuff when we don’t remember the reminders?
Neuroscientists are divided. Chris Bird is professor of cognitive neuroscience in the School of Psychology at the University of Sussex, where he runs the Episodic Memory Group. “We have always offloaded things into external devices, like writing down notes, and that’s enabled us to have more complex lives,” he says. “I don’t have a problem with using external devices to augment our thought processes or memory processes. We’re doing it more, but that frees up time to concentrate, focus on and remember other things.” He thinks that the kind of things we use our phones to remember are, for most human brains, difficult to remember. “I take a photo of my parking ticket so I know when it runs out, because it’s an arbitrary thing to remember. Our brains aren’t evolved to remember highly specific, one-off things. Before we had devices, you would have to make quite an effort to remember the time you needed to be back at your car.”
Professor Oliver Hardt, who studies the neurobiology of memory and forgetting at McGill University in Montreal, is much more cautious. “Once you stop using your memory it will get worse, which makes you use your devices even more,” he says. “We use them for everything. If you go to a website for a recipe, you press a button and it sends the ingredient list to your smartphone. It’s very convenient, but convenience has a price. It’s good for you to do certain things in your head.”
Hardt is not keen on our reliance on GPS. “We can predict that prolonged use of GPS likely will reduce grey matter density in the hippocampus. Reduced grey matter density in this brain area goes along with a variety of symptoms, such as increased risk for depression and other psychopathologies, but also certain forms of dementia. GPS-based navigational systems don’t require you to form a complex geographic map. Instead, they just tell you orientations, like ‘Turn left at next light.’ These are very simple behavioural responses (here: turn left) at a certain stimulus (here: traffic light). These kinds of spatial behaviours do not engage the hippocampus very much, unlike those spatial strategies that require the knowledge of a geographic map, in which you can locate any point, coming from any direction and which requires [cognitively] complex computations. When exploring the spatial capacities of people who have been using GPS for a very long time, they show impairments in spatial memory abilities that require the hippocampus. Map reading is hard and that’s why we give it away to devices so easily. But hard things are good for you, because they engage cognitive processes and brain structures that have other effects on your general cognitive functioning.”
Hardt doesn’t have data yet, but believes, “the cost of this might be an enormous increase in dementia. The less you use that mind of yours, the less you use the systems that are responsible for complicated things like episodic memories, or cognitive flexibility, the more likely it is to develop dementia. There are studies showing that, for example, it is really hard to get dementia when you are a university professor, and the reason is not that these people are smarter – it’s that until old age, they are habitually engaged in tasks that are very mentally demanding.” (Other scientists disagree – Daniel Schacter, a Harvard psychologist who wrote the seminal The Seven Sins of Memory: How the Mind Forgets and Remembers, thinks effects from things like GPS are “task specific” only.)
While smartphones can obviously open up whole new vistas of knowledge, they can also drag us away from the present moment: a beautiful day goes unexperienced because you’re head down, WhatsApping your way through a meal or a conversation. When we’re not attending to an experience, we are less likely to recall it properly, and fewer recalled experiences could even limit our capacity to have new ideas and be creative. As the renowned neuroscientist and memory researcher Wendy Suzuki recently put it on the Huberman Lab neuroscience podcast, “If we can’t remember what we’ve done, the information we’ve learned and the events of our lives, it changes us… [The part of the brain which remembers] really defines our personal histories. It defines who we are.”
Catherine Price, science writer and author of How to Break Up With Your Phone, concurs. “What we pay attention to in the moment adds up to our life,” she says. “Our brains cannot multitask. We think we can. But any moment where multitasking seems successful, it’s because one of those tasks was not cognitively demanding, like you can fold laundry and listen to the radio. If you’re paying attention to your phone, you’re not paying attention to anything else. That might seem like a throwaway observation, but it’s actually deeply profound. Because you will only remember the things you pay attention to. If you’re not paying attention, you’re literally not going to have a memory of it to remember.”
The Cambridge neuroscientist Barbara Sahakian has evidence of this, too. “In an experiment in 2010, three different groups had to complete a reading task,” she says. “One group got instant messaging before it started, one got instant messaging during the task, and one got no instant messaging, and then there was a comprehension test. What they found was that the people getting instant messages couldn’t remember what they just read.”
Price is much more worried about what being perpetually distracted by our phones – termed “continual partial attention” by the tech expert Linda Stone – does to our memories than using their simpler functions. “I’m not getting distracted by my address book,” she says. And she doesn’t believe smartphones free us up to do more. “Let’s be real with ourselves: how many of us are using the time afforded us by our banking app to write poetry? We just passively consume crap on Instagram.” Price is from Philadelphia. “What would have happened if Benjamin Franklin had had Twitter? Would he have been on Twitter all the time? Would he have made his inventions and breakthroughs?
“I became really interested in whether the constant distractions caused by our devices might be impacting our ability to actually not just accumulate memories to begin with, but transfer them into long-term storage in a way that might impede our ability to think deep and interesting thoughts,” she says. “One of the things that impedes our brain’s ability to transfer memories from short- to long-term storage is distraction. If you get distracted in the middle of it” – by a notification, or by the overwhelming urge to pick up your phone – “you’re not actually going to have the physical changes take place that are required to store that memory.”
It’s impossible to know for sure, because no one measured our level of intellectual creativity before smartphones took off, but Price thinks smartphone over-use could be harming our ability to be insightful. “An insight is being able to connect two disparate things in your mind. But in order to have an insight and be creative, you have to have a lot of raw material in your brain, like you couldn’t cook a recipe if you didn’t have any ingredients: you can’t have an insight if you don’t have the material in your brain, which really is long term memories.” (Her theory was backed by the 92-year-old Nobel prize-winning neuroscientist and biochemist Eric Kandel, who has studied how distraction affects memory – Price bumped into him on a train and grilled him about her idea. “I’ve got a selfie of me with a giant grin and Eric looking a bit confused.”) Psychology professor Larry Rosen, co-author (with neuroscientist Adam Gazzaley) of The Distracted Mind: Ancient Brains in a High-Tech World, also agrees: “Constant distractions make it difficult to encode information in memory.”
Smartphones are, of course, made to hijack our attention. “The apps that make money by taking our attention are designed to interrupt us,” says Price. “I think of notifications as interruptions because that’s what they’re doing.”
For Oliver Hardt, phones exploit our biology. “A human is a very vulnerable animal and the only reason we are not extinct is that we have a superior brain: to avoid predation and find food, we have had to be really good at being attentive to our environment. Our attention can shift rapidly around and when it does, everything else that was being attended to stops, which is why we can’t multitask. When we focus on something, it’s a survival mechanism: you’re in the savannah or the jungle and you hear a branch cracking, you give your total attention to that – which is useful, it causes a short stress reaction, a slight arousal, and activates the sympathetic nervous system. It optimises your cognitive abilities and sets the body up for fighting or flighting.” But it’s much less useful now. “Now, 30,000 years later, we’re here with that exact brain” and every phone notification we hear is a twig snapping in the forest, “simulating what was important to what we were: a frightened little animal.”
Smartphone use can even change the brain, according to the ongoing ABCD study, which is tracking over 10,000 American children through to adulthood. “It started by examining 10-year-olds both with paper and pencil measures and an MRI, and one of their most interesting early results was that there was a relationship between tech use and cortical thinning,” says Larry Rosen, who studies social media, technology and the brain. “Young children who use more tech had a thinner cortex, which is supposed to happen at an older age.” Cortical thinning is a normal part of growing up and then ageing, and in much later life can be associated with degenerative diseases such as Parkinson’s and Alzheimer’s, as well as migraines.
Obviously, the smartphone genie is out of the bottle and has run over the hills and far away. We need our smartphones to access offices, attend events, pay for travel and to function as tickets, passes and credit cards, as well as for emails, calls and messages. It’s very hard not to have one. If we’re worried about what they – or the apps on them – might be doing to our memories, what should we do?
Rosen discusses a number of tactics in his book. “My favourites are tech breaks,” he says, “where you start by doing whatever on your devices for one minute and then set an alarm for 15 minutes’ time. Silence your phone and place it upside down, but within your view as a stimulus to tell your brain that you will have another one-minute tech break after the 15-minute alarm. Continue until you adapt to 15 minutes focus time and then increase to 20. If you can get to 60 minutes of focus time with short tech breaks before and after, that’s a success.”
“If you think your memory and focus have got worse and you’re blaming things like your age, your job, or your kids, that might be true, but it’s also very likely due to the way you’re interacting with your devices,” says Price, who founded Screen/Life Balance to help people manage their phone use. As a science writer, she’s “very much into randomised controlled trials, but with phones, it’s actually more of a qualitative question about personally how it’s impacting you. And it’s really easy to do your own experiment and see if it makes a difference. It’s great to have scientific evidence. But we can also intuitively know: if you practice keeping your phone away more and you notice that you feel calmer and you’re remembering more, then you’ve answered your own question.”
China’s efforts to end its reliance on Microsoft Windows got a boost with the launch of the openKylin project.
The initiative aims to accelerate development of the country’s home-grown Kylin Linux distro by opening the project up to a broader community of developers, colleges, and universities to contribute code.
Launched in 2001, Kylin was based on a FreeBSD kernel and was intended for use in government and military offices, where Chinese authorities have repeatedly attempted to eliminate foreign operating systems.
In 2010, the operating system made the switch to the Linux kernel, and in 2014 an Ubuntu-based version of the OS was introduced after Canonical reached an agreement with Chinese authorities to develop the software.
The openKylin project appears to be the latest phase of that project, and is focused on version planning, platform development, and establishing a community charter. To date, the project has garnered support from nearly two dozen Chinese firms and institutions, including China’s Advanced Operating System Innovation Center.
These industry partners will contribute to several special interest groups to improve various aspects of the operating system over time. Examples include optimizations for the latest generation of Intel and AMD processors, where available; support for emerging RISC-V CPUs; development of an x86-to-RISC-V translation layer; and improvements to the Ubuntu Kylin User Interface (UKUI) window manager for tablet and convertible devices.
China’s love-hate relationship with Microsoft
China’s efforts to rid itself of Redmond are by no means new. As far back as 2000, Chinese authorities ordered government offices to remove Windows in favor of Red Flag Linux.
However, in the case of Red Flag Linux, those efforts ultimately went nowhere after the project failed to catch on. The org was ultimately dissolved, and the team terminated in 2014. Despite its collapse, the project appears to have been rebooted, with a release slated to launch later this year.
This is a story that would repeat on a regular cadence, fueled by periodic spats between Uncle Sam and Beijing.
It’s safe to say the Chinese government has something of a love-hate relationship with Redmond. In 2013, Chinese authorities urged Microsoft to extend support for Windows XP, on which the country still relied heavily.
However, a year later, the Chinese government banned Windows 8 in much of the public sector, just months after Microsoft ended support for Windows XP.
Microsoft still controlled roughly 85 percent of China’s desktop operating system market as of June 2022, according to Statcounter.
It doesn’t appear those efforts bought Microsoft’s American partners much goodwill either: this spring, Chinese authorities directed government agencies to throw out all foreign-made personal computers. ®