David Eagleman, 50, is an American neuroscientist, bestselling author and presenter of the BBC series The Brain, as well as co-founder and chief executive officer of Neosensory, which develops devices for sensory substitution. His area of speciality is brain plasticity, and that is the subject of his new book, Livewired, which examines how experience refashions the brain, and shows that it is a much more adaptable organ than previously thought.
For the past half-century or more the brain has been spoken of in terms of a computer. What are the biggest flaws with that particular model? It’s a very seductive comparison. But in fact, what we’re looking at is three pounds of material in our skulls that is essentially a very alien kind of material to us. It doesn’t write down memories, the way we think of a computer doing it. And it is capable of figuring out its own culture and identity and making leaps into the unknown. I’m here in Silicon Valley. Everything we talk about is hardware and software. But what’s happening in the brain is what I call livewire, where you have 86bn neurons, each with 10,000 connections, and they are constantly reconfiguring every second of your life. Even by the time you get to the end of this paragraph, you’ll be a slightly different person than you were at the beginning.
In what way does the working of the brain resemble drug dealers in Albuquerque? It’s that the brain can accomplish remarkable things without any top-down control. If a child has half their brain removed in surgery, the functions of the brain will rewire themselves on to the remaining real estate. And so I use this example of drug dealers to point out that if suddenly in Albuquerque, where I happened to grow up, there was a terrific earthquake, and half the territory was lost, the drug dealers would rearrange themselves to control the remaining territory. It’s because each one has competition with his neighbours and they fight over whatever territory exists, as opposed to a top-down council meeting where the territory is distributed. And that’s really the way to understand the brain. It’s made up of billions of neurons, each of which is competing for its own territory.
You use this colonial image a lot in the book, a sense of the processes and struggles of evolution being fought out within the brain itself. That’s exactly right. And I think this is a point of view that’s not common in neuroscience. Usually, when we look in a neuroscience textbook, we say here are the areas of the brain and everything looks like it’s getting along just fine. It belongs exactly where it is. But the argument I make in the book is, the only reason it looks that way is because the springs are all wound tight. And the competition for each neuron – each cell in the brain to stay alive against its neighbours – is a constantly waged war. This is why when something changes in the brain, for example, if a person goes blind, or loses an arm or something, you see these massive rearrangements that happen very rapidly in the brain. It’s just as the French lost their territory in North America because the British were sending more people over.
One of the great mysteries of the brain is the purpose of dreams. And you propose a kind of defensive theory about how the brain responds to darkness. One of the big surprises of neuroscience was to understand how rapidly these takeovers can happen. If you blindfold somebody for an hour, you can start to see changes where touch and hearing will start taking over the visual parts of the brain. So what I realised is, because the planet rotates into darkness, the visual system alone is at a disadvantage, which is to say, you can still smell and hear and touch and taste in the dark, but you can’t see any more. I realised this puts the visual system in danger of getting taken over every night. And dreams are the brain’s way of defending that territory. About every 90 minutes a great deal of random activity is smashed into the visual system. And because that’s our visual system, we experience it as a dream, we experience it visually. Evolutionarily, this is our way of defending ourselves against visual system takeover when the planet moves into darkness.
Another mystery is consciousness. Do you think we are close to understanding what consciousness is and how it’s created? There’s a great deal of debate about how to define consciousness, but we are essentially talking about the thing that flickers to life when you wake up in the morning. But as far as understanding why it happens, I don’t know that we’re much closer than we’ve ever been. It’s different from other scientific conundrums in that what we’re asking is, how do you take physical pieces and parts and translate that into private, subjective experience, like the redness of red, or the pain of pain or the smell of cinnamon? And so not only do we not have a theory, but we don’t really know what such a theory would look like that would explain our experience in physical or mathematical terms.
You predict that in the future we’ll be able to glean the details of a person’s life from their brains. What would that mean in terms of personal privacy and liberty? Oh, yeah, it’s going to be a brave new world. Maybe in 100 years, maybe 500, but it’ll certainly happen. Because what we’re looking at is a physical system that gets changed and adjusted based on your experiences. What’s going on with the brain is the most complex system we’ve ever come across in our universe but fundamentally it’s physical pieces and parts and, as our computational capacities are becoming so extraordinary now, it’s just a countdown until we get there. Do we get to keep our inner thoughts private? Almost certainly we will. You can’t stick somebody in a scanner and try to ask them particular kinds of questions. But again, this will happen after our lifetime, so it’s something for the next generations to struggle with.
Do you think in the future that we’ll be able to communicate just by thinking? Communication is a multi-step process. And so in answering your questions, I have many, many thoughts. And I’m getting it down to something that I can say that will communicate clearly what I intend. But if you were to just read my thoughts and say, “OK, give me the answer,” it would be a jumble of half-sentences and words and some random thought, like, Oh, my coffee is spilling. It’s like you wouldn’t want to read somebody’s book that hasn’t been polished by them over many iterations, but instead is burped out of their brain.
What are your views on Elon Musk’s Neuralink enterprise, which is developing implantable brain-machine interfaces? There’s nothing new about it insofar as neuroscientists have been putting electrodes in people’s brains for at least 60 years now. The advance is in his technology, which is making the electrodes denser and also wireless, although even that part’s not new. I think it will be very useful in certain disease states, for example, epilepsy and depression, to be able to put electrodes directly in there and monitor and put activity in. But the mythology of Neuralink is that this is something we can all use to interface faster with our cellphones. I’d certainly like to text 50% faster, but am I going to get an open-head surgery? No, because there’s an expression in neurosurgery: when the air hits your brain, it’s never the same.
You didn’t start out academically in neuroscience. What led you there? I majored in British and American literature. And that was my first love. But I got hooked on neuroscience because I took a number of philosophy courses. I found that we’d constantly get stuck in some philosophical conundrum. We’d spin ourselves into a quagmire and not be able to get out. And I thought, Wow, if we could understand the perceptual machinery by which we view the world, maybe we’d have a shot at answering some of these questions and actually making progress. When I finally discovered neuroscience, I read every book in the college library on the brain – there weren’t that many at the time – and I just never looked back.
How can we maximise our brain power, and what do you do to switch off? There’s this myth that we only use 10% of our brain that, of course, is not true. We’re using 100% of our brain all the time. But the way information can be digested and fed to the brain can be very different. I think the next generation is going to be much smarter than we are. I have two small kids, and any time they want to know something, they ask Alexa or Google Home, and they get the answer right in the context of their curiosity. This is a big deal, because the brain is most flexible when it is curious about something and gets the answer. Regarding switching off, I never take any downtime and I don’t want to. I have a very clear sense of time pressure to do the next things. I hope I don’t die young, but I certainly act as though that is a possibility. One always has to be prepared to say goodbye, so I’m just trying to get everything done before that time.
There were complaints of ‘a large number of hurdles’ to unsubscribe from Amazon Prime such as complicated menus, skewed wording, confusing choices and warnings.
Amazon has committed to making it easier for users to cancel their Prime subscription to comply with EU rules.
The tech giant will now let consumers in the EU and EEA unsubscribe from Amazon Prime with just two clicks, using a prominent cancel button.
This follows a dialogue with the European Commission and national consumer protection authorities. Complaints had been issued to the Commission by the European Consumer Organisation, the Norwegian Consumer Council and the Transatlantic Consumer Dialogue.
These consumer authorities noted “a large number of hurdles” to unsubscribe from Amazon’s service, such as complicated navigation menus, skewed wording, confusing choices and repeated nudging.
Amazon made initial changes last year, labelling the cancel button more clearly and shortening the explanatory text. This text has now been reduced further so consumers don’t get distracted by warnings and deterred from cancelling.
“Consumers must be able to exercise their rights without any pressure from platforms,” said EU commissioner for justice Didier Reynders.
“Opting for an online subscription can be very handy for consumers as it is often a very straightforward process, but the reverse action of unsubscribing should be just as easy. One thing is clear: manipulative design or ‘dark patterns’ must be banned.”
Amazon has committed to implementing the new changes on all its EU websites and for all devices. The tech giant will be monitored by the European Commission and national authorities to ensure it complies with EU consumer law.
“Customer transparency and trust are top priorities for us,” an Amazon spokesperson said.
“By design we make it clear and simple for customers to both sign up for or cancel their Prime membership. We continually listen to feedback and look for ways to improve the customer experience, as we are doing here following constructive dialogue with the European Commission.”
Amazon has had a number of dealings with the European Commission over the years regarding its business practices. The tech giant was hit with a Statement of Objections in 2020 based on its use of marketplace seller data.
In an earlier state aid case, Luxembourg was ordered to recoup €250m in back taxes from Amazon. However, Amazon won its appeal against this ruling last year, as the EU’s General Court said the European Commission didn’t prove to the “requisite legal standard” that Amazon received favourable treatment from tax authorities.
Why read books, in this day and age? “Haven’t we all secretly sort of come to an agreement, in the last year or two or three, that novels belonged to the age of newspapers and are going the way of newspapers, only faster?” wrote Jonathan Franzen, tongue firmly in cheek, in a 2010 essay. The comment feels trenchant a decade later, in an era marked by a saturation of streaming platforms, short-form video, podcasts and screen adaptations of said podcasts.
The proportion of non-readers in Australia has grown in recent years: results of the 2021 National Reading Survey found that 25% of people reported not reading a single book in the previous year – up from 8% in a 2017 survey.
Any bibliophile can easily rattle off a list of reasons for reading. Books enlighten and challenge us, they transport us to different worlds, they reflect essential truths about the human condition.
“People who read well and read a lot learn more, pick up more general knowledge … and can then be better critical consumers of what they read,” says Prof Pamela Snow, co-director of the Science of Language and Reading lab at La Trobe University.
So, within our busy lives, how do we better find the time for books? How can we get more out of the reading experience?
We commonly interact with texts in different modes. In skimming through an article, taking in a few lines – a headline and subheadings, for example – we might gain a general but shallow understanding of its meaning. We also scan texts for specific numbers, names, or ideas – a quantity in a recipe, say.
Then there’s deep reading, what the scholars Dr Maryanne Wolf and Dr Mirit Barzillai define as “the array of sophisticated processes that propel comprehension and that include inferential and deductive reasoning, analogical skills, critical analysis, reflection, and insight. The expert reader needs milliseconds to execute these processes; the young brain needs years to develop them.”
Reading on screens has turned us into adept text skimmers. An influential 2005 study that analysed how reading behaviour had changed over the previous decade – coinciding with the global rise of the internet – found that online reading was characterised by “more time spent on browsing and scanning, keyword spotting, [and] one-time reading … while less time is spent on in-depth reading, and concentrated reading”.
“Readers must engage in an active construction of meaning, in which they grapple with the text and apply their earlier knowledge as they question, analyse, and probe,” Wolf and Barzillai have suggested. One technique for in-depth reading of narrative texts is RIDA: to Read, Imagine the scene, Describe it to yourself, and Add more mental detail by noting powerful imagery or salient passages.
Physical books, rather than devices like smartphones, tend to support more focused reading, says the linguist Prof Naomi Baron, though she says the choice of medium is ultimately a matter of personal preference.
Screens themselves are not inherently detrimental to our ability to focus, says the head of the visual and cognitive neuroscience laboratory at the University of Melbourne, Prof Trichur Vidyasagar.
“People often have the belief, particularly concerned parents, that if you spend too much time on screen devices your concentration may get poorer. That’s not necessarily true,” he says. “If used correctly and not at the cost of other useful activities, they can greatly benefit learning.”
The key is the internet’s boundless potential for distraction. “When you use the screen, there are so many hyperlinks, so many sites, stories, and rabbit holes to go into,” Vidyasagar says. The temptation to multitask – “an illusory myth,” he says – can be hard to resist. “If you think you’re multitasking, what you’re actually doing is switching between two tasks at a rapid rate, and your performance in both goes down.”
“When you read a [physical] book it’s quite different – you can’t get distracted as easily.”
Research in university students has found that comprehension is generally higher for print reading. “There is something about reading digitally that seemingly increases the speed at which students move through the text and this processing time translates into reduced comprehension,” one study found. “The findings are especially true when you’re talking about longer materials,” Baron says, adding as a caveat that research tends to focus on academic rather than leisure reading.
Results seem to differ slightly for dedicated e-reader devices. One study, in which participants read a 28-page mystery story by Elizabeth George either in print or on a Kindle, found no differences in most standard comprehension measures. The print readers, however, were better at reconstructing the plot and chronological aspects of the story – potentially because “the physical substrate of paper provides material placeholders” for events within the story.
Rediscovering joy and meaning
Dr Judith Seaboyer, formerly a senior lecturer in literary studies at the University of Queensland, who retired last May, recently went through a fiction dry spell. “There’s so much good stuff to listen to [on the radio], so much good journalism out there to read, and I was finding that I wasn’t reading novels any more.”
“As somebody … who’s done a PhD in contemporary literary fiction, and taught it for over 20 years – you think I’d know [reading books] is worth doing.”
What broke Seaboyer out of her slump was reading new work by an author she loves – Ali Smith’s Companion Piece. Synthesising ideas and making comparisons across multiple texts is also a known strategy for deepening reading comprehension, so some might find it helpful to dig into multiple books by the same author.
Seaboyer’s advice is to read with curiosity and to carefully consider an author’s choices, which can lead to a deeper understanding of language, characters and plot. “Jot things down, annotate your book, write things in the margin,” she says. “Some publishers are putting out reading guides now – that’s often quite useful.”
Nabokov believed that “One cannot read a book: one can only reread it”. For him, revisiting books – like the process of regarding a painting – meant the mind first “takes in the whole picture and then can enjoy its details”.
“You [might] remember that you really loved reading Austen,” Seaboyer says. “It’s interesting to be thinking as you read … now that I’m older and wiser, am I seeing any of this any differently than I did when I was 18?”
“There are ways to be kind to yourself, to allow yourself the opportunity not to understand something the first time through, or to say … maybe there’s a different book I should read first,” Baron says. “It’s like reading James Joyce: if you want to start with Ulysses, good luck. If you start with A Portrait of the Artist as a Young Man, you’ll have a better shot at working your way in.”
If reading solely for pleasure, abandoning books that are not bringing enjoyment could, in fact, increase reading time. Of frequent readers surveyed in 2021 – those who consumed at least one book a month – 54% reported not finishing a book if they disliked it. As a result, they “move[d] on more quickly to the next book for greater enjoyment … and have fewer and shorter gaps between books”.
For those wanting to read more – for relaxation or self-improvement – Baron suggests committing to short but regular periods of reading, similar to time set aside for exercise or meditation.
The speed question
Some people are naturally fast readers – the celebrated academic Harold Bloom claimed to be able to read 1,000 pages an hour in his prime. Most adults, according to a 2019 analysis, read English nonfiction silently at a rate of between 175 and 300 words a minute, and fiction at a rate of 200 to 320.
While speed reading techniques or apps may seem alluring for the time poor, they’re unlikely to work without compromising understanding.
“Fast readers are not necessarily better at reading comprehension,” Vidyasagar says.
There are no shortcuts to reading faster. Becoming a better reader requires persistence and “dealing with the frustration at not seeing overnight results”, Snow says. “It’s like any skill – learning a musical instrument, learning to drive a car.”
A 2016 review of the science of reading found that reading can be improved in the same way all other skills are developed: through practice. “The way to maintain high comprehension and get through text faster is to practise reading and to become a more skilled language user.”
“If two goals of reading might be to learn for the long haul, and to think – that may be part of enjoyment, that may be part of learning – then what’s the hurry?” Baron says. “Why are we feeling like the White Rabbit?”
For Seaboyer, reading a good book is akin to a meditative experience. The “wonderful, immersive process that is deep reading” reliably brings her pleasure. “Something else is picking you up, and moving your mind and body and soul into a different space so you can think about the world differently.”
Comment Liquid and immersion cooling have undergone something of a renaissance in the datacenter in recent years as components have grown ever hotter.
This trend has only accelerated over the past few months as we’ve seen a flurry of innovation and development around everything from liquid-cooled servers and components to vendors that believe the only way to cool these systems long term is to drench them in a vat of refrigerants.
Liquid and immersion cooling are by no means new technologies. They’ve had a storied history in the high-performance computing space, in systems like HPE’s Apollo, Cray, and Lenovo’s Neptune to name just a handful.
A major factor driving the adoption of this tech in traditional datacenters is a combination of more powerful chips and a general desire to cut operating costs by curbing energy consumption.
One of the challenges, however, is many of these systems employ radically different form factors than are typical in air-cooled datacenters. Some systems only require modest changes to the existing rack infrastructure, while others ditch that convention entirely in favor of massive tubs into which servers are vertically slotted.
The ways these technologies are being implemented are a mixed bag, to say the least.
Immersion cooling meets rack mount
This challenge was on full display this week at HPE Discover, where the IT goliath announced a collaboration with Intel and Iceotope to bring immersion-cooling tech to HPE’s enterprise-focused Proliant server line.
The systems can now be provisioned with Iceotope’s Ku:l immersion and liquid-cooling technology, via HPE’s channel partners with support provided by distributor Avnet Integrated. Iceotope’s designs meld elements of immersion cooling and closed-loop liquid cooling to enable this technology to be deployed in rack environments with minimal changes to the existing infrastructure.
Iceotope’s chassis-level immersion-cooling platform effectively uses the server’s case as a reservoir and then pumps coolant throughout to hotspots like the CPU, GPU, or memory. The company also offers a 3U conversion kit for adapting air-cooled servers to liquid cooling.
Both designs utilize a liquid-to-liquid heat exchanger toward the back of the chassis, where deionized water is pumped in and heat is removed from the system using an external dry cooler.
This is a stark departure from the approach used by rival immersion-cooling vendors, such as LiquidStack or Submer, which favor submerging multiple systems in a tub full of coolant — commonly a two-phase refrigerant or specialized oil.
While this approach has shown promise, and has even been deployed in Microsoft’s Azure datacenters, the unique form factors may require special consideration from building operators. Weight distribution is among operators’ primary concerns, Dell’Oro analyst Lucas Beran told The Register in an earlier interview.
Standardized reference designs in the works
The lack of a standardized form factor for deploying and implementing these technologies is one of several challenges Intel hopes to address with its $700 million Oregon liquid and immersion cooling lab.
Announced in late May, the 200,000-square-foot facility, located about 20 miles west of Portland at its Hillsboro campus in the US, will qualify, test, and demo its expansive datacenter portfolio using a variety of cooling tech. The chipmaker is also said to be working on an open reference design for an immersion-cooling system that’s being developed by Intel Taiwan.
Intel plans to bring other Taiwanese manufacturers into the fold before rolling out the reference design globally. Whether the x86 giant will be able to bring any consistency to the way immersion cooling will be deployed in datacenters going forward remains to be seen, however.
Even if Intel’s reference design never pans out, there are still other initiatives pursuing similar goals, including the Open Compute Project’s advanced cooling solutions sub project, launched in 2018.
It aims to establish an ecosystem of servers, storage, and networking gear built around common standards for direct contact, immersion, and other cooling tech.
In the meantime, the industry will carry on chilling the best ways it can. ®