David Eagleman: ‘The working of the brain resembles drug dealers in Albuquerque’

David Eagleman, 50, is an American neuroscientist, bestselling author and presenter of the BBC series The Brain, as well as co-founder and chief executive officer of Neosensory, which develops devices for sensory substitution. His area of speciality is brain plasticity, and that is the subject of his new book, Livewired, which examines how experience refashions the brain, and shows that it is a much more adaptable organ than previously thought.

For the past half-century or more the brain has been spoken of in terms of a computer. What are the biggest flaws with that particular model?
It’s a very seductive comparison. But in fact, what we’re looking at is three pounds of material in our skulls that is essentially a very alien kind of material to us. It doesn’t write down memories, the way we think of a computer doing it. And it is capable of figuring out its own culture and identity and making leaps into the unknown. I’m here in Silicon Valley. Everything we talk about is hardware and software. But what’s happening in the brain is what I call livewire, where you have 86bn neurons, each with 10,000 connections, and they are constantly reconfiguring every second of your life. Even by the time you get to the end of this paragraph, you’ll be a slightly different person than you were at the beginning.

In what way does the working of the brain resemble drug dealers in Albuquerque?
It’s that the brain can accomplish remarkable things without any top-down control. If a child has half their brain removed in surgery, the functions of the brain will rewire themselves on to the remaining real estate. And so I use this example of drug dealers to point out that if suddenly in Albuquerque, where I happened to grow up, there was a terrific earthquake, and half the territory was lost, the drug dealers would rearrange themselves to control the remaining territory. It’s because each one has competition with his neighbours and they fight over whatever territory exists, as opposed to a top-down council meeting where the territory is distributed. And that’s really the way to understand the brain. It’s made up of billions of neurons, each of which is competing for its own territory.

You use this colonial image a lot in the book, a sense of the processes and struggles of evolution being fought out within the brain itself.
That’s exactly right. And I think this is a point of view that’s not common in neuroscience. Usually, when we look in a neuroscience textbook, we say here are the areas of the brain and everything looks like it’s getting along just fine. It belongs exactly where it is. But the argument I make in the book is, the only reason it looks that way is because the springs are all wound tight. And the competition for each neuron – each cell in the brain to stay alive against its neighbours – is a constantly waged war. This is why when something changes in the brain, for example, if a person goes blind, or loses an arm or something, you see these massive rearrangements that happen very rapidly in the brain. It’s just as the French lost their territory in North America because the British were sending more people over.

Brain waves during REM sleep. Photograph: Deco/Alamy

One of the great mysteries of the brain is the purpose of dreams. And you propose a kind of defensive theory about how the brain responds to darkness.
One of the big surprises of neuroscience was to understand how rapidly these takeovers can happen. If you blindfold somebody for an hour, you can start to see changes where touch and hearing will start taking over the visual parts of the brain. So what I realised is, because the planet rotates into darkness, the visual system alone is at a disadvantage, which is to say, you can still smell and hear and touch and taste in the dark, but you can’t see any more. I realised this puts the visual system in danger of getting taken over every night. And dreams are the brain’s way of defending that territory. About every 90 minutes a great deal of random activity is smashed into the visual system. And because that’s our visual system, we experience it as a dream, we experience it visually. Evolutionarily, this is our way of defending ourselves against visual system takeover when the planet moves into darkness.

Another mystery is consciousness. Do you think we are close to understanding what consciousness is and how it’s created?
There’s a great deal of debate about how to define consciousness, but we are essentially talking about the thing that flickers to life when you wake up in the morning. But as far as understanding why it happens, I don’t know that we’re much closer than we’ve ever been. It’s different from other scientific conundrums in that what we’re asking is, how do you take physical pieces and parts and translate that into private, subjective experience, like the redness of red, or the pain of pain or the smell of cinnamon? And so not only do we not have a theory, but we don’t really know what such a theory would look like that would explain our experience in physical or mathematical terms.

You predict that in the future we’ll be able to glean the details of a person’s life from their brains. What would that mean in terms of personal privacy and liberty?
Oh, yeah, it’s going to be a brave new world. Maybe in 100 years, maybe 500, but it’ll certainly happen. Because what we’re looking at is a physical system that gets changed and adjusted based on your experiences. What’s going on with the brain is the most complex system we’ve ever come across in our universe but fundamentally it’s physical pieces and parts and, as our computational capacities are becoming so extraordinary now, it’s just a countdown until we get there. Do we get to keep our inner thoughts private? Almost certainly not: you’ll be able to stick somebody in a scanner and ask them particular kinds of questions. But again, this will happen after our lifetime, so it’s something for the next generations to struggle with.

Do you think in the future that we’ll be able to communicate just by thinking?
Communication is a multi-step process. And so in answering your questions, I have many, many thoughts. And I’m getting it down to something that I can say that will communicate clearly what I intend. But if you were to just read my thoughts and say, “OK, give me the answer,” it would be a jumble of half-sentences and words and some random thought, like, Oh, my coffee is spilling. It’s like you wouldn’t want to read somebody’s book that hasn’t been polished by them over many iterations, but instead is burped out of their brain.

Elon Musk with the surgical robot from his August 2020 Neuralink presentation. Photograph: Neuralink/AFP/Getty Images

What are your views on Elon Musk’s Neuralink enterprise, which is developing implantable brain-machine interfaces?
There’s nothing new about it insofar as neuroscientists have been putting electrodes in people’s brains for at least 60 years now. The advance is in his technology, which is making the electrodes denser and also wireless, although even that part’s not new. I think it will be very useful in certain disease states, for example, epilepsy and depression, to be able to put electrodes directly in there and monitor and put activity in. But the mythology of Neuralink is that this is something we can all use to interface faster with our cellphones. I’d certainly like to text 50% faster, but am I going to get an open-head surgery? No, because there’s an expression in neurosurgery: when the air hits your brain, it’s never the same.

You didn’t start out academically in neuroscience. What led you there?
I majored in British and American literature. And that was my first love. But I got hooked on neuroscience because I took a number of philosophy courses. I found that we’d constantly get stuck in some philosophical conundrum. We’d spin ourselves into a quagmire and not be able to get out. And I thought, Wow, if we could understand the perceptual machinery by which we view the world, maybe we’d have a shot at answering some of these questions and actually making progress. When I finally discovered neuroscience, I read every book in the college library on the brain – there weren’t that many at the time – and I just never looked back.

How can we maximise our brain power, and what do you do to switch off?
There’s this myth that we only use 10% of our brain, which, of course, is not true. We’re using 100% of our brain all the time. But the way information can be digested and fed to the brain can be very different. I think the next generation is going to be much smarter than we are. I have two small kids, and any time they want to know something, they ask Alexa or Google Home, and they get the answer right in the context of their curiosity. This is a big deal, because the brain is most flexible when it is curious about something and gets the answer. Regarding switching off, I never take any downtime and I don’t want to. I have a very clear sense of time pressure to do the next things. I hope I don’t die young, but I certainly act as though that is a possibility. One always has to be prepared to say goodbye, so I’m just trying to get everything done before that time.

Livewired by David Eagleman is published by Canongate (£9.99).

How scientists in Ireland are using technology to predict the climate

Scientists at ICHEC have used supercomputing to predict Ireland’s weather patterns for the rest of the century.

In August, the Intergovernmental Panel on Climate Change (IPCC) spelled out the intensity of the climate crisis affecting every region of the world because of human activity, and Ireland is no exception.

Scientists at the Irish Centre for High-End Computing (ICHEC), based at NUI Galway, have been using advanced technology to create climate models and simulations that indicate the impact of the climate crisis on Ireland by mid-century.

Their work has raised some concerning predictions for Ireland’s weather patterns in the coming decades, including more heatwaves, less snow, and increasingly unpredictable rainfall patterns – even by Irish standards.

Temperatures are set to increase by between 1 and 1.6 degrees Celsius relative to levels experienced between 1991 and 2000, with the east seeing the sharpest rise. Heatwaves, especially in the south-east of the country, are expected to become more frequent.

The simulations also found that the number of days Ireland experiences frost and ice will be slashed by half, as will the amount of snow that falls in winter. Rainfall will be more variable with longer dry and wet periods, and surface winds will become weaker.

‘Dramatic changes’

While the report suggests that a heating climate may be good for farming in Ireland – a significant contributor to the economy – it will also be accompanied by the rise of pests that can have potentially devastating effects on agriculture.

Reduced wind strength and unpredictable weather will have an impact on Ireland’s growing renewable energy infrastructure, which relies heavily on specific climate conditions to reach targets.

“A mean warming of two or three degrees Celsius does not seem like much, given that temperatures can vary by a lot more than that just from day to day,” said ICHEC climate scientists Dr Paul Nolan and Dr Enda O’Brien.

“However, even that amount of warming is likely to lead to widespread and even dramatic changes in ice cover – especially in the Arctic – to sea levels, and in the natural world of plants and animals.”

Ireland’s contribution

With machine learning and supercomputing, scientists are able to use historical climate data and observations to improve predictions of Earth’s future climate – and the impacts of the climate crisis.

Ireland is part of a consortium of several northern European countries that contribute to the IPCC reports by running global climate models that feed into the report’s assessment.

As part of the consortium, Nolan has conducted many centuries’ worth of global climate simulations using the EC-Earth climate model, which represents the most relevant physical processes operating in the atmosphere, oceans, land surface and sea ice.

The simulations range from historical data – so the model can be compared to real climate records – to the end of the 21st century, with the aim of providing a comprehensive picture of climate trends and what the future could hold. The ICHEC research is funded and supported by the Environmental Protection Agency, Met Éireann and the Marine Institute in Galway.

“The level of detail and consistency achieved gives confidence in these projections and allows an ever more persuasive evidence-based consensus to emerge that humans are forcing rapid climate change in well-understood ways,” Nolan and O’Brien wrote in the Irish Times this week.

“How to respond to that consensus now is a matter primarily for governments, since they can have the most impact, as well as for individuals.”

Apple’s plan to scan images will allow governments into smartphones | John Naughton

For centuries, cryptography was the exclusive preserve of the state. Then, in 1976, Whitfield Diffie and Martin Hellman came up with a practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. The following year, three MIT scholars – Ron Rivest, Adi Shamir and Leonard Adleman – came up with the RSA algorithm (named after their initials) for implementing public-key encryption. It was the beginning of public-key cryptography – at least in the public domain.
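To make the key-exchange idea concrete, here is a minimal sketch of Diffie-Hellman key agreement in Python. It is illustrative only: the prime is far too small for real use (deployed systems use 2,048-bit groups or elliptic curves), and it omits the authentication step that protects the exchange against man-in-the-middle attacks.

```python
import secrets

# Public parameters, known to everyone (toy sizes for illustration only).
p = 0xFFFFFFFB  # a small prime; real groups are 2,048 bits or more
g = 5           # a public generator

# Each party picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret
b = secrets.randbelow(p - 2) + 1   # Bob's secret
A = pow(g, a, p)                   # Alice sends A over the open channel
B = pow(g, b, p)                   # Bob sends B over the open channel

# Both sides now derive the same shared secret, which never crossed the wire.
alice_key = pow(B, a, p)           # (g^b)^a mod p
bob_key = pow(A, b, p)             # (g^a)^b mod p
assert alice_key == bob_key
```

An eavesdropper sees p, g, A and B, but recovering the secret exponents from them is the discrete-logarithm problem, which is believed to be intractable at real-world key sizes.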

From the very beginning, state authorities were not amused by this development. They were even less amused when in 1991 Phil Zimmermann created Pretty Good Privacy (PGP) software for signing, encrypting and decrypting texts, emails, files and other things. PGP raised the spectre of ordinary citizens – or at any rate the more geeky of them – being able to wrap their electronic communications in an envelope that not even the most powerful state could open. In fact, the US government was so enraged by Zimmermann’s work that it defined PGP as a munition, which meant that it was a crime to export it to Warsaw Pact countries. (The cold war was still relatively hot then.)

In the four decades since then, there’s been a conflict between the desire of citizens to have communications that are unreadable by state and other agencies and the desire of those agencies to be able to read them. The aftermath of 9/11, which gave states carte blanche to snoop on everything people did online, and the explosion in online communication via the internet and (since 2007) smartphones, has intensified the conflict. During the Clinton years, US authorities tried (and failed) to ensure that all electronic devices should have a secret backdoor, while the Snowden revelations in 2013 put pressure on internet companies to offer end-to-end encryption for their users’ communications that would make them unreadable by either security services or the tech companies themselves. The result was a kind of standoff: between tech companies facilitating unreadable communications and law enforcement and security agencies unable to access evidence to which they had a legitimate entitlement.

In August, Apple opened a chink in the industry’s armour, announcing that it would be adding new features to its iOS operating system that were designed to combat child sexual exploitation and the distribution of abuse imagery. The most controversial measure scans photos on an iPhone, compares them with a database of known child sexual abuse material (CSAM) and notifies Apple if a match is found. The technology is known as client-side scanning or CSS.
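In outline, the matching step works like the sketch below. This is a deliberate simplification on my part: Apple’s actual design uses a perceptual “NeuralHash” (so near-duplicate images also match) plus cryptographic threshold techniques, whereas this toy version, with a hypothetical `matches_blocklist` helper, reduces matching to an exact SHA-256 lookup.

```python
import hashlib

# Digests of known abuse images, shipped to the device as an opaque blocklist.
# (Placeholder value; real lists come from child-safety organisations.)
KNOWN_DIGESTS = {"0f343b0931126a20f133d67c2b018a3b" + "0" * 32}

def matches_blocklist(image_bytes: bytes) -> bool:
    """Hash the image on-device and test it against the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_DIGESTS

# Client-side flow: the photo is inspected in the clear, before any
# end-to-end encryption; only a positive match would be reported upstream.
photo = b"...raw image bytes..."
if matches_blocklist(photo):
    print("match: provider would be notified")
```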

Powerful forces in government and the tech industry are now lobbying hard for CSS to become mandatory on all smartphones. Their argument is that instead of weakening encryption or providing law enforcement with backdoor keys, CSS would enable on-device analysis of data in the clear (ie before it becomes encrypted by an app such as WhatsApp or iMessage). If targeted information were detected, its existence and, potentially, its source would be revealed to the agencies; otherwise, little or no information would leave the client device.

CSS evangelists claim that it’s a win-win proposition: providing a solution to the encryption v public safety debate by offering privacy (unimpeded end-to-end encryption) and the ability to successfully investigate serious crime. What’s not to like? Plenty, says an academic paper by some of the world’s leading computer security experts published last week.

The drive behind the CSS lobbying is that the scanning software be installed on all smartphones rather than installed covertly on the devices of suspects or by court order on those of ex-offenders. Such universal deployment would threaten the security of law-abiding citizens as well as lawbreakers. And even though CSS still allows end-to-end encryption, this is moot if the message has already been scanned for targeted content before it was dispatched. Similarly, while Apple’s implementation of the technology simply scans for images, it doesn’t take much to imagine political regimes scanning text for names, memes, political views and so on.

In reality, CSS is a technology for what in the security world is called “bulk interception”. Because it would give government agencies access to private content, it should really be treated like wiretapping and regulated accordingly. And in jurisdictions where bulk interception is already prohibited, bulk CSS should be prohibited as well.

In the longer view of the evolution of digital technology, though, CSS is just the latest step in the inexorable intrusion of surveillance devices into our lives. The trend that started with reading our emails, moved on to logging our searches and our browsing clickstreams, mining our online activity to create profiles for targeting advertising at us and using facial recognition to allow us into our offices now continues by breaching the home with “smart” devices relaying everything back to motherships in the “cloud” and, if CSS were to be sanctioned, penetrating right into our pockets, purses and handbags. That leaves only one remaining barrier: the human skull. But, rest assured, Elon Musk undoubtedly has a plan for that too.

What I’ve been reading

Wheels within wheels
I’m not an indoor cyclist but if I were, The Counterintuitive Mechanics of Peloton Addiction, a confessional blogpost by Anne Helen Petersen, might give me pause.

Get out of here
The Last Days of Intervention is a long and thoughtful essay in Foreign Affairs by Rory Stewart, one of the few British politicians who always talked sense about Afghanistan.

The insider
Blowing the Whistle on Facebook Is Just the First Step is a bracing piece by Maria Farrell in the Conversationalist about the Facebook whistleblower.

Criminals use fake AI voice to swindle UAE bank out of $35m

In brief: Authorities in the United Arab Emirates have requested the US Department of Justice’s help in probing a case involving a bank manager who was swindled into transferring $35m to criminals by someone using a fake AI-generated voice.

The employee received a call from someone purporting to be a director at the business, instructing him to move the company-owned funds. He had also previously seen emails showing that the company was planning to use the money for an acquisition and had hired a lawyer to coordinate the process. So when the sham director told him to transfer the money, he did so believing it was a legitimate request.

But it was all a scam, according to US court documents reported by Forbes. The criminals used “deep voice technology to simulate the voice of the director”, the filing said. Officials from the UAE have now asked the DoJ to hand over details of two US bank accounts into which more than $400,000 of the stolen money was deposited.

Investigators believe there are at least 17 people involved in the heist.

AI systems need to see the human perspective

Facebook has teamed up with 13 universities across nine countries to compile Ego4D, a dataset containing more than 2,200 hours of first-person video in which 700 participants were filmed performing everyday activities such as cooking or playing video games.

The antisocial network is hoping Ego4D will unlock new capabilities in augmented and virtual reality or robotics. New models trained on this data can be tested on a range of tasks, including episodic memory, predicting what happens next, coordinating hand movement to manipulate objects, and social interaction.

“Imagine your AR device displaying exactly how to hold the sticks during a drum lesson, guiding you through a recipe, helping you find your lost keys, or recalling memories as holograms that come to life in front of you,” Facebook said in a blog post.

“Next-generation AI systems will need to learn from an entirely different kind of data – videos that show the world from the center of the action, rather than the sidelines,” added Kristen Grauman, lead research scientist at Facebook.

Researchers will have access to Ego4D next month, subject to a data use agreement.

Microsoft Translator’s AI software

Microsoft Translator, language translation software powered by neural networks, can now translate over 100 different languages.

Twelve new languages and dialects were added to Microsoft Translator this week, ranging from endangered ones like Bashkir, spoken by a Kipchak Turkic ethnic group indigenous to Russia, to more common lingos like Mongolian. Microsoft Translator now supports 103 languages.

“One hundred languages is a good milestone for us to achieve our ambition for everyone to be able to communicate regardless of the language they speak,” said Xuedong Huang, Microsoft technical fellow and Azure AI chief technology officer.

Huang said the software is based on a multilingual AI model called Z-code. The system deals with text and is part of Microsoft’s wider “XYZ-code” vision: an effort to build a larger multimodal system capable of handling images, text and audio. Microsoft Translator is deployed in a range of services, including the search engine Bing, and is offered as an API on its cloud platform, Azure Cognitive Services.
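For developers, calling the hosted service looks roughly like the sketch below, which follows the public Translator REST API (v3.0); the subscription key and region are placeholders that would come from your own Azure resource.

```python
import requests

# Translator REST API v3.0 on Azure Cognitive Services.
endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": ["mn", "ba"]}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-azure-key>",           # placeholder
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",  # placeholder
    "Content-Type": "application/json",
}
body = [{"Text": "Hello, world"}]

# Requests Mongolian ("mn") and Bashkir ("ba") translations in one call.
response = requests.post(endpoint, params=params, headers=headers, json=body)
print(response.json())
```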

ShotSpotter sues Vice for defamation and wants $300m in damages

The controversial AI gunshot-detection company ShotSpotter has sued Vice, claiming its business has been unfairly tarnished by a series of articles published by the news outlet.

“On July 26, 2021, Vice launched a defamatory campaign in which it falsely accused ShotSpotter of conspiring with police to fabricate and alter evidence to frame Black men for crimes they did not commit,” the complaint said.

ShotSpotter accused the publication of portraying the company’s technology and actions inaccurately to “cultivate a ‘subversive’ brand” used to sell products advertised in its “sponsored content”.

The company made headlines when evidence used in a court trial to try to prove that a Black man shot and killed another man was withdrawn. The defense lawyer accused ShotSpotter employees of tampering with the evidence to support the police’s case. Vice allegedly made the false claim that the biz routinely used its software to tag loud sounds as gunshots to help law enforcement prosecute innocent suspects in shooting cases.

When Vice’s journalists were given proof to show that wasn’t the case, they refused to correct their factual inaccuracies, the lawsuit claimed. ShotSpotter argued the articles had ruined its reputation and now it wants Vice to cough up a whopping $300m in damages.

State of AI 2021

The annual State of AI report is out, compiled by two British tech investors, recapping this year’s trends and developments in AI.

The fourth report from Nathan Benaich, a VC at Air Street Capital, and Ian Hogarth, co-founder of music app Songkick and an angel investor, focuses on transformers, a type of machine learning architecture best known for powering giant language models like OpenAI’s GPT-3 or Google’s BERT.
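At the core of the transformer architecture is scaled dot-product attention. The NumPy sketch below shows that single operation on random data; it is a bare illustration of the mechanism, not of GPT-3 or BERT specifically, and the toy dimensions and weight matrices are arbitrary.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ v                              # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, model width 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)   # -> (4, 8)
```

Every output token is a context-dependent blend of all the others, which is what lets the same architecture handle text, images and biological sequences alike.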

Transformers aren’t just useful for generating text; they’ve proven adept in other areas too, such as computer vision and biology. Machine learning technology is also continuing to mature – developers are deploying more systems to tackle real-world problems, such as optimising energy use across national electricity grids or warehouse logistics for supermarkets.

That also applies to military applications, the pair warned. “AI researchers have traditionally seen the AI arms race as a figurative one – simulated dogfights between competing AI systems carried out in labs – but that is changing with reports of recent use of autonomous weapons by various militaries.”

The full report is available online. ®
