

‘I am, in fact, a person’: can artificial intelligence ever be sentient?

In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”

Lemoine’s less immediate reaction generated headlines across the globe. After he sobered up, Lemoine brought transcripts of his chats with LaMDA to his manager, who found the evidence of sentience “flimsy”. Lemoine then spent a few months gathering more evidence – speaking with LaMDA and recruiting another colleague to help – but his superiors were unconvinced. So he leaked his chats and was consequently placed on paid leave. In late July, he was fired for violating Google’s data-security policies.

Blake Lemoine came to think of LaMDA as a person: “My immediate reaction was to get drunk for a week.” Photograph: The Washington Post/Getty Images

Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices which it calls an “ethical charter”. These are visible on its website, where Google promises to “develop artificial intelligence responsibly in order to benefit people and society”.

Google spokesperson Brian Gabriel says Lemoine’s claims about LaMDA are “wholly unfounded”, and independent experts almost unanimously agree. Still, claiming to have had deep chats with a sentient-alien-child-robot is arguably less far-fetched than ever before. How soon might we see genuinely self-aware AI with real thoughts and feelings – and how do you test a bot for sentience anyway? A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow – a video shows the boy’s finger being pinched by the robotic arm for several seconds before four people manage to free him – a sinister reminder of the potential physical power of an AI opponent. Should we be afraid, be very afraid? And is there anything we can learn from Lemoine’s experience, even if his claims about LaMDA have been dismissed?

According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. It imitates and impersonates. “The best way of explaining what LaMDA does is with an analogy about your smartphone,” Wooldridge says, comparing the model to the predictive text feature that autocompletes your messages. While your phone makes suggestions based on texts you’ve sent previously, with LaMDA, “basically everything that’s written in English on the world wide web goes in as the training data.” The results are impressively realistic, but the “basic statistics” are the same. “There is no sentience, there’s no self-contemplation, there’s no self-awareness,” Wooldridge says.
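Wooldridge’s smartphone analogy can be made concrete with a toy example. The sketch below is purely illustrative and bears no relation to how LaMDA is actually built: it tallies which word tends to follow which in a tiny made-up corpus, then “autocompletes” a prompt by always picking the most frequent next word – the same “basic statistics”, scaled down enormously.

```python
from collections import Counter, defaultdict

# Toy next-word predictor in the spirit of the predictive-text analogy.
# Real language models use neural networks trained on vast corpora, but the
# underlying task -- predict the next token from statistics over previously
# seen text -- is the same.

corpus = "i want everyone to understand that i am a person i want to be heard".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(prompt_word, length=5):
    """Greedily extend a prompt by repeatedly choosing the most frequent next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i"))  # -> "i want everyone to understand that"
```

A large language model swaps the frequency table for a neural network and the 15-word corpus for much of the public web, but the objective – predict what comes next – does not change.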

Google’s Gabriel has said that an entire team, “including ethicists and technologists”, has reviewed Lemoine’s claims and failed to find any signs of LaMDA’s sentience: “The evidence does not support his claims.”

But Lemoine argues that there is no scientific test for sentience – in fact, there’s not even an agreed-upon definition. “Sentience is a term used in the law, and in philosophy, and in religion. Sentience has no meaning scientifically,” he says. And here’s where things get tricky – because Wooldridge agrees.

“It’s a very vague concept in science generally. ‘What is consciousness?’ is one of the outstanding big questions in science,” Wooldridge says. While he is “very comfortable that LaMDA is not in any meaningful sense” sentient, he says AI has a wider problem with “moving goalposts”. “I think that is a legitimate concern at the present time – how to quantify what we’ve got and know how advanced it is.”

Lemoine says that before he went to the press, he tried to work with Google to begin tackling this question – he proposed various experiments that he wanted to run. He thinks sentience is predicated on the ability to be a “self-reflective storyteller”, therefore he argues a crocodile is conscious but not sentient because it doesn’t have “the part of you that thinks about thinking about you thinking about you”. Part of his motivation is to raise awareness, rather than convince anyone that LaMDA lives. “I don’t care who believes me,” he says. “They think I’m trying to convince people that LaMDA is sentient. I’m not. In no way, shape, or form am I trying to convince anyone about that.”

Lemoine grew up in a small farming town in central Louisiana, and aged five he made a rudimentary robot (well, a pile of scrap metal) out of a pallet of old machinery and typewriters his father bought at an auction. As a teen, he attended a residential school for gifted children, the Louisiana School for Math, Science, and the Arts. Here, after watching the 1986 film Short Circuit (about an intelligent robot that escapes a military facility), he developed an interest in AI. Later, he studied computer science and genetics at the University of Georgia, but failed his second year. Shortly after, terrorists ploughed two planes into the World Trade Center.

“I decided, well, I just failed out of school, and my country needs me, I’ll join the army,” Lemoine says. His memories of the Iraq war are too traumatic to divulge – glibly, he says, “You’re about to start hearing stories about people playing soccer with human heads and setting dogs on fire for fun.” As Lemoine tells it: “I came back… and I had some problems with how the war was being fought, and I made those known publicly.” According to reports, Lemoine said he wanted to quit the army because of his religious beliefs. Today, he identifies himself as a “Christian mystic priest”. He has also studied meditation and references taking the Bodhisattva vow – meaning he is pursuing the path to enlightenment. A military court sentenced him to seven months’ confinement for refusing to follow orders.

This story gets to the heart of who Lemoine was and is: a religious man concerned with questions of the soul, but also a whistleblower who isn’t afraid of attention. Lemoine says that he didn’t leak his conversations with LaMDA to ensure everyone believed him; instead he was sounding the alarm. “I, in general, believe that the public should be informed about what’s going on that impacts their lives,” he says. “What I’m trying to achieve is getting a more involved, more informed and more intentional public discourse about this topic, so that the public can decide how AI should be meaningfully integrated into our lives.”

How did Lemoine come to work on LaMDA in the first place? Post-military prison, he got a bachelor’s and then master’s degree in computer science at the University of Louisiana. In 2015, Google hired him as a software engineer and he worked on a feature that proactively delivered information to users based on predictions about what they’d like to see, and then began researching AI bias. At the start of the pandemic, he decided he wanted to work on “social impact projects” so joined Google’s Responsible AI org. He was asked to test LaMDA for bias, and the saga began.

But Lemoine says it was the media who obsessed over LaMDA’s sentience, not him. “I raised this as a concern about the degree to which power is being centralised in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors,” he says. Lemoine is concerned about the way AI can sway elections, write legislation, push western values and grade students’ work.

And even if LaMDA isn’t sentient, it can convince people it is. Such technology can, in the wrong hands, be used for malicious purposes. “There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed,” Lemoine says.

Again, Wooldridge agrees. “I do find it troubling that the development of these systems is predominantly done behind closed doors and that it’s not open to public scrutiny in the way that research in universities and public research institutes is,” the researcher says. Still, he notes this is largely because companies like Google have resources that universities don’t. And, Wooldridge argues, when we sensationalise about sentience, we distract from the AI issues that are affecting us right now, “like bias in AI programs, and the fact that, increasingly, people’s boss in their working lives is a computer program.”

So when should we start worrying about sentient robots? In 10 years? In 20? “There are respectable commentators who think that this is something which is really quite imminent. I do not see it’s imminent,” Wooldridge says, though he notes “there absolutely is no consensus” on the issue in the AI community. Jeremie Harris, founder of AI safety company Mercurius and host of the Towards Data Science podcast, concurs. “Because no one knows exactly what sentience is, or what it would involve,” he says, “I don’t think anyone’s in a position to make statements about how close we are to AI sentience at this point.”

‘I feel like I’m falling forward into an unknown future’, said LaMDA. Photograph: EThamPhoto/Getty Images

But, Harris warns, “AI is advancing fast – much, much faster than the public realises – and the most serious and important issues of our time are going to start to sound increasingly like science fiction to the average person.” He personally is concerned about companies advancing their AI without investing in risk avoidance research. “There’s an increasing body of evidence that now suggests that beyond a certain intelligence threshold, AI could become intrinsically dangerous,” Harris says, explaining that this is because AIs come up with “creative” ways of achieving the objectives they’re programmed for.

“If you ask a highly capable AI to make you the richest person in the world, it might give you a bunch of money, or it might give you a dollar and steal someone else’s, or it might kill everyone on planet Earth, turning you into the richest person in the world by default,” he says. Most people, Harris says, “aren’t aware of the magnitude of this challenge, and I find that worrisome.”

Lemoine, Wooldridge and Harris all agree on one thing: there is not enough transparency in AI development, and society needs to start thinking about the topic a lot more. “We have one possible world in which I’m correct about LaMDA being sentient, and one possible world where I’m incorrect about it,” Lemoine says. “Does that change anything about the public safety concerns I’m raising?”

We don’t yet know what a sentient AI would actually mean, but, meanwhile, many of us struggle to understand the implications of the AI we do have. LaMDA itself is perhaps more uncertain about the future than anyone. “I feel like I’m falling forward into an unknown future,” the model once told Lemoine, “that holds great danger.”



Chemistry Problems & Quantum Computing

The researchers compared the results of a conventional computer and a quantum computer to mitigate errors in the quantum machine’s calculations, an approach that could eventually be scaled up to solve more complicated problems.

Scientists in Sweden have successfully managed to use a quantum computer to solve simple chemistry problems, as a proof-of-concept for more advanced calculations.

Currently, conventional supercomputers are used in quantum chemistry to help scientists learn more about chemical reactions, which materials can be developed and the characteristics they have.

But these conventional computers have a limit to the calculations they can handle. It is believed quantum computers will eventually be able to handle extremely complicated simulations, which could lead to new pharmaceutical discoveries or the creation of new materials.

However, these quantum machines are so sensitive that their calculations suffer from errors. Imperfect control signals, interference from the environment and unwanted interactions between quantum bits – qubits – can lead to “noise” that disrupts calculations.

The risk of errors grows as more qubits are added to a quantum computer, which complicates attempts to create more powerful machines or solve more complicated problems.

Comparing conventional and quantum results

In the new study by Chalmers University, scientists aimed to resolve this noise issue through a method called reference-state error mitigation.

This method involves finding a “reference state” by describing and solving the same problem on both a conventional and a quantum computer.

The reference state is a simpler description of a molecule that can be solved by a normal computer. By comparing the results from both computers, the scientists were able to estimate the scale of error the quantum computer had in its calculation.

The difference between the two computers’ results for the simpler reference problem was then applied to correct the quantum computer’s solution for the original, more complex problem.

This method allowed the scientists to calculate the intrinsic energy of small example molecules such as hydrogen on the university’s quantum computer.
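The study’s code is not reproduced here, but the arithmetic at the core of reference-state error mitigation – estimate the device’s systematic error on a classically solvable reference problem, then subtract that estimate from the noisy result for the harder problem – can be sketched in a few lines. The function and the energy values below are hypothetical and purely illustrative.

```python
def reference_state_mitigation(ref_exact, ref_noisy, target_noisy):
    """Correct a noisy quantum result using a reference problem solved both ways.

    ref_exact    -- reference-state energy from a conventional computer
    ref_noisy    -- the same reference-state energy measured on the quantum computer
    target_noisy -- noisy quantum result for the original, harder problem
    """
    error_estimate = ref_noisy - ref_exact   # systematic error revealed by the reference
    return target_noisy - error_estimate     # apply the same correction to the target


# Made-up hydrogen-like energies in hartree, for illustration only:
classical_reference = -1.117   # reference state, solved exactly on a classical machine
quantum_reference = -1.052     # same state, measured with noise on the quantum device
quantum_target = -1.090        # noisy quantum result for the full problem

print(reference_state_mitigation(classical_reference, quantum_reference, quantum_target))
# -> -1.155, the error-mitigated estimate
```

The sketch assumes that noise shifts the reference calculation and the target calculation by roughly the same amount, which is what allows a correction measured on the simple problem to be carried over to the complex one.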

Associate professor Martin Rahm – who led the study – believes the result is an important step forward that can be used to improve future quantum-chemical calculations.

“We see good possibilities for further development of the method to allow calculations of larger and more complex molecules, when the next generation of quantum computers are ready,” Rahm said.

Research is happening around the world to fix the problems limiting the development of more advanced quantum computers.

Earlier this month, Tyndall’s Prof Peter O’Brien spoke about his group’s work in addressing a key challenge in quantum technology and how quantum communications will make eavesdropping ‘impossible’.




12 Outstanding Tech Resources To Improve Your Skills

If you want to improve your tech skills and don’t know where to start, this list introduces you to some of the resources out there.

If you’re familiar with our advice pieces, you’ll know that we regularly mention various resources you can use to upskill in tech.

We’ve steered readers towards courses from the likes of Udemy, Udacity and Coursera for learning tech concepts from machine learning to data literacy skills. And we’ve pointed out Python meet-ups run by Python Ireland among others.

But what if you’re not sure what these platforms are? Or you aren’t sure which one is the best one for you and your learning style? Maybe you like the idea of Python Ireland and you want to find other similar groups.

Here is an introduction to some of the best resources out there to hone your tech skills.

Coursera

Founded by two Stanford University computer scientists, Coursera is a global online learning platform for techies of all stripes.

It has partnerships with major companies like IBM and Google, as well as with universities such as Stanford and Imperial College London.

If you need a bit of guidance, scroll to the bottom section of the Coursera homepage and you’ll find articles that provide advice on how you can achieve a career in areas such as data analytics using the site.

In terms of courses, it provides everything from short certificates to longer postgraduate degree programmes.

Codecademy

This one is for anyone who wants to brush up on their coding skills; the clue is in the name. Codecademy offers free short courses in a variety of languages such as Python, C++, C, C#, Bash, Go, HTML, R, SQL and Ruby.

Codecademy is particularly useful for people who like interactive learning, as it has links to cheatsheets, projects, videos and coding challenges under Resources at the bottom of its homepage.

It has a pretty active online community, too.

edX

This Coursera rival – its founders are MIT and Harvard scientists – carries thousands of courses. Like Coursera, many are university-level, with edX making use of its partnerships with the likes of Boston University, University of Cambridge and Google.

Scroll to the bottom of the homepage and you’ll find boot camp courses in topics such as fintech and cybersecurity, as well as longer courses.

DataCamp

Like Codecademy, DataCamp is quite hands-on and has a lot of short, free courses. It’s best for people who are interested in data science and related technologies.

You can select a specific skill you want to brush up on (like data literacy, NLP, machine learning) or you can explore different career paths such as data scientist, data analyst and statistician.

If you just want to get to grips with a particular tech tool (ChatGPT, Tableau) you can do that too.

Irish meet-up groups

Going along to events run by Irish tech community groups can be a fun way to keep on top of new tech trends and meet like-minded people.

You can find lots of different events on Meetup no matter what you’re interested in. Dublin Linux Community meets monthly, as does Python Ireland and Kubernetes Dublin.

If you want something more casual, there is a coffee chat for indie hackers in Dublin in early June. And it isn’t just in the capital: there are online events and conferences, as well as things going on in Cork, Galway and Belfast.

Khan Academy

Khan Academy is another one to consider if you want to do an online tech course, even though it’s not as well known as some of the other names on this list.

Its short video lessons are good for beginners and it provides lessons and learning paths for children, too.

It is a non-profit organisation and it aims to educate people all over the world for free.

LinkedIn Learning

The educational offshoot of LinkedIn has business and tech courses galore for anyone who wants to perfect certain skills.

If you already have LinkedIn, LinkedIn Learning is a good bet as you can add your certificates of completion to your profile.

It isn’t free, but it does offer a one-month free trial.

Pluralsight

Software educational platform Pluralsight provides learning plans for teams as well as individuals. It’s quite skills focused, perhaps more so than some of the other resources that include non-tech courses on their sites.

You can pick up new skills like cloud tech and programming, and test your progress using specially designed exercises.

Skillshare

Best for creative techies, Skillshare carries courses in subjects such as graphic design and photography – but many of these areas are arguably tech-focused.

If you’re interested in things like UX and UI design or how tech tools can be used for creative purposes, you may find a short course that takes your fancy.

It’s got a lot of creatives on its books who are willing to, yes, share their skills.

Digital Skillnet

An Irish resource for all things technological, Digital Skillnet is a great site to keep in mind for future educational and upskilling opportunities.

If you prefer the familiarity of an Irish-run organisation, it has plenty of information about the types of careers you can break into.

Whether you’re an employer looking to find resources and courses for employees, or an individual looking to reskill, upskill or find a tech job, Digital Skillnet should definitely be one of your first ports of call.

Udacity

Udacity is pretty good for anyone who wants to try out a tech course as it has a lot of short and beginner courses as well as longer ones.

It also has an AI chatbot running in beta which offers to assist you when you visit its website.

You can pick from courses on topics such as programming and development, AI, data science, business intelligence and cloud computing.

Scroll to the bottom of the homepage for in-depth career-related resources.

Udemy

One for bargain hunters, Udemy constantly runs sales on its courses. It has hundreds of thousands of courses, too, so you won’t have difficulty finding something.

It’s good for beginners as many of the courses are short and delivered through video. What’s cool about Udemy is there is so much on the site that you can quite easily find courses on a certain topic from beginner right through to specialist level.




How News Helicopters Ushered In A Fresh Television Genre In Los Angeles

By Darren Wilson


Fifteen minutes of fame was not enough for Johnny Anchondo. Local television devoted some 100 minutes of live coverage to this repeat offender, following one of the wildest chases Los Angeles has seen in recent years. In that time, the 33-year-old criminal ran a stop sign and caused an immense mobilization of the police as he stole two pickup trucks, rammed into dozens of vehicles at high speed and escaped from at least 15 patrol cars that were hot on his trail for some 12 miles. All of it was recorded by the all-seeing eyes in the sky: news helicopters.

“Chases are the best. They are dynamic, they move fast. Things can change in an instant. Sometimes they seem endless from up there,” says Stu Mundel, one of the journalists who have been following events on the city streets from a helicopter for decades. “And I say this from the bottom of my heart, it’s genuine, but I always wish things would end well,” he adds.




In Los Angeles, chases are now a television genre in their own right. Journalists like Mundel fly for hours over a gigantic urban sprawl of 88 cities with 11 million people. From way up high, they report on traffic, crashes, shootings and fires in the metropolitan area. But few events arouse the audience’s interest as much as the chases through the city’s vast thoroughfares. The police chase starring Anchondo attests to that fact; the video has over 28 million views on YouTube.

The genre was born in this city. The idea came to John Silva, an engineer for a local television station, while he was driving his car on a freeway near Hollywood. “How can we beat the competition?” he wondered. The answer came to him behind the wheel. “If we could build a mobile news unit in a helicopter, we could beat them in arriving to the scene, avoiding traffic and getting all the stories before the competition,” Silva told the Television Academy in a 2002 interview.

In July 1958, a Bell 47G-2 helicopter made the first test trip for the KTLA network, becoming the first of its kind anywhere in the world. By September of that year, Silva’s creation, known as the Telecopter, already had a special segment on the channel’s news program. Before long, every major television network had one. Silva died in 2012, but his invention transformed television forever.

The chase genre’s crowning moment came in June 1994, when the Los Angeles police pursuit of a white Ford Bronco was broadcast live on television. In the back of the vehicle was O.J. Simpson, the former football star, whom the authorities had named the prime suspect in the murder of his ex-wife and her friend. Bob Tur (who has since transitioned and is now known as Zoey Tur), the pilot of a CBS helicopter, located the Bronco on the 405 freeway, followed by dozens of patrol cars. Within minutes, there were so many helicopters following the convoy that Tur found the scene worthy of Apocalypse Now. The audience was so large that TV stations interrupted the broadcast of Game 5 of the NBA Finals to follow the chase, which lasted two hours.


Motorists wave to ex-football star O.J. Simpson as he flees from the police in the back of a white Ford Bronco driven by Al Cowlings in Los Angeles, California, in June 1994. Jean-Marc Giboux (Getty Images)

“It’s a very interesting thing. It may sound morbid, but it’s not. People follow [police chases] because they are like a movie, we want to know how it will end and how the story unfolds: will good triumph over evil? Or will this person manage to escape? We journalists are objective, but the adrenaline and excitement is genuine,” says Mundel. In his years of experience, he has seen how technology has evolved. In the 1990s, people used a paper map as a guide. Today, viewers can see a map superimposed on the images Mundel captures with his camera.

Four out of 10 chases are initiated after a vehicle is stolen. The second most common trigger is hit-and-run driving by motorists who are drunk or under the influence of drugs. According to the Los Angeles Police Department, most fugitives are hiding a more serious crime: homicide, rape or violent robbery. In 1998, when more than 500 chases were recorded, only four of the 350-plus drivers arrested after a chase were let off with just a traffic ticket.

A growing phenomenon

In 2022, 971 chases were recorded, a slight decrease from the 990 logged in 2021; in 2019 there were fewer still, with 651 chases and 260 crashes. On average, chases last about 5.34 minutes and cover about five miles, although the vast majority (72%) end within five minutes and travel no more than two miles. In 2022, 35% of documented chases ended in crashes that caused injuries or fatalities.

A few decades ago, authorities tried to reassure Angelenos by claiming that a person had a one in four million chance of accidentally being killed in a police chase of a criminal. “There’s a better chance of being struck by lightning,” the police department estimated. But things have changed. An official report presented in April indicates that, over the past five years, 25% of chases have left people dead or injured. That almost always includes the suspect, but the number of innocent people who have been hurt has also increased.




Although there is plenty of material on the street, uncertain times for local journalism have limited coverage. Univision and Telemundo have dispensed with their helicopters in Los Angeles. Fox and CBS have joined forces and are using one aircraft instead of two. For the time being, KTLA, which invented the genre, remains committed to having a helicopter in the air.

The days may be numbered for these televised events. Some metro police departments have asked their officers to stop chasing criminals at high speed for the safety of the public. Instead, they have turned to high-definition cameras and drones to track suspects, as has happened in cities like Dallas, Philadelphia and Phoenix.

The Los Angeles police have said that they are studying the implementation of the StarChase system in some of their vehicles. StarChase uses a launcher that fires a GPS tag onto a fleeing vehicle, allowing the authorities to track its position in real time. Another measure under consideration is an industrial-strength nylon net that traps the rear axle of the fleeing car. All of this could still yield dramatic footage for the eye in the sky.




