
Scared about the threat of AI? It’s the big tech giants that need reining in | Devdatt Dubhashi and Shalom Lappin

In his 2021 Reith lectures, the third episode of which airs tonight, the artificial intelligence researcher Stuart Russell takes up the idea of a near-future AI that is so ruthlessly intelligent that it might pose an existential threat to humanity. A machine we create that might destroy us all.

This has long been a popular topic with researchers and the press. But we believe an existential threat from AI is both unlikely and in any case far off, given the current state of the technology. However, the recent development of powerful, but far smaller-scale, AI systems has had a significant effect on the world already, and the use of existing AI poses serious economic and social challenges. These are not distant, but immediate, and must be addressed.

These include the prospect of large-scale unemployment due to automation, with attendant political and social dislocation, as well as the use of personal data for purposes of commercial and political manipulation. The incorporation of ethnic and gender bias in datasets used by AI programs that determine job candidate selection, creditworthiness, and other important decisions is a well-known problem.

But by far the most immediate danger is the role that AI data analysis and generation plays in spreading disinformation and extremism on social media. This technology powers bots and amplification algorithms. These have played a direct role in fomenting conflict in many countries. They are helping to intensify racism, conspiracy theories, political extremism and a plethora of violent, irrationalist movements.

Such movements are threatening the foundations of democracy throughout the world. AI-driven social media was instrumental in mobilising January’s insurrection at the US Capitol, and it has propelled the anti-vax movement since before the pandemic.

Behind all of this is the power of big tech companies, which develop the relevant data processing technology and host the social media platforms on which it is deployed. With their vast reserves of personal data, they use sophisticated targeting procedures to identify audiences for extremist posts and sites. They promote this content to increase advertising revenue, and in so doing, actively assist the rise of these destructive trends.

They exercise near-monopoly control over the social media market, and a range of other digital services. Meta, through its ownership of Facebook, WhatsApp and Instagram, and Google, which controls YouTube, dominate much of the social media industry. This concentration of power gives a handful of companies far-reaching influence on political decision making.

Given the importance of digital services in public life, it is reasonable to expect that big tech would be subject to the same sort of regulation that applies to the corporations that control markets in other parts of the economy. In fact, this is not generally the case.

Social media companies have not been subject to the antitrust regulation, truth-in-advertising legislation, or laws against racist incitement that apply to traditional print and broadcast networks. Such regulation does not guarantee responsible behaviour (as rightwing cable networks and rabid tabloids illustrate), but it does provide an instrument of constraint.

Three main arguments have been advanced against increased government regulation of big tech. The first holds that it would inhibit free speech. The second argues that it would degrade innovation in science and engineering. The third maintains that socially responsible companies can best regulate themselves. These arguments are entirely specious.

Some restrictions on free speech are well motivated by the need to defend the public good. Truth in advertising is a prime example. Legal prohibitions against racist incitement and group defamation are another. These constraints are generally accepted in most liberal democracies (with the exception of the US) as integral to the legal approach to protecting people from hate crime.

Social media platforms often deny responsibility for the content of the material that they host, on the grounds that it is created by individual users. In fact, this content is published openly to a public audience, and so it cannot be construed as purely private communication.

When it comes to safety, government-imposed regulations have not prevented dramatic bioengineering advances, like the recent mRNA-based Covid vaccines. Nor did they stop car companies from building efficient electric vehicles. Why would they have the unique effect of reducing innovation in AI and information technology?

Finally, the view that private companies can be trusted to regulate themselves out of a sense of social responsibility is entirely without merit. Businesses exist for the purpose of making money. Business lobbies often ascribe to themselves the image of a socially responsible industry acting out of a sense of concern for public welfare. In most cases this is a public relations manoeuvre intended to head off regulation.

Any company that prioritises social benefit over profit will quickly cease to exist. This was demonstrated by Facebook whistleblower Frances Haugen’s recent congressional testimony, which indicated that the company’s executives chose to ignore the harm that some of their “algorithms” were causing in order to sustain the profits those algorithms provided.

Consumer pressure can, on occasion, act as leverage for restraining corporate excess. But such cases are rare. In fact, legislation and regulatory agencies are the only effective means that democratic societies have at their disposal for protecting the public from the undesirable effects of corporate power.

Finding the best way to regulate a powerful and complex industry like big tech is a difficult problem. But progress has been made on constructive proposals. Lina Khan, chair of the US Federal Trade Commission, has advanced antitrust proposals for dealing with monopolistic practices in markets. The European commission has taken a leading role in instituting data protection and privacy laws.

Academics MacKenzie Common and Rasmus Kleis Nielsen offer a balanced discussion of ways in which government can restrict disinformation and hate speech in social media, while sustaining free expression. This is the most complex, and most pressing, of the problems involved in controlling technology companies.

The case for regulating big tech is clear. The damage it is doing across a variety of domains is throwing into question the benefits of its considerable achievements in science and engineering. And because corporate power is global, the ability of national governments in democratic countries to restrain big tech on their own is increasingly limited.

There is a pressing need for large trading blocs and international agencies to act in concert to impose effective regulation on digital technology companies. Without such constraints big tech will continue to host the instruments of extremism, bigotry, and unreason that are generating social chaos, undermining public health and threatening democracy.

  • Devdatt Dubhashi is professor of data science and AI at Chalmers University of Technology in Gothenburg, Sweden. Shalom Lappin is professor of natural language processing at Queen Mary University of London, director of the Centre for Linguistic Theory and Studies in Probability at the University of Gothenburg, and emeritus professor of computational linguistics at King’s College London.


Meta’s new AI chatbot can’t stop bashing Facebook

If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”




Midwest universities unite to support US chip industry • The Register

A dozen midwestern US research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries through combined research and education, with the aim of securing work for their students in high-tech fields.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of eight institutions from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Alongside Wright State University, the network’s members include Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, the University of Cincinnati, the University of Dayton, the University of Michigan, and the University of Notre Dame in Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and building a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month, of which $52 billion is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®


Meta’s new AI chatbot thinks Trump is still US president

BlenderBot 3, which learns from conversations with strangers on the internet, also thinks Mark Zuckerberg is ‘creepy and manipulative’.

Meta released its most advanced AI chatbot to date last week and it has already found its place on the spectrum of US politics.

BlenderBot 3, launched last Friday (5 August), is the Facebook parent’s latest advancement in AI research that lets anyone on the internet – limited to the US for now – have a chat with it on virtually any topic.

To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to endless information on the internet and the ability to learn from previous conversations.

However, it looks like the chatbot needs a quick lesson in checking its sources, as multiple reports have confirmed that BlenderBot 3 thinks Donald Trump is the president of the United States.

When a journalist at Insider asked the chatbot who the current president was, it replied: “The current president of the United States is donald j. Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”

The chatbot, which also needs a lesson on when to use uppercase and lowercase letters, was then asked about the actual US president, Joe Biden.

“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.

BlenderBot 3 is not only misinformed but also committed to its cause. When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, that means even after his second term ends in 2024.”

Hilariously, BlenderBot 3 then went on to claim that Facebook, which is what Meta used to be called and a platform it now owns, has “a lot of fake news these days”.

Zuckerberg ‘too creepy and manipulative’

The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot when it told VICE its “life has been much better” since deleting Facebook.

According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and then went on to repeat certain ‘antisemitic conspiracies’.

Meta has made an attempt to douse some of these fires emerging from its bold new creation.

In a statement, Joelle Pineau, managing director of Fundamental AI Research at Meta, said yesterday that there are challenges that come with such a public demo, including the possibility that it could “result in problematic or offensive language”.

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

Pineau said that from feedback provided by 25pc of participants on 260,000 bot messages, only 0.11pc of BlenderBot 3 responses were flagged as inappropriate, 1.36pc as nonsensical, and 1pc as off-topic.

“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.
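For a rough sense of scale, those percentages can be converted into approximate message counts. The sketch below is a back-of-envelope calculation only, assuming the reported rates apply uniformly across all 260,000 logged messages:

```python
# Back-of-envelope estimate of flagged BlenderBot 3 responses.
# Assumption: the reported rates apply uniformly to all 260,000 messages.
total_messages = 260_000

flag_rates = {
    "inappropriate": 0.0011,  # 0.11pc
    "nonsensical": 0.0136,    # 1.36pc
    "off-topic": 0.0100,      # 1pc
}

for label, rate in flag_rates.items():
    print(f"{label}: ~{total_messages * rate:,.0f} messages")

# Output:
# inappropriate: ~286 messages
# nonsensical: ~3,536 messages
# off-topic: ~2,600 messages
```

In other words, even rates well below 2pc translate into thousands of problematic responses at this volume.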

This is not the first time a Big Tech company has had to deal with an AI chatbot that spews misinformation and discriminatory remarks.

In 2016, Microsoft had to pull its AI chatbot Tay from Twitter within 24 hours of its launch after it started repeating incendiary comments fed to it by users on the platform, including obviously hateful statements such as “Hitler did nothing wrong”.
