Technology

UK publishes roadmap for ‘AI assurance industry’ • The Register

Voice Of EU


The UK government’s Centre for Data Ethics and Innovation (CDEI) has published a “roadmap” designed to create an AI assurance industry to support the introduction of automated analysis, decision making, and processes.

The move is one of several government initiatives planned to help shape local development and use of AI – an industry that attracted £2.5bn investment in 2019 – but it raises as many questions as it answers.

Part of the Department for Digital, Culture, Media & Sport (DCMS), the CDEI said by “verifying that AI systems are effective, trustworthy and compliant, AI assurance services will drive a step-change in adoption, enabling the UK to realise the full potential of AI and develop a competitive edge.”

Launching the move, DCMS minister Chris Philp said: “The roadmap sets out the steps needed to grow a mature, world-class AI assurance industry. AI assurance services will become a key part of the toolkit available to ensure effective, pro-innovation governance of AI.”

How that governance will take shape is, as yet, a bit fuzzy while the industry waits on proposals for AI legislation in the forthcoming White Paper on governance and regulation.

Whatever laws the assurance industry is eventually expected to help organisations avoid breaching, the idea is that third-party AI assurance providers will offer reliable information about the trustworthiness of AI systems, according to the launch document.

The “roadmap” – awful word, we know – calls for all players in the AI supply chain to “have clearer understanding of AI risks and demand assurance based on their corresponding accountabilities for these risks.”

“AI assurance will be critical to realising the UK government’s ambition to establish the most trusted and pro-innovation system for AI governance in the world, set out in the National AI Strategy,” the document says.

Elsewhere in Whitehall, the Central Digital and Data Office has developed an algorithmic transparency standard for government departments and public-sector bodies. Working with the CDEI, the standard would be piloted by several public-sector organisations and further developed based on feedback, it said.

IT analyst group Forrester has released its own proposals to help businesses navigate something it calls “AI fairness”, a broad concept designed to help organisations avoid the dire regulatory, reputational, and revenue impacts of getting AI wrong. “As fairness in AI is a relatively new concept, regulations explicitly dictating a specific fairness metric are lacking and best practices are just emerging,” it said.

Martha Bennett, Forrester veep and principal analyst, said the problem in the UK’s case was that efforts to develop an AI strategy and assurance industry were out of step with reforms to data protection laws, which would govern the use of personal data in developing machine learning models and define individuals’ rights in their relationship with AI.

Talking about the reforms in August, the UK’s then Secretary of State for Digital, Oliver Dowden, promised “a bold new data regime” following the kingdom’s departure from the EU, one that “unleashes data’s power across the economy and society for the benefit of British citizens and British businesses,” he trilled.

When launching the consultation on the reforms, the government said it was considering removing individuals’ right to challenge decisions made about them by AIs, a move that attracted criticism.

Bennett said: “It’s almost like they haven’t joined the dots somehow. They’re talking in this proposed UK Data Protection revision about amending the right not to be subject to a decision based solely on automated processing and I’ve even heard people say that loosening up on those particular requirements could give the UK a competitive advantage.

“But that to me is a dangerous path to take and to me is a real crunch point because it goes in the opposite direction of where everyone else is going in what we call the explainability of AI models. You should always be in a position to defend a decision. If an individual feels that the decision has been unfairly taken, they should be able to get an explanation and it is possible to make AI systems explainable because you know what the inputs are.”

The UK’s National Data Guardian (NDG), whose remit covers the use of health data, also warned against watering down individuals’ rights to challenge decisions made about them by artificial intelligence.

“The NDG has significant concerns about proposed reductions to existing protections and the ability of professionals, patients, and the public to be actively informed about decisions that can have significant impacts for them,” said Dr Nicola Byrne.

Other leading figures in AI ethics argue for a broader view still. Timnit Gebru, co-lead of Google’s Ethical AI team before her controversial departure, said effective AI regulation should start with labour protections and antitrust measures to guard against overly powerful monopolies.

“I can tell that some people find that answer disappointing – perhaps because they expect me to mention regulations specific to the technology itself. While those are important, the number one thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies’ practices,” she wrote in The Guardian.

Gebru – now founder and executive director of the Distributed AI Research Institute – also voiced concerns that big tech companies leading the AI charge could also exert undue influence on government policy.

“I noticed that the same big tech leaders who push out people like me are also the leaders who control big philanthropy and the government’s agenda for the future of AI research. If I speak up and antagonize a potential funder, it is not only my job on the line, but the jobs of others at the institute,” she pointed out.

It is notable in this context that the UK government’s AI strategy was launched with a quote from DeepMind, the UK-based AI outfit owned by Google, the company that ousted Gebru.

Whatever the government means by creating a “roadmap for a mature, world-class AI assurance industry,” questions remain about what exactly organisations and businesses are to assure against. And that’s not very reassuring. ®

Edwards Lifesciences is hiring at its ‘key’ Shannon and Limerick facilities


The medtech company is hiring for a variety of roles at both its Limerick and Shannon sites, the latter of which is being transformed into a specialised manufacturing facility.

Medical devices giant Edwards Lifesciences began renovations to convert its existing Shannon facility into a specialised manufacturing centre at the end of July.

The expansion will allow the company to produce components that are an integral part of its transcatheter heart valves. The conversion is part of Edwards Lifesciences’ expansion plan that will see it hire for hundreds of new roles in the coming years.

“The expanded capability at our Shannon facility demonstrates that our operations in Ireland are a key enabler for Edwards to continue helping patients across the globe,” said Andrew Walls, general manager for the company’s manufacturing facilities in Ireland.

According to Walls, hiring is currently underway at the company’s Shannon and Limerick facilities for a variety of functions such as assembly and inspection roles, manufacturing and quality engineering, supply chain, warehouse operations and project management.

Why Ireland?

Headquartered in Irvine, California, Edwards Lifesciences established its operations in Shannon in 2018 and announced 600 new jobs for the mid-west region. This number was then doubled a year later when it revealed increased investment in Limerick.

When the Limerick plant was officially opened in October 2021, the medtech company added another 250 roles onto the previously announced 600, promising 850 new jobs by 2025.

“As the company grows and serves even more patients around the world, Edwards conducted a thorough review of its global valve manufacturing network to ensure we have the right facilities and talent to address our future needs,” Walls told SiliconRepublic.com.

“We consider multiple factors when determining where we decide to manufacture – for example, a location that will allow us to produce close to where products are utilised, a location that offers advantages for our supply chain, excellent local talent pool for an engaged workforce, an interest in education and good academic infrastructure, and other characteristics that will be good for business and, ultimately, good for patients.

“Both our Shannon and Limerick sites are key enablers for Edwards Lifesciences to continue helping patients across the globe.”



Meta’s new AI chatbot can’t stop bashing Facebook | Meta


If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”




Midwest universities unite to support US chip industry • The Register


A dozen US midwestern research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries with combined research and education to ensure work for their students in high-tech industries.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of a dozen institutions, made up of eight from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Apart from Wright State University, the institutions involved in the network include: Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, the University of Cincinnati, the University of Dayton, the University of Michigan, and the University of Notre Dame, Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and developing a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month, of which $52 billion of the total spend is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®
