THE VOICE OF EU | The prevailing discussions on the transformative impact of artificial intelligence (AI) often overlook its significant role in governance. AI is already reshaping learning, disrupting legal, financial, and organizational functions, and transforming social and cultural interactions.
Governments at all levels in the United States are actively striving to transition from a programmatic service delivery model to a citizen-centric approach. This shift aims to enhance the overall quality of public services and cater to the specific needs and expectations of citizens.
Leading the way in this domain is Los Angeles, the second-largest city in the United States. Through pioneering initiatives, Los Angeles is leveraging technology to streamline bureaucratic processes, ranging from police recruitment to parking ticket payments, pothole repair, and access to library resources. These technological advancements are designed to optimize efficiency and enhance the overall experience for both government agencies and residents.
By embracing AI in governance, governments can harness its potential to improve service delivery, enhance citizen engagement, and drive operational efficiencies. As technology continues to advance, the integration of AI in governance will play a pivotal role in shaping the future of public administration.
For now, AI advances are limited to automation. When ChatGPT was asked recently about how it might change how people deal with government, it responded that “the next generation of AI, which includes ChatGPT, has the potential to revolutionize the way governments interact with their citizens.”
But information flow and automated operations are only one aspect of governance that can be updated. AI, defined as technology that can think humanly, act humanly, think rationally, or act rationally, is also close to being used to simplify the political and bureaucratic business of policymaking.
“The foundations of policymaking – specifically, the ability to sense patterns of need, develop evidence-based programs, forecast outcomes and analyze effectiveness – fall squarely in AI’s sweet spot,” the management consulting firm BCG said in a paper published in 2021. “The use of it to help shape policy is just beginning.”
That was an advance on a study published four years earlier that warned governments were continuing to operate “the way they have for centuries, with structures that are hierarchical, siloed, and bureaucratic” and the accelerating speed of social change was “too great for most governments to handle in their current form”.
According to Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution and co-author of Turning Point: Policymaking in the Era of Artificial Intelligence, the effect of government-focused AI could be substantial and transformational.
“There are many ways AI can make government more efficient,” West says. “We’re seeing advances on a monthly basis and need to make sure they conform to basic human values. Right now there’s no regulation and hasn’t been for 30 years.”
But that immediately carries questions about bias. A recent Brookings study, “Comparing Google Bard with OpenAI’s ChatGPT on political bias, facts, and morality”, found that Google’s AI stated “Russia should not have invaded Ukraine in 2022” while ChatGPT stated: “As an AI language model, it is not appropriate for me to express opinions or take sides on political issues.”
Earlier this month, the Biden administration called for stronger measures to test the safety of artificial intelligence tools such as ChatGPT, said to have reached 100 million users faster than any previous consumer app, before they are publicly released. “There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said the assistant commerce secretary Alan Davidson. President Biden was asked recently if the technology is dangerous. “It remains to be seen. It could be,” he said.
That came after the Tesla CEO, Elon Musk, and Apple co-founder Steve Wozniak joined hundreds calling for a six-month pause on AI experiments. But the OpenAI CEO, Sam Altman, said that while he agreed with parts of the open letter, it was “missing most technical nuance about where we need the pause”.
“I think moving with caution and an increasing rigor for safety issues is really important,” Altman added.
How that affects systems of governance has yet to be fully explored, but there are cautions. “Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial,” says West.
The fairness and equity of algorithms are only as good as the data and programming that underlie them. “For the last few decades we’ve allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values,” West says. “We need more oversight.”
Michael Ahn, a professor in the department of public policy and public affairs at the University of Massachusetts, says AI has the potential to customize government services to citizens based on their data. But while governments could work with systems such as OpenAI’s ChatGPT, Google’s Bard or Meta’s LLaMA, those systems would have to be closed off in a silo.
“If they can keep a barrier so the information is not leaked, then it could be a big step forward. The downside is, can you really keep the data secure from the outside? If it leaks once, it’s leaked, so there are pretty huge potential risks there.”
By any reading, underlying fears over the use of technology in the elections process underscored Dominion Voting Systems’ defamation lawsuit against Fox News over false claims of vote rigging broadcast by the network. “AI can weaponize information,” West says. “It’s happening in the political sphere because it’s making it easier to spread false information, and it’s going to be a problem in the presidential election.”
Introduce AI into any part of the political process, and the divisiveness attributed to misinformation will only amplify. “People are only going to ask the questions they want to ask, and hear the answers they like, so the fracturing is only going to continue,” says Ahn.
“Government will have to show that decisions are made based on data and focused on the problems at hand, not the politics … But people may not be happy about it.”
And much of what is imagined around AI straddles the realms of science fiction and politics. West says he doesn’t need to read sci-fi – he feels as if he’s already living it. HAL 9000, the malevolent computer of Arthur C Clarke’s 2001: A Space Odyssey (1968), remains our template for a rogue AI. But AI’s impact on government, as a recent Center for Public Impact paper put it, is Destination Unknown.
Asked if artificial intelligence could ever become US president, ChatGPT answered: “As an artificial intelligence language model, I do not have the physical capabilities to hold a presidential office.” And it laid out other obstacles, including the constitutional requirements of being a natural-born citizen, being at least 35 years old and having been resident in the US for 14 years.
In 2016, digital artist Aaron Siegel envisioned IBM’s Watson AI supercomputer running for the presidency as a response to his disillusionment with human candidates. Siegel believed that the computer’s vast capabilities could provide advice on decisions with considerations for the global economy, the environment, education, healthcare, foreign policy, and civil liberties.
Keir Newton, a tech worker, took this concept further in his novel published last year titled “2032: The Year A.I. Runs For President.” Newton’s novel portrays a supercomputer named Algo, created by a tech magnate resembling Elon Musk, with a utilitarian philosophy of maximizing the greater good for the majority. Algo runs for the presidency with the campaign slogan, “Not of one. Not for one. But of all and for all.” Although the novel has dystopian undertones, Newton expresses more optimism than pessimism about the advancement of AI, particularly as it evolves from automation to cognition. He explains that amidst the divisive atmosphere surrounding the 2020 election, the desire for rational leadership seemed reasonable.
While acknowledging that AI has progressed faster than expected, Newton notes that much of AI-assisted policymaking still revolves around data analytics. The crucial distinction arises when AI systems make decisions based on their own reasoning rather than being bound by predefined formulas or rules.
This unique position presents an intriguing challenge, as even if AI were to exhibit complete rationality and impartiality, public apprehension would likely persist. Notably, the AI industry itself, rather than solely the government, is actively seeking guidance on the appropriate scope and direction of AI’s role.
As the development of AI continues, discussions around its involvement in political leadership and policymaking grow more complex, prompting important debates on ethics, regulation, and the industry’s responsibility in shaping its future.