Power-hungry robots, space colonization, cyborgs: inside the bizarre world of ‘longtermism’

Most of us don’t think of power-hungry killer robots as an imminent threat to humanity, especially when poverty and the climate crisis are already ravaging the Earth.

This wasn’t the case for Sam Bankman-Fried and his followers, powerful actors who have embraced a school of thought within the effective altruism movement called “longtermism”.

In February, the Future Fund, a philanthropic organization endowed by the now-disgraced cryptocurrency entrepreneur, announced that it would be disbursing more than $100m – and possibly up to $1bn – this year on projects to “improve humanity’s long-term prospects”.

The slightly cryptic reference might have been a bit puzzling to those who think of philanthropy as funding homelessness charities and medical NGOs in the developing world. In fact, the Future Fund’s particular areas of interest include artificial intelligence, biological weapons and “space governance”, a mysterious term referring to settling humans in space as a potential “watershed moment in human history”.

Out-of-control artificial intelligence was another area of concern for Bankman-Fried – so much so that in September the Future Fund announced prizes of up to $1.5m to anyone who could make a persuasive estimate of the threat that unrestrained AI might pose to humanity.

SpaceX’s Elon Musk gives an update on the company’s Mars rocket Starship. Musk is a proponent of longtermism. Photograph: Callaghan O’Hare/Reuters

Artificial intelligence, the Future Fund said, is “the development most likely to dramatically alter the trajectory of humanity this century”. “With the help of advanced AI, we could make enormous progress toward ending global poverty, animal suffering, early death and debilitating disease.” But AI could also “acquire undesirable objectives and pursue power in unintended ways, causing humans to lose all or most of their influence over the future”.

Less than two months after the contest was announced, Bankman-Fried’s $32bn cryptocurrency empire had collapsed, much of the Future Fund’s senior leadership had resigned and its AI prizes may never be awarded.

Nor will most of the millions of dollars that Bankman-Fried had promised a constellation of charities and thinktanks affiliated with effective altruism, a once-obscure ethical movement that has become influential in Silicon Valley and the highest echelons of the international business and political worlds.


Longtermists argue that the welfare of future humans is as morally important as – or more important than – the lives of current ones, and that philanthropic resources should be allocated to predicting, and defending against, extinction-level threats to humanity.

But rather than giving out malaria nets or digging wells, longtermists prefer to allocate money to researching existential risk, or “x-risk”.

In his recent book What We Owe the Future, William MacAskill – a 35-year-old moral philosopher at Oxford who has become the public intellectual face of effective altruism – makes a case for longtermism with a thought experiment about a hiker who accidentally shatters a glass bottle on a trail. A conscientious person, he holds, would immediately clean up the glass to avoid injuring the next hiker – whether that person comes in a week or in a century.

Similarly, MacAskill argues that potential future humans, across the many generations the species may endure, far outnumber those alive today; if we truly believe that all humans are equal, protecting future humans is more important than protecting human lives today.

Some of longtermists’ funding interests, such as nuclear nonproliferation and vaccine development, are fairly uncontroversial. Others are more outlandish: investing in space colonization, preventing the rise of power-hungry AI, cheating death through “life-extension” technology. A bundle of ideas known as “transhumanism” seeks to upgrade humanity by creating digital versions of humans, “bioengineering” human-machine cyborgs and the like.

People like the futurist Ray Kurzweil and his adherents believe that biotechnology will soon “enable a union between humans and genuinely intelligent computers and AI systems”, Robin McKie explained in the Guardian in 2018. “The resulting human-machine mind will become free to roam a universe of its own creation, uploading itself at will onto a ‘suitably powerful computational substrate’,” and thereby creating a kind of immortality.


This feverish techno-utopianism distracts funders from pressing problems that already exist here on Earth, said Luke Kemp, a research associate at the University of Cambridge’s Centre for the Study of Existential Risk who describes himself as an “EA-adjacent” critic of effective altruism. Left on the table, he says, are critical and credible threats that are happening right now, such as the climate crisis, natural pandemics and economic inequality.

“The things they push tend to be things that Silicon Valley likes,” Kemp said. They’re the kinds of speculative, futurist ideas that tech billionaires find intellectually exciting. “And they almost always focus on technological fixes” to human problems “rather than political or social ones”.

There are other objections. For one thing, lavishly expensive, experimental bioengineering would be accessible, especially initially, to “only a tiny sliver of humanity”, Kemp said; it could bring about a future caste system in which inequality is not only economic, but biological.

This thinking is also dangerously undemocratic, he argued. “These big decisions about the future of humanity should be decided by humanity. Not by just a couple of white male philosophers at Oxford funded by billionaires. It is literally the most powerful, and least representative, strata of society imposing a particular vision of the future which suits them.”

Some adherents of longtermism are interested in ‘transhumanism’, the idea that technology can extend our longevity. Composite: Lynsey Irvine/Getty

Kemp added: “I don’t think EAs – or at least the EA leadership – care very much about democracy.” In its more dogmatic varieties, he said, longtermism is preoccupied with “rationality, hardcore utilitarianism, a pathological obsession with quantification and neoliberal economics”.

Organizations such as 80,000 Hours, a program for early-career professionals, tend to encourage would-be effective altruists into four main areas, Kemp said: AI research, research preparing for human-made pandemics, EA community-building and “global priorities research”, meaning the question of how funding should be allocated.

The first two areas, though worthy of study, are “highly speculative”, Kemp said, and the second two are “self-serving”, since they channel money and energy back into the movement.

This year, the Future Fund reports having recommended grants to worthy-seeming projects as various as research on “the feasibility of inactivating viruses via electromagnetic radiation” ($140,000); a project connecting children in India with online science, technology, engineering and mathematics education ($200,000); research on “disease-neutralizing therapeutic antibodies” ($1.55m); and research on childhood lead exposure ($400,000).

But much of the Future Fund’s largesse seems to have been invested in longtermism itself. It recommended $1.2m to the Global Priorities Institute; $3.9m to the Long Term Future Fund; $2.9m to create a “longtermist coworking office in London”; $3.9m to create a “longtermist coworking space in Berkeley”; $700,000 to the Legal Priorities Project, a “longtermist legal research and field-building organization”; $13.9m to the Centre for Effective Altruism; and $15m to Longview Philanthropy to execute “independent grantmaking on global priorities research, nuclear weapons policy, and other longtermist issues.”

Kemp argued that effective altruism and longtermism often seem to be working toward a kind of regulatory capture. “The long-term strategy is getting EAs and EA ideas into places like the Pentagon, the White House, the British government and the UN” to influence public policy, he said.

Sam Bankman-Fried at a Senate agriculture, nutrition and forestry committee hearing in Washington DC. Photograph: Bloomberg/Getty Images

There may be a silver lining in the timing of Bankman-Fried’s downfall. “In a way, it’s good that it happened now rather than later,” Kemp said. “He was planning on spending huge amounts of money on elections. At one stage, he said he was planning to spend up to a billion dollars, which would have made him the biggest donor in US political history. Can you imagine if that amount of money contributed to a Democratic victory – and then turned out to have been based on fraud? In an already fragile and polarized society like the US? That would have been horrendous.”


“The main tension to the movement, as I see it, is one that many movements deal with,” said Benjamin Soskis, a historian of philanthropy and a senior research associate at the Urban Institute. “A movement that was primarily fueled by regular people – and their passions, and interests, and different kinds of provenance – attracted a number of very wealthy funders,” and came to be driven by “the funding decisions, and sometimes just the public identities, of people like SBF and Elon Musk and a few others”. (Soskis noted that he has received funding from Open Philanthropy, an EA-affiliated foundation.)

Effective altruism put Bankman-Fried, who lived in a luxury compound in the Bahamas, “on a pedestal, as this Corolla-driving, beanbag-sleeping, earning-to-give monk, which was clearly false”, Kemp said.

Soskis thinks that effective altruism has a natural appeal to people in tech and finance – who tend to have an analytical and calculating way of thinking about problems – and EA, like all movements, spreads through social and work networks.

Effective altruism is also attractive to wealthy people, Soskis believes, because it offers “a way to understand the marginal value of additional dollars”, particularly when talking of “vast sums that can defy comprehension”. The movement’s focus on numbers (“shut up and multiply”) helps hyper-wealthy people understand more concretely what $500m can do philanthropically versus, say, $500,000 or $50,000.

One positive outcome, he thinks, is that EA-influenced donors publicly discuss their philanthropic commitments and encourage others to make them. Historically, Americans have tended to regard philanthropy as a private matter.

But there’s something “which I think you can’t escape”, Soskis said. Effective altruism “isn’t premised on a strong critique of the way that money has been made. And elements of it were construed as understanding capitalism more generally as a positive force, and through a kind of consequentialist calculus. To some extent, it’s a safer landing spot for folks who want to sequester their philanthropic decisions from a broader political debate about the legitimacy of certain industries or ways of making money.”

Kemp said that it is rare to hear EAs, especially longtermists, discuss issues such as democracy and inequality. “Honestly, I think that’s because it is something the donors don’t want us talking about.” Cracking down on tax avoidance, for example, would lead to major donors “losing both power and wealth”.

The downfall of Bankman-Fried’s crypto empire, which has jeopardized the Future Fund and countless other longtermist organizations, may be revealing. Longtermists believe that future existential risks to humanity can be accurately calculated – yet, as the economist Tyler Cowen recently pointed out, they couldn’t even predict the existential threat to their own flagship philanthropic organization.

There must be “soul-searching”, Soskis said. “Longtermism has a stain on it and I’m not sure when or if it will be fully removed.”

“A billionaire is a billionaire,” the journalist Anand Giridharadas wrote recently on Twitter. His 2018 book Winners Take All sharply criticized the idea that private philanthropy will solve human problems. “Stop believing in good billionaires. Start organizing toward a good society.”




Graphcore launches C600 card for China amid financial woes


British AI chip designer Graphcore has packaged its two-year-old, second-generation Intelligence Processing Unit into a new PCIe card for China and Singapore amid recently reported financial woes.

The Bristol, UK-based startup announced on Tuesday that its Colossus Mk2 GC200 IPU will be available in the new C600 PCIe card, making the processor compatible with servers beyond the company’s pre-integrated M2000 IPU system.

The company said pre-orders are now open for the C600 card in China and Singapore, and it will be available through approved hardware partners in Graphcore-qualified systems. It didn’t say whether the card will expand to other markets.

The C600 card was designed “in response to customer demand in markets where datacenter configurations, including rack size and power delivery, vary widely,” said Chen Jin, Graphcore’s vice president and head of China engineering, in a blog post.

“This highly versatile form-factor enables Graphcore customers to tailor their system setup, including host server / chassis, to their exact requirements,” Jin added.

It’s not clear if Graphcore had to tune the C600 card to comply with the recent US export restrictions on advanced chips for China. While Graphcore is a British company, the export bans extend to semiconductor companies far beyond American borders because the restrictions cover the US manufacturing and design tools used to make most of the world’s advanced chips.

The US restrictions have prompted Graphcore’s much larger rivals to switch gears, with AMD halting sales of its MI250 GPU to China and Nvidia slowing down its A100 GPU to continue sales in the country. Biren Technology and Alibaba in China have also reportedly had to step down processing speeds for new GPUs.

Tech specs suggest it’s good enough

Graphcore’s C600 card is designed for AI inference workloads using low-precision number formats, and is capable of hitting up to 280 teraflops of 16-bit floating point (FP16) compute and as much as 560 teraflops of 8-bit floating point (FP8) math.

The FP8 support is new for Graphcore, as it is for the rest of the industry. Intel, Arm, and Nvidia published the specification for FP8 in September. The goal of FP8 is to create a lower precision format for neural network training and inference that optimizes memory usage and improves efficiency while providing a similar level of accuracy to 16-bit precisions.
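
To make that concrete: an E4M3-style FP8 number keeps one sign bit, four exponent bits and three mantissa bits, so each value carries only about four significant binary digits. The short Python sketch below illustrates the general idea under those assumptions – it is not Graphcore’s or Nvidia’s actual implementation, the helper name fake_quantize_e4m3 is made up, and subnormals and NaN encodings are ignored – by simulating the rounding error the format introduces.

    import numpy as np

    # Hypothetical helper simulating an E4M3-style FP8 format
    # (1 sign bit, 4 exponent bits, 3 mantissa bits).
    E4M3_MAX = 448.0       # largest finite E4M3 value
    MANTISSA_BITS = 3

    def fake_quantize_e4m3(x: np.ndarray) -> np.ndarray:
        x = np.clip(x, -E4M3_MAX, E4M3_MAX)   # saturate to the FP8 range
        out = np.zeros_like(x)
        nonzero = np.abs(x) > 0
        # Round each value's mantissa to 3 bits at its own binary exponent.
        exponent = np.floor(np.log2(np.abs(x[nonzero])))
        step = np.exp2(exponent - MANTISSA_BITS)
        out[nonzero] = np.round(x[nonzero] / step) * step
        return out

    weights = np.array([0.1234, -1.7, 3.14159, 500.0], dtype=np.float32)
    print(fake_quantize_e4m3(weights))   # ≈ [0.125, -1.75, 3.25, 448.]

Halving the bits per value relative to FP16 is also why the C600’s quoted FP8 throughput, 560 teraflops, is exactly double its FP16 figure.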

The C600 is a PCIe Gen 4, dual-slot card with a thermal design power of 185 watts. Up to eight of the cards can fit into a single server chassis, and they communicate directly using Graphcore’s IPU-Link high-bandwidth interconnect cables. The C600’s IPU-Link bandwidth is 256GB/s.

The Mk2 IPU inside the C600 card has the same 1,472 IPU cores and 900MB of in-processor memory as when the second-generation IPU was first announced in 2020.

The C600 release comes not long after multiple reports have painted a gloomy picture for Graphcore. In September, the startup said it was planning job cuts due to an “extremely challenging” macroeconomic situation. The next month, The Times reported that investors had slashed Graphcore’s valuation by $1 billion in the face of financial woes, including a terminated deal with Microsoft. ®


200 Irish businesses are getting the chance to test-drive electric vehicles


The Government is looking to boost the electrification of commercial fleets as part of plans to have nearly 1m EVs on Irish roads by 2030.

As part of plans to drive down emissions in Ireland, a new initiative will let businesses test out electric vehicles for free.

Fully electric cars and vans will be loaned to 200 Irish businesses free of charge for three months under the Government’s Commercial Fleet Trial.

The aim is to encourage businesses to make the switch to an electric vehicle and contribute to the targets of the Climate Action Plan.

Ireland is aiming to reach a 51pc reduction in emissions by 2030, setting the country on a path to net-zero emissions no later than 2050. One element of this plan is to have 945,000 electric vehicles on Irish roads by the end of this decade.

Minister for Transport Eamon Ryan, TD, said an “important component” in achieving this target is the electrification of commercial fleets.

“Businesses up and down the country are already telling us that they are keen to make the switch to more sustainable practices, but they also need to know that the switches they want to make are going to be good for their bottom line,” he added.

“The findings from this trial will give us real-world feedback and provide us with the evidence to encourage even more businesses to switch to electric.”

The trial will involve 50 fully electric vehicles – 30 passenger cars and 20 vans – while giving businesses the option to install an EV charger.

By the end of this month, 14 businesses across Dublin, Sligo, Limerick, Louth, Wexford, Cork, Waterford and Galway will have received cars to test out.

The trial will be coordinated by the Sustainable Energy Authority of Ireland and Zero Emissions Vehicles Ireland – a new office of the Department of Transport that is tasked with supporting the switch to electric vehicles.

While the number of electric cars in Ireland is on the rise, there have been concerns about meeting the ambitious 2030 EV goal.

Ryan said this week that the Government is “on track” to deliver the 945,000 EVs target, and that it will launch a new €100m strategy next month to boost the number of charging stations installed around the country.

A study last year found that Ireland lags behind other European nations when it comes to EV charging infrastructure, which could hamper the roll-out of these vehicles.

However, the Government has been making moves to change this. It recently announced a new suite of grants and initiatives to boost Ireland’s transition to electric vehicles, and a €15m all-island investment to set up 90 rapid EV charging points across Ireland.


Changes to online safety bill tread line between safety and appearing ‘woke’


The online safety bill is returning to parliament under the aegis of its fourth prime minister and seventh secretary of state since it was first proposed as an online harms white paper under Theresa May.

Each of those has been determined to leave their fingerprints on the legislation, which has swollen to encompass everything from age verification on pornography to criminalisation of posting falsehoods online, and Rishi Sunak and the digital and culture secretary, Michelle Donelan, are no different.

Some of the changes to the bill, which was unceremoniously pulled from the agenda in early summer as the government cleared parliamentary time to launch its own confidence motion backing Boris Johnson, are simple additions. After the law commission recommended updating legislation covering nonconsensual intimate images, the Department for Digital, Culture, Media and Sport folded the changes into the bumper bill, announcing plans to criminalise “downblousing” and the creation of pornographic “deepfakes” without the subject’s consent.

But others reflect the contentious nature of the legislation, which faces a balancing act between the government’s desire to make the UK “the safest place to be online”, and its fear of appearing overly censorious or, worse still, “woke”.

On Tuesday, Donelan triumphantly announced that the latest version of the online safety bill would be dropping efforts to regulate content deemed “legal but harmful”. Earlier drafts of the bill had hit upon a canny way to please both sides of the debate: rather than requiring social media companies to remove certain types of content outright, the bill simply requires them to declare a position on that material in their terms of service, and then enforce that position. Theoretically, a social media company could explicitly declare itself content with allowing harmful content on its platform, and receive no penalties for doing so.

But free speech groups, in and out of parliament, worried that the requirement would have a chilling effect, and social networks backed them up: few deliberately want to host harmful content on their platforms, but faced with a legal requirement to take action on it or face penalties, they could end up being forced to over-correct. For topics such as suicide or self-harm, aggressive over-moderation can cause real-world harm just as lax policies can.

The push against those regulations reached its height during the Tory leadership contest, when the online safety bill was caricatured by its opponents, such as trade secretary Kemi Badenoch, as legislating for hurt feelings. And so upon its reintroduction, the “legal but harmful” provisions were stripped out, at least for content aimed at adults. And then the government went further: in an effort to burnish its free speech credentials, it added in new legal requirements forcing not over-moderation but under-moderation.

“Companies will not be able to remove or restrict legal content, or suspend or ban a user, unless the circumstances for doing this are clearly set out in their terms of service or are against the law,” DCMS announced. The rules, described as a “consumer friendly ‘triple shield’”, could prevent companies from acting rapidly to ensure the health of their platform, and leave them facing a legal risk if they take down content that they, and other users, would rather see removed.

Some of the changes to the bill are deep and technical. But others seem to be simple headline-chasing. The government has dropped the offence of “harmful communications” from the bill, after it became a lightning-rod for criticism with Badenoch and others arguing that it was “legislating for hurt feelings”.

But in order to remove the harmful communications offence, the government has also cancelled plans to strike off the two offences it was due to replace: parts of the Malicious Communications Act and the Communications Act 2003, which are far broader than the ban on harmful communications was to be. The harmful communications offence required that a message cause “serious distress”; the Malicious Communications Act requires only “distress”, while the Communications Act 2003 is even softer, banning messages sent “for the purpose of causing annoyance, inconvenience or needless anxiety”. Those offences will now remain on the books indefinitely.

But becoming part of the psychodrama of the Conservative party is the only way legislative scrutiny can occur in this parliament. The rest of this monster bill, stretching over hundreds of pages and redefining the landscape of internet regulation for a generation, has barely been discussed in public at all. Proposals ranging from an attack on end-to-end encryption to the christening of a first-of-its-kind internet regulator in the shape of Ofcom are being treated as technocratic tweaks, but if they were given the time they deserved, it is likely the legislative process would outlast a fifth prime minister as well.
