
What’s artificial intelligence best at? Stealing human ideas


Hello and welcome to the debut issue of TechScape, the Guardian’s newsletter on all things tech, and sometimes things not-tech if they’re interesting enough. I can’t tell you how excited I am to have you here with me, and I hope between us we can build not just a newsletter, but a news community.


Copilot

Sometimes there’s a story that just sums up all the hopes and fears of its entire field. Here’s one.

GitHub is a platform that lets developers collaborate on coding with colleagues, friends and strangers around the world, and host the results. Owned by Microsoft since 2018, the site is the largest host of source code in the world, and a crucial part of many companies’ digital infrastructure.

Late last month, GitHub launched a new AI tool, called Copilot. Here’s how chief executive Nat Friedman described it:

A new AI pair programmer that helps you write better code. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code – to help you complete your work faster.

In other words, Copilot will sit on your computer and do a chunk of your coding work for you. There’s a long-running joke in the coding community that a substantial portion of the actual work of programming is searching online for people who’ve solved the same problems as you, and copying their code into your program. Well, now there’s an AI that will do that part for you.
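
To make that concrete, here is a hedged sketch of the workflow: the developer types a comment or a function signature, and Copilot proposes a body assembled from patterns it has seen in public code. The prompt and the suggested completion below are hypothetical illustrations written for this newsletter, not captured output from the tool.

// Hypothetical prompt typed by the developer:
// "given an ISO date string, return how many whole days ago it was"
// The kind of completion a Copilot-style assistant might then suggest:
function daysAgo(isoDate: string): number {
  const then = new Date(isoDate).getTime();
  const elapsedMs = Date.now() - then;
  return Math.floor(elapsedMs / (1000 * 60 * 60 * 24));
}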

And the stunning thing about Copilot is that, for a whole host of common problems … it works. Programmers I have spoken to say it is as stunning as the first time text from GPT-3 began popping up on the web. You may remember GPT-3: the super-powerful text-generation AI that writes paragraphs like:

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

Centaurs

It’s tempting, when imagining how tech will change the world, to think of the future as one where humans are basically unnecessary. As AI systems manage to tackle increasingly complex domains with increasing competence, it’s easy enough to think of them as able to achieve everything a person can, leaving the humans who used to do those jobs with idle hands.

Whether that is a nightmare or a utopia, of course, depends on how you think society would adapt to such a change. Would huge numbers of people be freed to live a life of leisure, supported by the AIs that do their jobs in their stead? Or would they instead find themselves unemployed and unemployable, with their former managers reaping the rewards of the increased productivity per hour worked?

But it’s not always the case that AI is here to replace us. Instead, more and more fields are exploring the possibility of using the technology to work alongside people, extending their abilities, and taking the drudge work from their jobs while leaving them to handle the things that a human does best.

The concept has come to be called a “centaur”, because it produces a hybrid worker with an AI back half and a human front. It’s not as futuristic as it sounds: anyone who’s used autocorrect on an iPhone has, in effect, teamed up with an AI to offload the laborious task of typing correctly.

Often, centaurs can come close to the dystopian vision. Amazon’s warehouse employees, for instance, have been gradually pushed along a very similar path as the company seeks to eke out every efficiency improvement possible. The humans are guided, tracked and assessed throughout the working day, ensuring that they always take the optimal route through the warehouse, pick exactly the right items, and do so at a consistent rate high enough to let the company turn a healthy profit. They’re still employed to do things that only humans can offer – but in this case, that’s “working hands and a low maintenance bill”.

But in other fields, centaurs are already proving their worth. The world of competitive chess has, for years, had a special format for such hybrid players: humans working with the assistance of a chess computer. And, generally, the pairs play better than either would on their own: the computer avoids stupid errors, plays without getting tired, and presents a list of high-value options to the human player, who’s able to inject a dose of unpredictability and lateral thinking into the game.

That’s the future GitHub hopes Copilot will be able to introduce. Programmers who use it can stop worrying about simple, well-documented tasks, like how to send a valid request to Twitter’s API, or how to pull the time in hours and minutes from a system clock, and start focusing their effort on the work that no one else has done.
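
The “system clock” case gives a sense of how small these tasks are: a few lines that countless public repositories already contain in some form. The snippet below is an illustrative sketch written for this newsletter, not Copilot output.

// Read the system clock and format the current time as zero-padded HH:MM.
function currentHoursAndMinutes(): string {
  const now = new Date();
  const hours = String(now.getHours()).padStart(2, "0");
  const minutes = String(now.getMinutes()).padStart(2, "0");
  return `${hours}:${minutes}`;
}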

But …

The reason Copilot is fascinating to me isn’t just its positive potential, though. It’s also that, in one release, the company seems to have fallen into every single trap plaguing the broader AI sector.

Copilot was trained on public data from GitHub’s own platform. That means all of that source code, from hundreds of millions of developers around the world, was used to teach it how to write code based on user prompts.

That’s great if the problem is a simple programming task. It’s less good if the prompt for autocomplete is, say, the secret credentials you use to sign in to a user account. And yet:

GitHub Copilot gave me an [Airbnb] link with a key that still works (and stops working when changing it).

And:

The AI is leaking [sendgrid] API keys that are valid and still functional.

The vast majority of what we call AI today isn’t coded but trained: you give it a great pile of stuff, and tell it to work out for itself the relationships within that stuff. With the vast amount of code in GitHub’s repositories, there are plenty of examples for Copilot to learn what code that checks the time looks like. But there are also plenty of examples for Copilot to learn what an API key accidentally uploaded in public looks like – and to then share it onwards.
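
A deliberately fake sketch of how that kind of leak happens, assuming nothing about the real incidents beyond what the tweets above describe: a developer hard-codes a secret, pushes the repository publicly, and the string becomes just another pattern in the training data.

// Deliberately fake key, shown only to illustrate the pattern a model can memorise.
const SENDGRID_API_KEY = "SG.EXAMPLE-FAKE-KEY-000000000000";
// To the model, a public file like this is indistinguishable from any other code worth learning.
// The safer pattern keeps the secret out of the source entirely (assumes a Node environment):
const apiKeyFromEnv = process.env.SENDGRID_API_KEY;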

Passwords and keys are obviously the worst examples of this sort of leakage, but they point to the underlying concern about a lot of AI technology: is it actually creating things, or is it simply remixing work already done by other humans? And if the latter, should those humans get a say in how their work is used?

On that latter question, GitHub’s answer is a forceful no. “Training machine learning models on publicly available data is considered fair use across the machine learning community,” the company says in an FAQ.

Originally, the company made the much softer claim that doing so was merely “common practice”. But the page was updated after coders around the world complained that GitHub was violating their copyright. Intriguingly, the biggest opposition came not from private companies concerned that their work may have been reused, but from developers in the open-source community, who deliberately build in public to let their work be built upon in turn. Those developers often rely on copyright to ensure that people who use open-source code have to publish what they create – something GitHub didn’t do.

GitHub is probably right on the law, according to law professor James Grimmelmann. But the company isn’t going to be the last to reveal a groundbreaking new AI tool and then face awkward questions over whether it actually has the rights to the data used to train it.

If you want to read more, please subscribe to receive TechScape in your inbox every Wednesday.




‘I hope the world will be safer’, says Molly Russell’s father after inquest – video


Molly Russell’s father has accused the world’s biggest social media firms of ‘monetising misery’ after an inquest ruled that harmful online content contributed to the 14-year-old’s death.

Ian Russell accused Meta, the owner of Facebook and Instagram, of guiding his daughter on a ‘demented trail of life-sucking content’, after the landmark ruling raised the regulatory pressure on social media companies.

The inquest heard on Friday that Molly, from Harrow, north-west London, had viewed large amounts of content related to suicide, depression, self-harm and anxiety on Instagram and Pinterest before she died in November 2017.


Google delays execution of deprecated Chrome extensions • The Register


Google has delayed its browser extension platform transition for enterprise customers, giving those using managed versions of Chrome with the deprecated Manifest v2 (MV2) extensions an extra six months of support.

The Chocolate Factory has also redefined its deadlines for general Chrome users to make the transition to the new platform, called Manifest v3 (MV3), less of a shock to the system.

“Chrome will take a gradual and experimental approach to turning off Manifest V2 to ensure a smooth end-user experience during the phase-out process,” explained David Li, a product manager at Google, in a blog post. “We would like to make sure developers have the information they need, with plenty of time to transition to the new manifest version and to roll out changes to their users.”


Developers, in other words, need more time to rewrite their extension code.

Previously, as of January 2023, Chrome was to stop running MV2 extensions. Enterprise managed Chrome installations had an extra six months with MV2, until June 2023.

The current schedule says MV2 extensions may or may not work in developer-oriented versions of Chrome used outside of enterprises. “Starting in Chrome 112, Chrome may run experiments to turn off support for Manifest V2 extensions in Canary, Dev, and Beta channels,” the timeline says.

And then in June 2023, MV2 extensions may or may not get disabled in any version of Chrome, including the Stable channel used by most people.

New MV2 extensions could no longer be added to the Chrome Web Store as of June 2022, and that remains unchanged under the new roadmap; MV2 extensions already available in the Chrome Web Store can still be downloaded and can still receive updates.

As of June 2023, MV2 extensions will no longer be visible in the store (so they can’t be newly installed, but can still be updated for existing users).

Come January 2024, nothing will be left to chance: the Chrome Web Store will stop accepting updates to MV2 extensions, all MV2 extensions will be removed from the store, and the MV2 usage in enterprises will end.

Li suggests developers make the transition sooner rather than later “because those [MV2] extensions may stop working at any time following the aforementioned dates.”

In recognition of the confusion among developers trying to adapt their extensions to MV3, Li said Google has implemented new APIs and platform improvements and has created a progress page to provide more transparency about the state of the MV2-to-MV3 transition.

Since 2018, Google has been revising the code that defines what browser extensions can do in Chrome. Its outgoing architecture known as Manifest v2 proved too powerful – it could be used by rogue add-ons to steal data, for example – and Google claimed use of those capabilities hindered browser performance. Critics like the EFF have disputed that.

Coincidentally, those capabilities, particularly the ability to intercept and revise network requests based on dynamic criteria, made Manifest v2 useful for blocking content and privacy-violating tracking scripts.

Under the new Manifest v3 regime, extensions have been domesticated. As a result, they appear to use computing resources more efficiently while being less effective at content blocking.
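
The practical difference is easiest to see in the blocking APIs. Under MV2 an extension’s own JavaScript could inspect every network request and decide on the fly whether to cancel it; under MV3 the extension instead declares a bounded set of rules up front and the browser applies them itself. A minimal sketch, assuming the documented chrome.webRequest and chrome.declarativeNetRequest extension APIs and their TypeScript typings:

// Manifest V2 style: extension code runs for every request and can block dynamically.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => ({ cancel: details.url.includes("tracker.example") }),
  { urls: ["<all_urls>"] },
  ["blocking"]
);
// Manifest V3 style: the extension registers declarative rules; Chrome applies them
// without running extension code per request.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1],
  addRules: [{
    id: 1,
    priority: 1,
    action: { type: "block" },
    condition: { urlFilter: "tracker.example", resourceTypes: ["script", "xmlhttprequest"] },
  }],
});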


Whether or not this results in meaningful performance improvement, the MV3 change has been championed by Google for Chrome and the open source Chromium project, and is being supported by those building atop Chromium, like Microsoft Edge, as well as Apple’s WebKit-based Safari and Mozilla’s Gecko-based Firefox.

However, Brave, Mozilla, and Vivaldi have said they intend to continue supporting Manifest v2 extensions for an indeterminate amount of time. How long that will last is anyone’s guess.

Brave, like other privacy-oriented companies and advocacy groups, has made it clear this regime change is not to its liking. “With Manifest V3, Google is harming privacy and limiting user choice,” the developer said via Twitter. “The bottom line, though, is that Brave will still continue to offer leading protection against invasive ads and trackers.”


Google, on its timeline, suggests MV3 is approaching “full feature parity with Manifest V2.”

Extension developers appear to be skeptical about that. On Friday, in response to Google’s timeline revision posted to the Chromium Extension Google Group, a developer forum member who goes by the pseudonym “wOxxOm” slammed Google for posts full of corporate lingo about safety and security and pushed back against its statement about feature parity.

“[T]his definitely sounds reasonable if you don’t know the context, but given the subsequently plotted timeline it becomes a gross exaggeration and a borderline lie, because with the progress rate we all observed over the past years it’ll take at least several years more for MV3 to become reliable and feature-rich enough to replace MV2, not half a year or a year,” wOxxOm posted.

“Neither the issue list nor the announcement acknowledge that MV3 is still half-broken and unusable for anything other than a beta test due to its unreliable registration of service workers that break extensions completely for thousands of users, soon for millions because no one in Chromium has yet found out the exact reason of the bug, hence they can’t be sure they’ll fix it in the next months.”
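
The complaint about service workers concerns MV3’s background model. MV2 allowed a persistent background page that kept its state in memory for the life of the browser; an MV3 background is an event-driven service worker that Chrome may shut down between events, so an extension has to persist anything it needs and re-read it when the worker wakes. A minimal sketch of that pattern, assuming the documented chrome.runtime, chrome.storage and chrome.action APIs:

// MV3 background service worker: no persistent page, so state lives in chrome.storage.
chrome.runtime.onInstalled.addListener(() => {
  chrome.storage.local.set({ installedAt: Date.now() });
});
chrome.action.onClicked.addListener(async (tab) => {
  // The worker may have been restarted since installation; re-read state on wake.
  const { installedAt } = await chrome.storage.local.get("installedAt");
  console.log(`Installed at ${installedAt}; toolbar icon clicked on tab ${tab.id}`);
});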

This may not be the last time Google revises its transition timeline.




Irish Research Council pumps €27m to fund next generation of researchers


A total of 316 awardees of the IRC’s Government of Ireland programme will receive funding to conduct ‘pioneering’ research.

Postgraduate and postdoctoral researchers in Ireland are set to get €27m in funding from the Irish Research Council (IRC) through its flagship Government of Ireland programme.

In an announcement today (30 September), the IRC said that a total of 316 Government of Ireland awards will be given to researchers in the country, including 239 postgraduate scholarships and 77 postdoctoral fellowships.

Awardees under the scheme will conduct research on a broad range of topics, from machine translation and social media to protecting wild bee populations and bioplastics.

“The prestigious awards recognise and fund pioneering research projects along with addressing new and emerging fields of research that introduce creative and innovative approaches across all disciplines, including the sciences, humanities and the arts,” said IRC director Louise Callinan.

Awardees

One of the science-focused postgraduate awardees, University of Galway’s Cherrelle Johnson, is working on the long-term sustainability of bioplastics as an alternative to fossil fuel-based plastics.

Another, Royal College of Surgeons in Ireland’s Tammy Strickland, is studying the role of the circadian rhythm, or the sleep-wake cycle, of immune cells in the brain in epilepsy.

Khetam Al Sharou of Dublin City University, one of the postdoctoral researchers to win the award, is looking into the use of machine translation in social media and the associated risks of information distortion.

Meanwhile, Robert Brose from the Dublin Institute for Advanced Studies is investigating the particles and radiation that are emitted by high-energy sources in our Milky Way to try and find the most likely sources of life.

Diana Carolina Pimentel Betancurt from Teagasc, the state agency providing research and development in agriculture and related fields, is looking for natural probiotics in native honeybees to mitigate the effect of pesticides.

“Funding schemes like the IRC’s Government of Ireland programmes are vitally important to the wider research landscape in Ireland, as they ensure that researchers are supported at an early stage of their career and are given an opportunity to direct their own research,” Callinan said.

53 early-career researchers across Ireland got €28.5m in funding last month from the SFI-IRC Pathway programme, a new collaborative initiative between Science Foundation Ireland and the IRC. SFI and IRC are expected to merge to form one funding body in the coming years.


