Hello and welcome to the debut issue of TechScape, the Guardian’s newsletter on all things tech, and sometimes things not-tech if they’re interesting enough. I can’t tell you how excited I am to have you here with me, and I hope between us we can build not just a newsletter, but a news community.
Sometimes there’s a story that just sums up all the hopes and fears of its entire field. Here’s one.
GitHub is a platform that lets developers collaborate on coding with colleagues, friends and strangers around the world, and host the results. Owned by Microsoft since 2018, the site is the largest host of source code in the world, and a crucial part of many companies’ digital infrastructure.
Late last month, GitHub launched a new AI tool, called Copilot. Here’s how chief executive Nat Friedman described it:
A new AI pair programmer that helps you write better code. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code – to help you complete your work faster.
In other words, Copilot will sit on your computer and do a chunk of your coding work for you. There’s a long-running joke in the coding community that a substantial portion of the actual work of programming is searching online for people who’ve solved the same problems as you, and copying their code into your program. Well, now there’s an AI that will do that part for you.
And the stunning thing about Copilot is that, for a whole host of common problems … it works. Programmers I have spoken to say it is as startling as the first time text from GPT-3 began popping up on the web. You may remember GPT-3: the super-powerful text-generation AI that writes paragraphs like:
The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
Centaurs

It’s tempting, when imagining how tech will change the world, to think of the future as one where humans are basically unnecessary. As AI systems tackle increasingly complex domains, with increasing competence, it’s easy enough to think of them as able to achieve everything a person can, leaving the humans once employed to do that work with idle hands.
Whether that is a nightmare or a utopia, of course, depends on how you think society would adapt to such a change. Would huge numbers of people be freed to live a life of leisure, supported by the AIs that do their jobs in their stead? Or would they instead find themselves unemployed and unemployable, with their former managers reaping the rewards of the increased productivity of each hour worked?
But it’s not always the case that AI is here to replace us. Instead, more and more fields are exploring the possibility of using the technology to work alongside people, extending their abilities, and taking the drudge work from their jobs while leaving them to handle the things that a human does best.
The concept’s come to be called a “centaur” – because it leads to a hybrid worker who has an AI back half and human front. It’s not as futuristic as it sounds: anyone who’s used autocorrect on an iPhone has, in effect, teamed up with an AI to offload the laborious task of typing correctly.
Often, centaurs can come close to the dystopian vision. Amazon’s warehouse employees, for instance, have been gradually pushed along a very similar path as the company seeks to eke out every efficiency improvement possible. The humans are guided, tracked and assessed throughout the working day, ensuring that they always take the optimal route through the warehouse, pick exactly the right items, and do so at a consistent rate high enough to let the company turn a healthy profit. They’re still employed to do things that only humans can offer – but in this case, that’s “working hands and a low maintenance bill”.
But in other fields, centaurs are already proving their worth. The world of competitive chess has, for years, had a special format for such hybrid players: humans working with the assistance of a chess computer. And, generally, the pairs play better than either would on their own: the computer avoids stupid errors, plays without getting tired, and presents a list of high-value options to the human player, who’s able to inject a dose of unpredictability and lateral thinking into the game.
That’s the future GitHub hopes Copilot will be able to introduce. Programmers who use it can stop worrying about simple, well-documented tasks, like how to send a valid request to Twitter’s API, or how to pull the time in hours and minutes from a system clock, and start focusing their effort on the work that no one else has done.
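To make that concrete: the clock task is exactly the sort of boilerplate at issue. Here is a minimal Python sketch of the kind of snippet Copilot is meant to autocomplete (the function name is my own, for illustration):

```python
from datetime import datetime

def current_time_hm() -> str:
    """Return the current local time formatted as HH:MM."""
    return datetime.now().strftime("%H:%M")

print(current_time_hm())
```

Trivial to write, but also exactly the kind of thing developers look up again and again, which is why it is such an easy win for an autocomplete tool.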
But …

The reason Copilot fascinates me isn’t just its positive potential, though. It’s also that, in one release, the company seems to have fallen into every single trap plaguing the broader AI sector.
Copilot was trained on public data from GitHub’s own platform. That means all of that source code, from hundreds of millions of developers around the world, was used to teach it how to write code based on user prompts.
That’s great if the problem is a simple programming task. It’s less good if the prompt for autocomplete is, say, the secret credentials you use to sign in to a user account. And yet:
GitHub Copilot gave me an [Airbnb] link with a key that still works (and stops working when changing it).
The AI is leaking [sendgrid] API keys that are valid and still functional.
The vast majority of what we call AI today isn’t coded but trained: you give it a great pile of stuff, and tell it to work out for itself the relationships between that stuff. With the vast sum of code available in GitHub’s repositories, there are plenty of examples for Copilot to learn what code that checks the time looks like. But there are also plenty of examples for Copilot to learn what an API key accidentally uploaded in public looks like – and to then share it onwards.
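A hypothetical Python sketch shows the pattern at issue; the key value and names below are invented for illustration, and the second half is the standard fix of keeping secrets out of source code entirely:

```python
import os

# Bad: a secret embedded directly in the source. Committed to a public
# repository, lines like this are exactly what a model trained on that
# code learns to reproduce. (The value here is a made-up placeholder.)
HARDCODED_KEY = "SG.placeholder-not-a-real-key"

# Better: read the secret from the environment at runtime, so it never
# appears in the repository at all.
def sendgrid_key() -> str:
    return os.environ.get("SENDGRID_API_KEY", "")
```

A model trained on public repositories has seen countless instances of the first pattern, which is how real keys can end up in its suggestions.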
Passwords and keys are obviously the worst examples of this sort of leakage, but they point to the underlying concern about a lot of AI technology: is it actually creating things, or is it simply remixing work already done by other humans? And if the latter, should those humans get a say in how their work is used?
On that latter question, GitHub’s answer is a forceful no. “Training machine learning models on publicly available data is considered fair use across the machine learning community,” the company says in an FAQ.
Originally, the company made the much softer claim that doing so was merely “common practice”. But the page was updated after coders around the world complained that GitHub was violating their copyright. Intriguingly, the biggest opposition came not from private companies concerned that their work may have been reused, but from developers in the open-source community, who deliberately build in public to let their work be built upon in turn. Those developers often rely on copyright to ensure that people who use open-source code have to publish what they create – something GitHub didn’t do.
GitHub is probably right on the law, according to law professor James Grimmelmann. But the company isn’t going to be the last to reveal a groundbreaking new AI tool and then face awkward questions over whether it actually has the rights to the data used to train it.
If you want to read more, please subscribe to receive TechScape in your inbox every Wednesday.
Facebook’s semi-independent oversight board says it will review the company’s “XCheck” system, an internal program that has exempted high-profile users from some or all of its rules.
The decision follows an investigation by the Wall Street Journal that revealed that reviews of posts by well-known users such as celebrities, politicians and journalists are steered into the separate system.
Under the program, some users are “whitelisted”, or not subject to enforcement action, while others are allowed to post material that violates Facebook rules pending content reviews that often do not take place. The XCheck system, for example, allowed Brazilian footballer Neymar to post nude pictures of a woman who had accused him of rape, according to the report.
Users were identified for additional scrutiny based on criteria such as being “newsworthy”, “influential or popular” or “PR risky”, the Wall Street Journal found. By 2020 there were 5.8 million users on the XCheck list, according to the newspaper.
The oversight board said on Tuesday that it expects to have a briefing with Facebook on the system and “will be reporting what we hear from this” as part of a report it will publish in October.
The board may also make other recommendations, although Facebook is not bound to follow these.
The Journal’s report, the board said, has drawn “renewed attention to the seemingly inconsistent way that the company makes decisions, and why greater transparency and independent oversight of Facebook matters so much for users”.
Facebook told the Journal in response to its investigation that the system “was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding”. The company added that criticism of it was “fair” and that it was working to fix it.
A representative for Facebook declined to comment to the Associated Press on the oversight board’s decision.
The Philippines has become the latest nation to impose a digital services tax.
Such taxes require the likes of Netflix and Spotify to pay local sales taxes even though their services are delivered – legally, notionally, and physically – from beyond local jurisdiction.
The Philippines has chosen a rate of 12 per cent, mirroring local value added taxes.
“We have now clarified that digital services and the goods and services traded through digital service providers should generally be subject to VAT. This is just a matter of common tax sense,” said Joey Salceda, a member of the Philippines’ House of Representatives and a backer of the change to the nation’s tax code.
Salceda tied the change to post-pandemic economic recovery.
“If brick and mortar establishments, which are the hardest-hit by the pandemic, have to pay VAT, the giants of e-commerce shouldn’t be exempt,” he said.
However, local companies that are already exempt from VAT by virtue of low turnover won’t be caught by the extension of the tax into the virtual realm.
Salceda’s amendments are designed to catch content streamers, but also online software sales – including mobile apps – plus SaaS and hosted software. The Philippine News Agency’s report on the amendment’s passage into law even mentions firewalls as subject to VAT.
But the taxes are controversial because they are seen as a unilateral response to the wider issue of multinational companies picking the jurisdictions in which they’ll pay tax – a practice that erodes national tax bases. The G7 group of nations, and the OECD, think the most appropriate response is a coordinated shift of tax liabilities to the nations where goods and services are acquired and consumed – that harmonising global tax laws to make big tech pay up wherever it does business is a better plan than digital services taxes.
While returning to the office is now possible for many, some workers might still want the option of flexible working some of the time. Here’s how to broach the subject.
This week marked the beginning of a phased and staggered return to workplaces for many employees in Ireland.
It essentially marked the first official green light for employers to ready their offices and start putting plans in place for their staff’s return.
However, HR body CIPD Ireland urged employers to be mindful of anxious workers as they face “another round of upheaval” with the return to offices.
So, while employers are finalising plans about how, where and when their teams will work, some employees may be wondering how to go about expressing their preference, worried that it’s not in line with what the company wants.
While there have been plenty of discussions and remote work advocates calling for leaders to be more flexible and recognise that the future of work will be hybrid, the reality for individual employees can feel very different.
While big-picture debates around the right to request remote work are happening, how do you ask for what you want in the here and now, when your boss is determined to have a full return to the office?
Explain your reasons
If remote or flexible working isn’t something your boss is already willing to give you, then you must treat it like a pay rise request.
Explain clearly and concisely the reasons why you want more flexibility, how it will benefit you and make you a more engaged, happier worker.
While family commitments might be an important factor, so too is work-life balance and getting rid of long commutes. And, while there is light at the end of the pandemic tunnel, Covid-19 is still a very real concern, so don’t be afraid to express your reservations about this too.
Make a business case
When you ask for a pay increase, you provide proof of the value you have added to the company. Take the same approach here and explain to your boss how flexible working will actually be beneficial to them.
Some managers who resist remote working might still have an office-based mentality where presenteeism is key. But there are numerous studies that show that knowledge workers are more productive when working remotely.
And, when done as a purposeful business strategy, remote working can help teams prioritise work more clearly as well as allowing for more downtime and work-life balance.
Depending on your manager, your team and the work you do, it may not be feasible to ask to work from home five days a week.
It’s important that you are realistic about asking for what you want, and also realistic about what you can deliver in return. Remote workers can be more productive, but they can also be in danger of burning out, so be thoughtful about what strategy will work best for both you and your manager.
Listen to their perspective
While conversations around remote working appear to be mostly positive, it can be a different situation behind the office doors.
Many managers and leaders are still hesitant about moving to a fully flexible working strategy and this can lead to workers feeling like they are not being listened to.
However, one of the best ways to combat that hesitancy from managers is to listen to their concerns and address them in a problem-solving manner.
Being able to alleviate some of your manager’s worries might make them more amenable to allowing for more flexibility.
Make expectations clear
If you do convince your boss to allow for a more flexible working plan than what they had originally considered, it’s important that both sides understand what is expected.
Without clearly defining the outcomes of the new set-up, misunderstandings can lead to disappointments and feelings of mistrust in the idea of flexible working.