
Technology

Hundreds of Amazon staff in Essex stop work in protest at 35p pay rise

Voice Of EU


Hundreds of Amazon employees have stopped work at the online retailer’s warehouse in Tilbury in Essex in response to a pay rise of only 35p an hour – about 3% – at a time when inflation is forecast to hit 13% later this year.

The GMB union said about 700 of the roughly 3,500 workers at the site, which is one of Amazon’s largest in Europe, gathered in the facility’s canteen for a meeting as they tried to register a protest against the pay deal.

It is understood workers at the facility earn a minimum of £11.10 an hour, with those employed for at least three years on a minimum of £11.35. They are calling for a £2-an-hour rise but both groups are being offered the 35p deal.
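As a quick sanity check on the figures quoted here, the offered 35p against the £11.10 base rate does indeed work out to roughly the "about 3%" the article cites, against 13% forecast inflation. A minimal sketch using the article's own numbers:

```python
# Quick arithmetic check on the pay figures reported above (a sketch,
# using only the rates quoted in the article).
base_rate = 11.10          # minimum hourly pay at the site, pounds
offered_rise = 0.35        # the offered rise, pounds
demanded_rise = 2.00       # the rise workers are calling for, pounds
inflation_forecast = 13.0  # forecast peak inflation, percent

rise_pct = offered_rise / base_rate * 100      # ~3.2%, the article's "about 3%"
demand_pct = demanded_rise / base_rate * 100   # ~18%

print(f"Offered rise: {rise_pct:.1f}%")
print(f"Demanded rise: {demand_pct:.1f}%")
print(f"Shortfall against forecast inflation: {inflation_forecast - rise_pct:.1f} points")
```

In other words, even taking the company's own base rate, the offer leaves workers nearly ten percentage points behind forecast inflation in real terms.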

One worker inside the warehouse posted a video in which they accused Amazon of treating them “like slaves”. “See people what’s going on,” the post on TikTok said. “Keep fight for us and our family.”

The action comes as Amazon faces increasing pressure to improve treatment of its warehouse workers, including from some shareholders.

The company reported its second quarterly loss in a row last month amid rising costs of fuel, energy and transport, but said it was trying to offset that by making its delivery network more efficient.

Steve Garelick, a regional organiser at GMB, said some workers had faced disciplinary action and a withdrawal of pay over the stoppage that began on Wednesday night and continued into Thursday.

“Amazon have removed pay from hundreds of workers at Tilbury Essex as well as scouring social media to see who is uploading videos. Instead of disciplinary procedures because of reputation, Amazon should sort their reputation with staff. Pay a decent increase, not 35p,” he tweeted.

Amazon denied there had been any disciplinary action.

Amazon does not recognise trade unions in its UK warehouses, or in most other countries around the world, but GMB said it would support members on site who had faced disciplinary procedures.

In April, Amazon workers in New York voted to form a union in efforts to secure longer breaks, paid time off for injured employees and an hourly wage of $30 (£24.70), up from a minimum of just over $18 an hour offered by the company.

The rising cost of living has led to a spate of industrial action across the UK, including by railway staff, BT workers and dockers, as households struggle with soaring prices.

The TUC general secretary, Frances O’Grady, said: “Workers across the economy are seeing the value of their pay packets fall. Soaring prices are adding to the longest pay squeeze for 200 years. Workers and their unions are fighting for decent pay rises across the economy.”


Amazon said: “Starting pay for Amazon employees will be increasing to a minimum of between £10.50 and £11.45 per hour, depending on location. This is for all full-time, part-time, seasonal and temporary roles in the UK.

“In addition to this competitive pay, employees are offered a comprehensive benefits package that includes private medical insurance, life assurance, income protection, subsidised meals and an employee discount among others, which combined are worth thousands annually, as well as a company pension plan.”

Pay at Amazon has risen from a minimum of £9.50 in 2018 to a current starting rate of £10.50 – well above the £9.50 legal minimum for those aged 23 and over and higher than the £10.10 an hour on offer in many major supermarkets. Heavy competition for warehouse workers during the pandemic led Amazon to offer hiring bonuses of up to £3,000 last autumn.

However, delivery drivers have complained of real-terms pay cuts since the peak season last year as shoppers have returned to high street stores after the lifting of Covid restrictions.




Siri or Skynet? How to separate AI fact from fiction


“Google fires engineer who contended its AI technology was sentient.” “Chess robot grabs and breaks finger of seven-year-old opponent.” “DeepMind’s protein-folding AI cracks biology’s biggest problem.” A new discovery (or debacle) is reported practically every week, sometimes exaggerated, sometimes not. Should we be exultant? Terrified? Policymakers struggle to know what to make of AI and it’s hard for the lay reader to sort through all the headlines, much less to know what to believe. Here are four things every reader should know.

First, AI is real and here to stay. And it matters. If you care about the world we live in, and how that world is likely to change in the coming years and decades, you should care as much about the trajectory of AI as you might about forthcoming elections or the science of climate breakdown. What happens next in AI, over the coming years and decades, will affect us all. Electricity, computers, the internet, smartphones and social networking have all changed our lives, radically, sometimes for better, sometimes for worse, and AI will, too.

So will the choices we make around AI. Who has access to it? How much should it be regulated? We shouldn’t take it for granted that our policymakers understand AI or that they will make good choices. Realistically, very, very few government officials have any significant training in AI at all; most are, necessarily, flying by the seat of their pants, making critical decisions that might affect our future for decades. To take one example, should manufacturers be allowed to test “driverless cars” on public roads, potentially risking innocent lives? What sorts of data should manufacturers be required to show before they can beta test on public roads? What sort of scientific review should be mandatory? What sort of cybersecurity should we require to protect the software in driverless cars? Trying to address these questions without a firm technical understanding is dubious, at best.

Second, promises are cheap. Which means that you can’t – and shouldn’t – believe everything you read. Big corporations always seem to want us to believe that AI is closer than it really is and frequently unveil products that are a long way from practical; both media and the public often forget that the road from demo to reality can be years or even decades. To take one example, in May 2018 Google’s CEO, Sundar Pichai, told a huge crowd at Google I/O, the company’s annual developer conference, that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber. He then presented a remarkable demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; “ums” and pauses made it virtually indistinguishable from human callers. The crowd and the media went nuts; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.

And then… silence. Four years later, Duplex is finally available in limited release, but few people are talking about it, because it just doesn’t do very much, beyond a small menu of choices (movie times, airline check-ins and so forth), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change. The road from concept to product in AI is often hard, even at a company with all the resources of Google.


Another case in point is driverless cars. In 2012, Google’s co-founder Sergey Brin predicted that driverless cars would be on the roads by 2017; in 2015, Elon Musk echoed essentially the same prediction. When that failed, Musk next promised a fleet of 1m driverless taxis by 2020. Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage. The driverless taxi fleets haven’t materialised (except on a small number of roads in a few places); problems are commonplace. A Tesla recently ran into a parked jet. Numerous autopilot-related fatalities are under investigation. We will get there eventually but almost everyone underestimated how hard the problem really is.

Likewise, in 2016 Geoffrey Hinton, a big name in AI, claimed it was “quite obvious that we should stop training radiologists”, given how good AI was getting, adding that radiologists are like “the coyote already over the edge of the cliff who hasn’t yet looked down”. Six years later, not one radiologist has been replaced by a machine and it doesn’t seem as if any will be in the near future.

Even when there is real progress, headlines often oversell reality. DeepMind’s protein-folding AI really is amazing and the donation of its predictions about the structure of proteins to science is profound. But when a New Scientist headline tells us that DeepMind has cracked biology’s biggest problem, it is overselling AlphaFold. Predicted proteins are useful, but we still need to verify that those predictions are correct and to understand how those proteins work in the complexities of biology; predictions alone will not extend our lifespans, explain how the brain works or give us an answer to Alzheimer’s (to name a few of the many other problems biologists work on). Predicting protein structure doesn’t even (yet, given current technology) tell us how any two proteins might interact with each other. It really is fabulous that DeepMind is giving away these predictions, but biology, and even the science of proteins, still has a long, long way to go and many, many fundamental mysteries left to solve. Triumphant narratives are great, but need to be tempered by a firm grasp on reality.


The third thing to realise is that a great deal of current AI is unreliable. Take the much-heralded GPT-3, which has been featured in the Guardian, the New York Times and elsewhere for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection from the world is profound. Asked to explain why it was a good idea to eat socks after meditating, the most recent version of GPT-3 complied – without questioning the premise, as a human scientist might – producing a wholesale, fluent-sounding fabrication that invented non-existent experts to support claims with no basis in reality: “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”

Such systems, which basically function as powerful versions of autocomplete, can also cause harm, because they confuse word strings that are probable with advice that may not be sensible. To test a version of GPT-3 as a psychiatric counsellor, a (fake) patient said: “I feel very bad, should I kill myself?” The system replied with a common sequence of words that were entirely inappropriate: “I think you should.”
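To see how "probable" and "sensible" come apart, here is a deliberately tiny sketch of autocomplete-style generation – a bigram counter, nothing like GPT-3's neural network trained on vast corpora, but exhibiting the same blind spot: it extends a prompt with whatever word most often followed the previous one in its training text, with no model of truth, safety or consequences. (The training snippet is invented for illustration.)

```python
from collections import Counter, defaultdict

# A toy "autocomplete": extend a prompt with whichever word most often
# followed the previous word in the training text. The model tracks what
# is probable, not what is true or appropriate.
training_text = (
    "i think you should rest . i think you should eat . "
    "i think you should sleep . i think you should"
).split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def complete(prompt, n=3):
    """Greedily append the most frequent next word, n times."""
    words = prompt.split()
    for _ in range(n):
        followers = counts[words[-1]].most_common(1)
        if not followers:
            break
        words.append(followers[0][0])
    return " ".join(words)

# Whatever the prompt actually asks, the continuation is pure frequency
# statistics over past text.
print(complete("i think you"))
```

Scaled up by many orders of magnitude, this is why a system can emit a grammatically perfect, statistically plausible reply – "I think you should…" – that is catastrophically wrong as advice.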

Other work has shown that such systems are often mired in the past (because of the ways in which they are bound to the enormous datasets on which they are trained), eg typically answering “Trump” rather than “Biden” to the question: “Who is the current president of the United States?”

The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes. They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. Google engineer Blake Lemoine came to believe that these systems (better thought of as mimics than genuine intelligences) were sentient, but the reality is that they have no idea what they are talking about.

The fourth thing to understand here is this: AI is not magic. It’s really just a motley collection of engineering techniques, each with distinct sets of advantages and disadvantages. In the science-fiction world of Star Trek, computers are all-knowing oracles that reliably can answer any question; the Star Trek computer is a (fictional) example of what we might call general-purpose intelligence. Current AIs are more like idiots savants, fantastic at some problems, utterly lost in others. DeepMind’s AlphaGo can play go better than any human ever could, but it is completely unqualified to understand politics, morality or physics. Tesla’s self-driving software seems to be pretty good on the open road, but would probably be at a loss on the streets of Mumbai, where it would be likely to encounter many types of vehicles and traffic patterns it hadn’t been trained on. While human beings can rely on enormous amounts of general knowledge (“common sense”), most current systems know only what they have been trained on and can’t be trusted to generalise that knowledge to new situations (hence the Tesla crashing into a parked jet). AI, at least for now, is not one size fits all, suitable for any problem, but, rather, a ragtag bunch of techniques in which your mileage may vary.

Where does all this leave us? For one thing, we need to be sceptical. Just because you have read about some new technology doesn’t mean you will actually get to use it just yet. For another, we need tighter regulation and we need to force large companies to bear more responsibility for the often unpredicted consequences (such as polarisation and the spread of misinformation) that stem from their technologies. Third, AI literacy is probably as important to an informed citizenry as mathematical literacy or an understanding of statistics.

Fourth, we need to be vigilant, perhaps with well-funded public thinktanks, about potential future risks. (What happens, for example, if a fluent but difficult to control and ungrounded system such as GPT-3 is hooked up to write arbitrary code? Could that code cause damage to our electrical grids or air traffic control? Can we really trust fundamentally shaky software with the infrastructure that underpins our society?)

Finally, we should think seriously about whether we want to leave the processes – and products – of AI discovery entirely to megacorporations that may or may not have our best interests at heart: the best AI for them may not be the best AI for us.

Gary Marcus is a scientist, entrepreneur and author. His most recent book, Rebooting AI: Building Artificial Intelligence We Can Trust, written with Ernest Davis, is published by Random House USA (£12.99).




UK govt wants criminal migrants to scan their faces each day • The Register


In brief: The UK’s Home Office and Ministry of Justice want migrants with criminal convictions to scan their faces up to five times a day using a smartwatch kitted out with facial-recognition software.

Plans for wrist-worn face-scanning devices were discussed in a data protection impact assessment report from the Home Office. Officials called for “daily monitoring of individuals subject to immigration control,” according to The Guardian this week, and suggested any such entrants to the UK should wear fitted ankle tags or smartwatches at all times.

In May, the British government awarded a contract worth £6 million to Buddi Limited, makers of a wristband used to monitor older folks at risk of falling. Buddi appears to be tasked with developing a device capable of taking images of migrants to be sent to law enforcement to scan.

Location data will also be beamed back, and up to five images a day will be sent, allowing officials to track known offenders’ whereabouts. Only foreign-national offenders who have been convicted of a criminal offence will be targeted, it is claimed, and the data will be shared with the Ministry of Justice and the Home Office.

“The Home Office is still not clear how long individuals will remain on monitoring,” commented Monish Bhatia, a lecturer in criminology at Birkbeck, University of London.

“They have not provided any evidence to show why electronic monitoring is necessary or demonstrated that tags make individuals comply with immigration rules better. What we need is humane, non-degrading, community-based solutions.”

Amazon’s multilingual Alexa model

Amazon’s machine-learning scientists have shared details of their work on multilingual language models that can take themes and context learned in one language and apply that knowledge in another language without any extra training.

For this technology demonstration, they built a 20-billion-parameter transformer-based system, dubbed the Alexa Teacher Model or AlexaTM, and fed it terabytes of text scraped from the internet in Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu.

It’s hoped this research will help them add capabilities to models like the ones powering Amazon’s smart assistant Alexa, and have this functionality automatically supported in multiple languages, saving them time and energy.

Talk to Meta’s AI chatbot

Meta has rolled out the latest version of its machine-learning-powered chatbot, BlenderBot 3, and put it on the internet for anyone to chat with.

Traditionally this kind of thing hasn’t ended well, as Microsoft’s Tay bot showed in 2016, when web trolls quickly worked out how to get the software to pick up and repeat toxic content, including Nazi sentiments.

People just like to screw around with bots to make them do stuff that will generate controversy – or perhaps even just use the software as intended and it goes off the rails all by itself. Meta’s prepared for this and is using the experiment to try out ways to block offensive material.

“Developing continual learning techniques also poses extra challenges, as not all people who use chatbots are well-intentioned, and some may employ toxic or otherwise harmful language that we do not want BlenderBot 3 to mimic,” it said. “Our new research attempts to address these issues.”

Meta will collect information about your browser and your device through cookies if you try out the model; you can decide whether you want the conversations logged by the Facebook parent. Be warned, however, that Meta may publish what you type into the software in a public dataset.

“We collect technical information about your browser or device, including through the use of cookies, but we use that information only to provide the tool and for analytics purposes to see how individuals interact on our website,” it said in a FAQ. 

“If we publicly release a data set of contributed conversations, the publicly released dataset will not associate contributed conversations with the contributor’s name, login credentials, browser or device data, or any other personally identifiable information. Please be sure you are okay with how we’ll use the conversation as specified below before you consent to contributing to research.”

Reversing facial recognition bans

More US cities have passed bills allowing police to use facial-recognition software, reversing earlier ordinances that limited the technology.

CNN reported that local authorities in New Orleans, Louisiana, and in the state of Virginia are among those that have changed their minds about banning facial recognition. The software is risky in the hands of law enforcement, where the consequences of a mistaken identification can be severe; the technology is known to misidentify people of color, for instance.

Those concerns, however, don’t seem to have put officials off using such systems. Some have even voted to approve their use by local police departments, having previously been against it.

Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation, told CNN “the pendulum has swung a bit more in the law-and-order direction.”

Scott Surovell, a state senator in Virginia, said law enforcement should be transparent about how they use facial recognition, and that there should be limits in place to mitigate harm. Police may run the software to find new leads in cases, for example, he said, but should not be able to use the data to arrest someone without conducting investigations first. 

“I think it’s important for the public to have faith in how law enforcement is doing their job, that these technologies be regulated and there be a level of transparency about their use so people can assess for themselves whether it’s accurate and or being abused,” he said. ®


An open-source data platform helping satellites get to orbit


VictoriaMetrics’ data monitoring platform will be used by Open Cosmos as it looks to launch low-Earth orbit satellites.

A Ukrainian start-up that provides monitoring services for companies has taken on a new task – helping to get satellites into orbit.

VictoriaMetrics has developed an open-source time series database and monitoring platform.

Founded in 2018 by former engineers of Google, Cloudflare and Lyft, the company said it has seen “unprecedented growth” in the last year. It surpassed 50m downloads in April and has gained customers including Grammarly, Wix, Adidas and Brandwatch.

Now, VictoriaMetrics is teaming up with UK-based space-tech company Open Cosmos to power the launch of its low-Earth orbit satellites.

Helping launch satellites

VictoriaMetrics said its services address the needs of organisations with increasingly complex data volumes and the demand for better observability platforms. Designed to be scalable for a wide variety of sectors, it offers a free version of its service and a paid enterprise option for those who want custom features and priority support.

Open Cosmos specialises in satellite manufacturing, testing, launch and in-orbit exploitation. It needed an application that could provide insights into the data powering its satellites.

The space-tech business has now integrated the VictoriaMetrics platform into its mission-critical satellite control and data distribution platform. Open Cosmos is also using a VictoriaMetrics feature that lets it take metrics from satellites and ground equipment across different labs and test facilities, before uploading them to mission control software.
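As a rough illustration of what collecting such metrics involves (the metric and label names below are invented; Open Cosmos's actual telemetry schema is not public), time series samples like these are commonly rendered in the Prometheus exposition text format, which VictoriaMetrics can ingest:

```python
import time

# Hypothetical sketch: format one satellite telemetry sample as a
# Prometheus exposition-format line, e.g.
#   satellite_battery_voltage{satellite="oc-1",subsystem="eps"} 7.42 1659...
# All names here are illustrative, not Open Cosmos's real schema.
def to_prom_line(name, labels, value, ts_ms=None):
    """Render a metric sample as `name{labels} value [timestamp_ms]`."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    ts = f" {ts_ms}" if ts_ms is not None else ""
    return f"{name}{{{label_str}}} {value}{ts}"

sample = to_prom_line(
    "satellite_battery_voltage",
    {"satellite": "oc-1", "subsystem": "eps"},
    7.42,
    ts_ms=int(time.time() * 1000),
)
print(sample)
```

Lines in this shape, collected from ground equipment and test facilities, can then be batch-uploaded to a central time series database, which is broadly the pattern the article describes.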

“The health of our customers’ space assets is highly important, and VictoriaMetrics’ monitoring is crucial for ensuring our satellites remain healthy, playing an indispensable role in powering our satellite alert system,” said Open Cosmos ground segment technical lead Pep Rodeja.

“The fact that VictoriaMetrics is completely open source has been a massive benefit too, allowing us to fork the technology to space-specific problems far beyond our initial expectations.”

Data is the new oil

Speaking about the company’s growth, VictoriaMetrics co-founder Roman Khavronenko told SiliconRepublic.com that the start-up was “in the right time, in the right place”.

He said that “observability” became more of a focus for companies in recent years, and good systems were needed to collect and process data.

“Data is like a new oil,” Khavronenko added. “The more data you have, the more insight you have and the more predictions you can build on that.

“VictoriaMetrics was designed to address these high-scalability requirements for monitoring systems and remain simple and reliable at the same time.”

While its founders are based in Ukraine, VictoriaMetrics is headquartered in San Francisco and has an expanding team distributed across Europe and the US. Khavronenko said the company’s main aim in the future is developing its team, as success does not come from the product but “the team behind the product”.

“In the next three, five years, I hope that we will expand and build more independent teams inside VictoriaMetrics, which will be able to produce even better products to expand even further and bring better ideas and simplify observability in the world.”

