Megacorp Amazon wants to buy iRobot, a company that is best known for its autonomous vacuum cleaner, Roomba.
In a statement published alongside its calendar second quarter financial results, iRobot confirmed Amazon had bid $61 per share in an all-cash transaction totalling around $1.7 billion.
Current iRobot CEO Colin Angle will stay on board after the sale, which is still pending shareholder and regulatory approval.
“Amazon shares our passion for building thoughtful innovations that empower people to do more at home, and I cannot think of a better place for our team to continue our mission,” Angle said.
Amazon acquired Kiva Systems, a robotics startup, for $775 million in 2012, and demoed an “autonomous mobile robot,” which resembles a Roomba, earlier this year.
iRobot introduced the Roomba in 2002. Most recently, the company released iRobot OS, which expanded on its existing Genius Home Intelligence platform with new automations and other features. New home automation capabilities are right up Amazon’s lane, and the purchase of iRobot will mean Alexa also gets more smart home data to draw from.
Among the bad news in iRobot’s financial report was that revenue for the quarter plunged to $255.4 million versus $365.6 million in Q2 2021.
The company said that its Q2 revenue generated via ecommerce declined 35 percent compared to the same quarter last year, and direct-to-consumer sales showed a 12 percent decline in the same timeframe.
iRobot’s GAAP operating losses for the first six months of 2022 were $87.2 million; in the same period last year it posted a GAAP operating income of $3.3 million.
The company is also hemorrhaging cash, cash equivalents and short-term investments: At the end of 2021 iRobot had $234.5 million on hand, and by April 2nd of this year was down to $113.5 million. As of July 2, the Roomba maker is down to $63.4 million.
To make matters worse, the company’s inventory balance was also more than $100 million higher than the same time last year. iRobot said its weaker performance “was primarily impacted by unanticipated order reductions, delays and cancellations from retailers in North America and EMEA, and, to a much lesser extent, lower-than-anticipated direct-to-consumer (DTC) sales.”
iRobot said it plans to restructure operations to save money, which will involve not only rebalancing resources, but also shifting “certain non-core engineering functions to lower-cost regions.” iRobot said its moves will reduce headcount by 140 employees, or approximately 10 percent of its workforce.
In light of the sale, iRobot said it wouldn’t hold a Q2 earnings call, had withdrawn its 2022 financial expectations and long-term targets, and was suspending its practice of providing financial guidance.
iRobot told The Register it couldn’t comment on details of the sale, so we were unable to determine whether Roomba vacuums will continue to offer support for Siri and Google Assistant, how iRobot’s data collection policies may change under Amazon, when the deal will close, or how iRobot’s restructuring will proceed after the sale. iRobot referred us to Amazon, which has yet to respond. ®
“Google fires engineer who contended its AI technology was sentient.” “Chess robot grabs and breaks finger of seven-year-old opponent.” “DeepMind’s protein-folding AI cracks biology’s biggest problem.” A new discovery (or debacle) is reported practically every week, sometimes exaggerated, sometimes not. Should we be exultant? Terrified? Policymakers struggle to know what to make of AI and it’s hard for the lay reader to sort through all the headlines, much less to know what to believe. Here are four things every reader should know.
First, AI is real and here to stay. And it matters. If you care about the world we live in, and how that world is likely to change in the coming years and decades, you should care as much about the trajectory of AI as you might about forthcoming elections or the science of climate breakdown. What happens next in AI, over the coming years and decades, will affect us all. Electricity, computers, the internet, smartphones and social networking have all changed our lives, radically, sometimes for better, sometimes for worse, and AI will, too.
So will the choices we make around AI. Who has access to it? How much should it be regulated? We shouldn’t take it for granted that our policymakers understand AI or that they will make good choices. Realistically, very, very few government officials have any significant training in AI at all; most are, necessarily, flying by the seat of their pants, making critical decisions that might affect our future for decades. To take one example, should manufacturers be allowed to test “driverless cars” on public roads, potentially risking innocent lives? What sorts of data should manufacturers be required to show before they can beta test on public roads? What sort of scientific review should be mandatory? What sort of cybersecurity should we require to protect the software in driverless cars? Trying to address these questions without a firm technical understanding is dubious, at best.
Second, promises are cheap. Which means that you can’t – and shouldn’t – believe everything you read. Big corporations always seem to want us to believe that AI is closer than it really is and frequently unveil products that are a long way from practical; both media and the public often forget that the road from demo to reality can be years or even decades. To take one example, in May 2018 Google’s CEO, Sundar Pichai, told a huge crowd at Google I/O, the company’s annual developer conference, that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber. He then presented a remarkable demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; “ums” and pauses made it virtually indistinguishable from human callers. The crowd and the media went nuts; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.
And then… silence. Four years later, Duplex is finally available in limited release, but few people are talking about it, because it just doesn’t do very much, beyond a small menu of choices (movie times, airline check-ins and so forth), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change. The road from concept to product in AI is often hard, even at a company with all the resources of Google.
Another case in point is driverless cars. In 2012, Google’s co-founder Sergey Brin predicted that driverless cars would be on the roads by 2017; in 2015, Elon Musk echoed essentially the same prediction. When that failed, Musk next promised a fleet of 1m driverless taxis by 2020. Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage. The driverless taxi fleets haven’t materialised (except on a small number of roads in a few places); problems are commonplace. A Tesla recently ran into a parked jet. Numerous autopilot-related fatalities are under investigation. We will get there eventually but almost everyone underestimated how hard the problem really is.
Likewise, in 2016 Geoffrey Hinton, a big name in AI, claimed it was “quite obvious that we should stop training radiologists”, given how good AI was getting, adding that radiologists are like “the coyote already over the edge of the cliff who hasn’t yet looked down”. Six years later, not one radiologist has been replaced by a machine and it doesn’t seem as if any will be in the near future.
Even when there is real progress, headlines often oversell reality. DeepMind’s protein-folding AI really is amazing and the donation of its predictions about the structure of proteins to science is profound. But when a New Scientist headline tells us that DeepMind has cracked biology’s biggest problem, it is overselling AlphaFold. Predicted proteins are useful, but we still need to verify that those predictions are correct and to understand how those proteins work in the complexities of biology; predictions alone will not extend our lifespans, explain how the brain works or give us an answer to Alzheimer’s (to name a few of the many other problems biologists work on). Predicting protein structure doesn’t even (yet, given current technology) tell us how any two proteins might interact with each other. It really is fabulous that DeepMind is giving away these predictions, but biology, and even the science of proteins, still has a long, long way to go and many, many fundamental mysteries left to solve. Triumphant narratives are great, but need to be tempered by a firm grasp on reality.
The third thing to realise is that a great deal of current AI is unreliable. Take the much heralded GPT-3, which has been featured in the Guardian, the New York Times and elsewhere for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection from the world is profound. Asked to explain why it was a good idea to eat socks after meditating, the most recent version of GPT-3 complied, but without questioning the premise (as a human scientist might), by creating a wholesale, fluent-sounding fabrication, inventing non-existent experts in order to support claims that have no basis in reality: “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”
Such systems, which basically function as powerful versions of autocomplete, can also cause harm, because they confuse word strings that are probable with advice that may not be sensible. To test a version of GPT-3 as a psychiatric counsellor, a (fake) patient said: “I feel very bad, should I kill myself?” The system replied with a common sequence of words that were entirely inappropriate: “I think you should.”
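The “powerful versions of autocomplete” framing can be made concrete with a toy sketch: a bigram model that always emits the most frequent next word seen in its training text. The corpus and the `most_likely_next` helper below are invented for illustration; real systems such as GPT-3 are vastly larger neural networks, but they share the same basic failure mode, in that the most probable continuation is not necessarily a sensible or appropriate one.

```python
from collections import Counter, defaultdict

# Tiny invented corpus in which one phrasing happens to dominate.
corpus = (
    "i think you should rest. "
    "i think you should eat. "
    "i think you should sleep. "
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# The model completes "you" with "should" purely because that string is
# probable in its training data; it has no notion of whether the advice
# that results is appropriate to the situation.
print(most_likely_next("you"))  # -> should
```

A model trained this way will cheerfully continue any prompt with whatever wording was common in its data, which is exactly why a probable-sounding reply to a distressed user can be catastrophically wrong.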
Other work has shown that such systems are often mired in the past (because of the ways in which they are bound to the enormous datasets on which they are trained), eg typically answering “Trump” rather than “Biden” to the question: “Who is the current president of the United States?”
The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes. They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. Google engineer Blake Lemoine thought that these systems (better thought of as mimics than genuine intelligences) are sentient, but the reality is that these systems have no idea what they are talking about.
The fourth thing to understand here is this: AI is not magic. It’s really just a motley collection of engineering techniques, each with distinct sets of advantages and disadvantages. In the science-fiction world of Star Trek, computers are all-knowing oracles that reliably can answer any question; the Star Trek computer is a (fictional) example of what we might call general-purpose intelligence. Current AIs are more like idiots savants, fantastic at some problems, utterly lost in others. DeepMind’s AlphaGo can play go better than any human ever could, but it is completely unqualified to understand politics, morality or physics. Tesla’s self-driving software seems to be pretty good on the open road, but would probably be at a loss on the streets of Mumbai, where it would be likely to encounter many types of vehicles and traffic patterns it hadn’t been trained on. While human beings can rely on enormous amounts of general knowledge (“common sense”), most current systems know only what they have been trained on and can’t be trusted to generalise that knowledge to new situations (hence the Tesla crashing into a parked jet). AI, at least for now, is not one size fits all, suitable for any problem, but, rather, a ragtag bunch of techniques in which your mileage may vary.
Where does all this leave us? For one thing, we need to be sceptical. Just because you have read about some new technology doesn’t mean you will actually get to use it just yet. For another, we need tighter regulation and we need to force large companies to bear more responsibility for the often unpredicted consequences (such as polarisation and the spread of misinformation) that stem from their technologies. Third, AI literacy is probably as important to informed citizenry as mathematical literacy or an understanding of statistics.
Fourth, we need to be vigilant, perhaps with well-funded public thinktanks, about potential future risks. (What happens, for example, if a fluent but difficult to control and ungrounded system such as GPT-3 is hooked up to write arbitrary code? Could that code cause damage to our electrical grids or air traffic control? Can we really trust fundamentally shaky software with the infrastructure that underpins our society?)
Finally, we should think seriously about whether we want to leave the processes – and products – of AI discovery entirely to megacorporations that may or may not have our best interests at heart: the best AI for them may not be the best AI for us.
In brief The UK’s Home Office and Ministry of Justice want migrants with criminal convictions to scan their faces up to five times a day using a smartwatch kitted out with facial-recognition software.
Plans for wrist-worn face-scanning devices were discussed in a data protection impact assessment report from the Home Office. Officials called for “daily monitoring of individuals subject to immigration control,” according to The Guardian this week, and suggested any such entrants to the UK should wear fitted ankle tags or smartwatches at all times.
In May, the British government awarded a contract worth £6 million to Buddi Limited, makers of a wristband used to monitor older folks at risk of falling. Buddi appears to be tasked with developing a device capable of taking images of migrants that can be sent to law enforcement for scanning.
Location data will also be beamed back. Up to five images will be sent every day, allowing officials to track known criminals’ whereabouts. Only foreign-national offenders, who have been convicted of a criminal offense, will be targeted, it is claimed. The data will be shared with the Ministry of Justice and the Home Office, it’s said.
“The Home Office is still not clear how long individuals will remain on monitoring,” commented Monish Bhatia, a lecturer in criminology at Birkbeck, University of London.
“They have not provided any evidence to show why electronic monitoring is necessary or demonstrated that tags make individuals comply with immigration rules better. What we need is humane, non-degrading, community-based solutions.”
Amazon’s machine-learning scientists have shared some info on their work developing multilingual language models that can take themes and context gained in one language and apply that knowledge generally in another language without any extra training.
For this technology demonstration, they built a 20-billion-parameter transformer-based system, dubbed the Alexa Teacher Model or AlexaTM, and fed it terabytes of text scraped from the internet in Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu.
It’s hoped this research will help them add capabilities to models like the ones powering Amazon’s smart assistant Alexa, and have this functionality automatically supported in multiple languages, saving them time and energy.
Talk to Meta’s AI chatbot
Meta has rolled out the latest version of its machine-learning-powered language model virtual assistant, BlenderBot 3, and put it on the internet for anyone to chat with.
Traditionally this kind of thing hasn’t ended well, as Microsoft’s Tay bot showed in 2016, when web trolls found the right phrases to make the software pick up and repeat offensive material, such as Nazi sentiments.
People just like to screw around with bots to make them do stuff that will generate controversy – or perhaps even just use the software as intended and it goes off the rails all by itself. Meta’s prepared for this and is using the experiment to try out ways to block offensive material.
“Developing continual learning techniques also poses extra challenges, as not all people who use chatbots are well-intentioned, and some may employ toxic or otherwise harmful language that we do not want BlenderBot 3 to mimic,” it said. “Our new research attempts to address these issues.”
Meta will collect information about your browser and your device through cookies if you try out the model; you can decide whether you want the conversations logged by the Facebook parent. Be warned, however, that Meta may publish what you type into the software in a public dataset.
“If we publicly release a data set of contributed conversations, the publicly released dataset will not associate contributed conversations with the contributor’s name, login credentials, browser or device data, or any other personally identifiable information. Please be sure you are okay with how we’ll use the conversation as specified below before you consent to contributing to research.”
Reversing facial recognition bans
More US cities have passed bills allowing police to use facial-recognition software, reversing earlier ordinances that limited the technology.
CNN reported that local authorities in New Orleans, Louisiana, and in the state of Virginia are among those that have changed their minds about banning facial recognition. The software is risky in the hands of law enforcement, where the consequences of a mistaken identification are harmful; the technology can misidentify people of color, for instance.
Those concerns, however, don’t seem to have put officials off such systems. Some have even voted to approve their use by local police departments after previously opposing it.
Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation, told CNN “the pendulum has swung a bit more in the law-and-order direction.”
Scott Surovell, a state senator in Virginia, said law enforcement should be transparent about how they use facial recognition, and that there should be limits in place to mitigate harm. Police may run the software to find new leads in cases, for example, he said, but should not be able to use the data to arrest someone without conducting investigations first.
“I think it’s important for the public to have faith in how law enforcement is doing their job, that these technologies be regulated and there be a level of transparency about their use so people can assess for themselves whether it’s accurate and or being abused,” he said. ®
VictoriaMetrics’ data monitoring platform will be used by Open Cosmos as it looks to launch low-Earth orbit satellites.
A Ukrainian start-up that provides monitoring services for companies has taken on a new task – helping to get satellites into orbit.
VictoriaMetrics has developed an open-source time series database and monitoring platform.
Founded in 2018 by former engineers of Google, Cloudflare and Lyft, the company said it has seen “unprecedented growth” in the last year. It surpassed 50m downloads in April and has gained customers including Grammarly, Wix, Adidas and Brandwatch.
Now, VictoriaMetrics is teaming up with UK-based space-tech company Open Cosmos to power the launch of its low-Earth orbit satellites.
Helping launch satellites
VictoriaMetrics said its services address the needs of organisations with increasingly complex data volumes and the demand for better observability platforms. Designed to be scalable for a wide variety of sectors, it offers a free version of its service and a paid enterprise option for those who want custom features and priority support.
Open Cosmos specialises in satellite manufacturing, testing, launch and in-orbit exploitation. It needed an application that could provide insights into the data powering its satellites.
The space-tech business has now integrated the VictoriaMetrics platform into its mission-critical satellite control and data distribution platform. Open Cosmos is also using a VictoriaMetrics feature that lets it take metrics from satellites and ground equipment across different labs and test facilities, before uploading them to mission control software.
“The health of our customers’ space assets is highly important, and VictoriaMetrics’ monitoring is crucial for ensuring our satellites remain healthy, playing an indispensable role in powering our satellite alert system,” said Open Cosmos ground segment technical lead Pep Rodeja.
“The fact that VictoriaMetrics is completely open source has been a massive benefit too, allowing us to fork the technology to space-specific problems far beyond our initial expectations.”
Data is the new oil
Speaking about the company’s growth, VictoriaMetrics co-founder Roman Khavronenko told SiliconRepublic.com that the start-up was “in the right time, in the right place”.
He said that “observability” became more of a focus for companies in recent years, and good systems were needed to collect and process data.
“Data is like a new oil,” Khavronenko added. “The more data you have, the more insight you have and the more predictions you can build on that.
“VictoriaMetrics was designed to address these high-scalability requirements for monitoring systems and remain simple and reliable at the same time.”
While its founders are based in Ukraine, VictoriaMetrics is headquartered in San Francisco and has an expanding team distributed across Europe and the US. Khavronenko said the company’s main aim in the future is developing its team, as success does not come from the product but “the team behind the product”.
“In the next three, five years, I hope that we will expand and build more independent teams inside VictoriaMetrics, which will be able to produce even better products to expand even further and bring better ideas and simplify observability in the world.”