More than 1,000 humans fail to beat AI contender in top crossword battle • The Register


In brief An AI system has bested nearly 1,300 human competitors in the annual American Crossword Puzzle Tournament to achieve the top score.

The computer, named Dr Fill, is the brainchild of computer scientist Matt Ginsberg, who designed its software to automatically fill out crosswords using a mixture of “good old-fashioned AI” and more modern machine-learning techniques, according to Slate.

It solved multiple word conundrums quickly and with fewer errors than its opponents. Dr Fill, however, was not eligible for the $3,000 cash prize, which instead went to the best human player, a man named Tyler Hinman, who is presumably now feeling somewhat redundant.

Ginsberg’s machine was a computer with a 64-core CPU and two GPUs. Its software was trained on masses of text scraped from Wikipedia to learn words, and on a database of crossword clues and their answers to parse the competition’s questions. You can watch it in action below.

[Embedded YouTube video]
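Dr Fill’s code isn’t public, but the description above suggests learned clue-to-answer candidate generation paired with classic constraint satisfaction over the grid. A minimal sketch of the “good old-fashioned AI” half, with a hypothetical candidates lookup standing in for the learned scoring:

```python
# Backtracking fill over crossword slots. The candidates dict is a
# hypothetical stand-in for Dr Fill's learned, ranked answer guesses.
def consistent(assignment, crossings):
    # Every filled pair of crossing slots must agree on the shared letter.
    return all(
        assignment[a][i] == assignment[b][j]
        for a, i, b, j in crossings
        if a in assignment and b in assignment
    )

def solve(slots, candidates, crossings, assignment=None):
    assignment = assignment if assignment is not None else {}
    if len(assignment) == len(slots):
        return assignment                       # grid fully filled
    slot = next(s for s in slots if s not in assignment)
    for word in candidates[slot]:               # try best-scored answers first
        assignment[slot] = word
        if consistent(assignment, crossings) and \
                solve(slots, candidates, crossings, assignment):
            return assignment
        del assignment[slot]                    # backtrack
    return None

# Toy grid: 1-Across and 1-Down share their first letter.
print(solve(["1A", "1D"],
            {"1A": ["CAT", "DOG"], "1D": ["DART", "CART"]},
            [("1A", 0, "1D", 0)]))              # -> {'1A': 'CAT', '1D': 'CART'}
```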

Google defends the energy use of large language models

In a new paper, researchers from Google and the University of California, Berkeley have outlined various ways to slash the environmental impact of the large amounts of energy consumed during the training of text-generation models like the ones used by Google.

Large language models are a particularly controversial area for The Chocolate Factory. The co-leads of its AI Ethics research group, Timnit Gebru and Margaret Mitchell, were ousted this year over a paper that detailed the power usage and financial costs of these models as well as concerns over their inscrutable nature.

Now, Google has published a counter-study. Large language models don’t have that big a carbon footprint if they are trained in efficiently run data centers in countries using renewable energy, the internet giant argued. You can read the whole thing here.
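The argument is, at bottom, arithmetic: emissions scale with accelerator-hours, average power draw, the data center’s overhead (PUE), and the local grid’s carbon intensity. A back-of-envelope sketch with illustrative numbers, not figures from the study:

```python
# Rough training-emissions estimate. All inputs below are assumptions
# chosen for illustration, not values reported by Google's paper.
def training_emissions_tco2e(accelerators, hours, avg_power_kw,
                             pue, kg_co2_per_kwh):
    energy_kwh = accelerators * hours * avg_power_kw * pue
    return energy_kwh * kg_co2_per_kwh / 1000.0   # kg -> tonnes

job = dict(accelerators=512, hours=24 * 14, avg_power_kw=0.3)

# Same training job, efficient DC on a low-carbon grid vs an average one:
print(training_emissions_tco2e(**job, pue=1.1, kg_co2_per_kwh=0.05))  # ~2.8 t
print(training_emissions_tco2e(**job, pue=1.6, kg_co2_per_kwh=0.45))  # ~37 t
```

On these assumptions the same job emits roughly 13 times more carbon on an average grid in an inefficient facility, which is the crux of Google’s case.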

The paper coauthored by Gebru, computational linguistics professor Emily M. Bender, and others was shot down by Google for supposedly not including enough references to relevant research. Awkwardly, Google’s latest paper itself failed to mention or cite Gebru and Bender’s work. One of the researchers later confirmed a hat-tip to the pair would be added in an updated version of the study.

Beware of deepfake satellite imagery

Academics are warning of the potential dangers of fake AI-generated satellite images.

A team of geographers led by the University of Washington in the US demonstrated how machine-learning algorithms could be trained to spit out fake geospatial images. The outputs could be used to disrupt applications relying on satellite imagery, such as Google Earth or even military software.

“This isn’t just Photoshopping things. It’s making data look uncannily realistic,” said Bo Zhao, assistant professor of geography at the UW and lead author of the study published in the journal Cartography and Geographic Information Science, this week. “The techniques are already there. We’re just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it.”

Zhao showed examples of how real images of cities could be manipulated by pasting in fake buildings to create made-up towns, or by adding false fires to mimic natural disasters. While it’ll take a lot more than deepfakes to attack real software systems, the researchers are raising awareness now in the hope of staying one step ahead of the threat.
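Generative adversarial networks are the usual tool for this kind of synthesis. As a rough illustration of the adversarial mechanism, and not the paper’s actual code, a minimal unconditional GAN step on 64x64 RGB tiles in PyTorch might look like this:

```python
import torch
import torch.nn as nn

# Generator: latent noise -> 64x64 RGB "satellite tile".
G = nn.Sequential(
    nn.ConvTranspose2d(64, 128, 4, 1, 0), nn.ReLU(),   # 1x1  -> 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 4x4  -> 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 8x8  -> 16x16
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
    nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh(),     # 32x32 -> 64x64
)

# Discriminator: tile -> single real/fake logit.
D = nn.Sequential(
    nn.Conv2d(3, 16, 4, 2, 1), nn.LeakyReLU(0.2),      # 64 -> 32
    nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),     # 32 -> 16
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),     # 16 -> 8
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),    # 8  -> 4
    nn.Conv2d(128, 1, 4, 1, 0), nn.Flatten(),          # 4  -> 1 logit
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.rand(8, 3, 64, 64) * 2 - 1   # stand-in for real map tiles
fake = G(torch.randn(8, 64, 1, 1))        # generated tiles

# Discriminator learns: real -> 1, fake -> 0.
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator learns to make the discriminator say "real" for its fakes.
g_loss = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a conditional, image-to-image setup such as the one described above, the generator would take a base map or a real tile as input rather than random noise, but the generator-versus-discriminator loop is the same.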

iGiant to create new jobs in AI

Apple pledged to invest $430bn in the US over the next five years, hiring 20,000 new staff to work on emerging technologies ranging from AI to new chips. Apple also plans to spend $1bn on a new campus in North Carolina, where around 3,000 employees will work on advanced research and development.

“At this moment of recovery and rebuilding, Apple is doubling down on our commitment to US innovation and manufacturing with a generational investment reaching communities across all 50 states,” Apple CEO Tim Cook announced this week.

“We’re creating jobs in cutting-edge fields — from 5G to silicon engineering to artificial intelligence — investing in the next generation of innovative new businesses, and in all our work, building toward a greener and more equitable future.”

New SiFive AI chip produced by Samsung coming soon

An AI accelerator system-on-chip developed in collaboration between SiFive and a mystery partner is set to be manufactured by chipmaker Samsung.

Not much is known about the chip, except that it’s based on a 14nm FinFET design and contains SiFive RISC-V cores as well as PCIe Gen. 4 connectivity and quad-channel 32-bit LPDDR4 memory.

SiFive didn’t reveal who the chip was for or when it would be sent off for mass production.

“Working in partnership with Samsung Foundry has accelerated SiFive’s ability to deliver our highly-efficient and configurable approach for SoC design and implementation,” Yunsup Lee, CTO of SiFive, said in a statement.

“We’re excited to continue to co-innovate with Samsung Foundry as we launch our latest SiFive Intelligence products to accelerate the development of next-generation AI SoCs with Samsung’s advanced process technology.” ®




Apple’s plan to scan images will allow governments into smartphones | John Naughton


For centuries, cryptography was the exclusive preserve of the state. Then, in 1976, Whitfield Diffie and Martin Hellman came up with a practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. The following year, three MIT scholars – Ron Rivest, Adi Shamir and Leonard Adleman – came up with the RSA algorithm (named after their initials) for implementing it. It was the beginning of public-key cryptography – at least in the public domain.
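Diffie and Hellman’s insight fits in a few lines: each side combines its own secret with the other’s public value and arrives at the same key, without the secret ever crossing the wire. A textbook-sized sketch with toy parameters (real deployments use 2048-bit groups or elliptic curves):

```python
# Toy Diffie-Hellman key exchange. The numbers are deliberately tiny
# and insecure; only the structure of the exchange is the point here.
import secrets

p, g = 23, 5                      # public prime modulus and generator (toy)
a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)  # Alice sends A over the open channel
B = pow(g, b, p)  # Bob sends B over the open channel

# Each side combines its own secret with the other's public value:
assert pow(B, a, p) == pow(A, b, p)   # both land on the same shared secret
print("shared secret:", pow(B, a, p))
```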

From the very beginning, state authorities were not amused by this development. They were even less amused when in 1991 Phil Zimmermann created Pretty Good Privacy (PGP) software for signing, encrypting and decrypting texts, emails, files and other things. PGP raised the spectre of ordinary citizens – or at any rate the more geeky of them – being able to wrap their electronic communications in an envelope that not even the most powerful state could open. In fact, the US government was so enraged by Zimmermann’s work that it defined PGP as a munition, which meant that it was a crime to export it to Warsaw Pact countries. (The cold war was still relatively hot then.)

In the four decades since then, there’s been a conflict between the desire of citizens to have communications that are unreadable by state and other agencies and the desire of those agencies to be able to read them. The aftermath of 9/11, which gave states carte blanche to snoop on everything people did online, and the explosion in online communication via the internet and (since 2007) smartphones, have intensified the conflict. During the Clinton years, US authorities tried (and failed) to ensure that all electronic devices should have a secret backdoor, while the Snowden revelations in 2013 put pressure on internet companies to offer end-to-end encryption for their users’ communications that would make them unreadable by either security services or the tech companies themselves. The result was a kind of standoff between tech companies facilitating unreadable communications and law enforcement and security agencies unable to access evidence to which they had a legitimate entitlement.

In August, Apple opened a chink in the industry’s armour, announcing that it would be adding new features to its iOS operating system that were designed to combat child sexual exploitation and the distribution of abuse imagery. The most controversial measure scans photos on an iPhone, compares them with a database of known child sexual abuse material (CSAM) and notifies Apple if a match is found. The technology is known as client-side scanning or CSS.
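Apple’s actual design uses a perceptual “NeuralHash” plus cryptographic private-set-intersection machinery, but the client-side shape of the idea can be sketched with a plain hash lookup. Everything below is a simplified, hypothetical stand-in:

```python
import hashlib

# Hypothetical database of known-content digests shipped to the device.
# (Real CSAM matching uses perceptual hashes that survive re-encoding;
# SHA-256 here only illustrates the on-device matching step.)
known_digests = {hashlib.sha256(b"known-bad-image-bytes").hexdigest()}

def scan_before_upload(photo_bytes: bytes) -> bool:
    """Scan a photo on-device, in the clear, before it is encrypted."""
    return hashlib.sha256(photo_bytes).hexdigest() in known_digests

for photo in (b"holiday-snap-bytes", b"known-bad-image-bytes"):
    if scan_before_upload(photo):
        print("match -> notify provider")    # only matches leave the device
    else:
        print("no match -> nothing reported")
```

The design choice that makes CSS contentious is visible even in this toy: the scan happens before encryption, so what counts as “targeted content” is whatever the hash database says it is.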

Powerful forces in government and the tech industry are now lobbying hard for CSS to become mandatory on all smartphones. Their argument is that instead of weakening encryption or providing law enforcement with backdoor keys, CSS would enable on-device analysis of data in the clear (ie before it becomes encrypted by an app such as WhatsApp or iMessage). If targeted information were detected, its existence and, potentially, its source would be revealed to the agencies; otherwise, little or no information would leave the client device.

CSS evangelists claim that it’s a win-win proposition: providing a solution to the encryption v public safety debate by offering privacy (unimpeded end-to-end encryption) and the ability to successfully investigate serious crime. What’s not to like? Plenty, says an academic paper by some of the world’s leading computer security experts published last week.

The drive behind the CSS lobbying is for the scanning software to be installed on all smartphones, rather than covertly on the devices of suspects or, by court order, on those of ex-offenders. Such universal deployment would threaten the security of law-abiding citizens as well as lawbreakers. And even though CSS still allows end-to-end encryption, that is moot if the message has already been scanned for targeted content before it was dispatched. Similarly, while Apple’s implementation of the technology only scans images, it doesn’t take much to imagine political regimes scanning text for names, memes, political views and so on.

In reality, CSS is a technology for what in the security world is called “bulk interception”. Because it would give government agencies access to private content, it should really be treated like wiretapping and regulated accordingly. And in jurisdictions where bulk interception is already prohibited, bulk CSS should be prohibited as well.

In the longer view of the evolution of digital technology, though, CSS is just the latest step in the inexorable intrusion of surveillance devices into our lives. The trend that started with reading our emails, moved on to logging our searches and our browsing clickstreams, mining our online activity to create profiles for targeting advertising at us and using facial recognition to allow us into our offices now continues by breaching the home with “smart” devices relaying everything back to motherships in the “cloud” and, if CSS were to be sanctioned, penetrating right into our pockets, purses and handbags. That leaves only one remaining barrier: the human skull. But, rest assured, Elon Musk undoubtedly has a plan for that too.

What I’ve been reading

Wheels within wheels
I’m not an indoor cyclist but if I were, The Counterintuitive Mechanics of Peloton Addiction, a confessional blogpost by Anne Helen Petersen, might give me pause.

Get out of here
The Last Days of Intervention is a long and thoughtful essay in Foreign Affairs by Rory Stewart, one of the few British politicians who always talked sense about Afghanistan.

The insider
Blowing the Whistle on Facebook Is Just the First Step is a bracing piece by Maria Farrell in the Conversationalist about the Facebook whistleblower.


Criminals use fake AI voice to swindle UAE bank out of $35m • The Register


In brief Authorities in the United Arab Emirates have requested the US Department of Justice’s help in probing a case involving a bank manager who was swindled into transferring $35m to criminals by someone using a fake AI-generated voice.

The employee received a call from someone purporting to be a director of the business, asking him to move company-owned funds. He had previously seen emails showing the company was planning to use the money for an acquisition and had hired a lawyer to coordinate the process, so when the sham director instructed him to make the transfer, he did so believing it was a legitimate request.

But it was all a scam, according to US court documents reported by Forbes. The criminals used “deep voice technology to simulate the voice of the director,” the filing said. Officials from the UAE have now asked the DoJ to hand over details of two US bank accounts into which more than $400,000 of the stolen money was deposited.

Investigators believe there are at least 17 people involved in the heist.

AI systems need to see the human perspective

Facebook has teamed up with 13 universities across nine countries to compile Ego4D, a dataset containing more than 2,200 hours of first-person video in which 700 participants were filmed performing everyday activities, like cooking or playing video games.

The antisocial network is hoping Ego4D will unlock new capabilities in augmented and virtual reality or robotics. New models trained on this data can be tested on a range of tasks, including episodic memory, predicting what happens next, coordinating hand movement to manipulate objects, and social interaction.

“Imagine your AR device displaying exactly how to hold the sticks during a drum lesson, guiding you through a recipe, helping you find your lost keys, or recalling memories as holograms that come to life in front of you,” Facebook said in a blog post.

“Next-generation AI systems will need to learn from an entirely different kind of data – videos that show the world from the center of the action, rather than the sidelines,” added Kristen Grauman, lead research scientist at Facebook.

Researchers will have access to Ego4D next month, subject to a data use agreement.

Microsoft Translator now supports over 100 languages

Microsoft Translator, language-translation software powered by neural networks, can now translate between more than 100 languages.

Twelve new languages and dialects were added to Microsoft Translator this week, ranging from endangered tongues, such as Bashkir, spoken by a Kipchak Turkic ethnic group indigenous to Russia, to more widely spoken ones, such as Mongolian. Microsoft Translator now supports 103 languages in total.

“One hundred languages is a good milestone for us to achieve our ambition for everyone to be able to communicate regardless of the language they speak,” said Xuedong Huang, Microsoft technical fellow and Azure AI chief technology officer.

Huang said the software is based on a multilingual AI model called Z-code. The system deals with text, and is part of Microsoft’s wider XYZ-code vision: an effort to build a multimodal system capable of handling images, text, and audio. Microsoft Translator is deployed in a range of services, including the Bing search engine, and is offered as an API on Microsoft’s Azure Cognitive Services cloud platform.
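For a sense of how developers consume it, a call to the Translator REST API on Azure might look like the sketch below. The endpoint and headers follow Microsoft’s public v3.0 documentation, but the key, region, and language codes shown are placeholders to verify against the current docs:

```python
import requests

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
# "ba" = Bashkir, "mn-Cyrl" = Mongolian (Cyrillic) per the docs at the time.
params = {"api-version": "3.0", "from": "en", "to": ["ba", "mn-Cyrl"]}
headers = {
    "Ocp-Apim-Subscription-Key": "<YOUR_KEY>",        # placeholder
    "Ocp-Apim-Subscription-Region": "<YOUR_REGION>",  # placeholder
    "Content-Type": "application/json",
}
body = [{"Text": "One hundred languages is a good milestone."}]

resp = requests.post(endpoint, params=params, headers=headers, json=body)
resp.raise_for_status()
for translation in resp.json()[0]["translations"]:
    print(translation["to"], "->", translation["text"])
```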

ShotSpotter sues Vice for defamation and wants $300m in damages

The controversial AI gunshot-detection company ShotSpotter has sued Vice, claiming its business has been unfairly tarnished by a series of articles published by the news outlet.

“On July 26, 2021, Vice launched a defamatory campaign in which it falsely accused ShotSpotter of conspiring with police to fabricate and alter evidence to frame Black men for crimes they did not commit,” the complaint said.

ShotSpotter accused the publication of portraying the company’s technology and actions inaccurately to “cultivate a ‘subversive’ brand” used to sell products advertised in its “sponsored content”.

The company made headlines when evidence used in a court trial, to try to prove a Black man shot and killed another man, was withdrawn. The defense lawyer accused ShotSpotter employees of tampering with the evidence to support the police’s case. Vice allegedly made false claims that the biz routinely used its software to tag loud sounds as gunshots to help law enforcement prosecute innocent suspects in shooting cases.

When Vice’s journalists were given proof to show that wasn’t the case, they refused to correct their factual inaccuracies, the lawsuit claimed. ShotSpotter argued the articles had ruined its reputation and now it wants Vice to cough up a whopping $300m in damages.

State of AI 2021

The annual State of AI report is out, compiled by two British tech investors, recapping this year’s trends and developments in AI.

The fourth report from Nathan Benaich, a VC at Air Street Capital, and Ian Hogarth, co-founder of music app Songkick and an angel investor, focuses on transformers, a type of machine learning architecture best known for powering giant language models like OpenAI’s GPT-3 or Google’s BERT.
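The architecture’s core operation is scaled dot-product attention. A minimal NumPy rendering of the standard formulation, generic rather than anything taken from the report, is shown below:

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- one attention head, no masking."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 16)) for _ in range(3))
print(attention(Q, K, V).shape)   # (5, 16): one output vector per token
```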

Transformers aren’t just useful for generating text; they’ve proven adept in other areas too, such as computer vision and biology. Machine-learning technology also continues to mature: developers are deploying more systems to tackle real-world problems, such as optimising energy flows across national electricity grids or warehouse logistics for supermarkets.

That also applies to military applications, the pair warned. “AI researchers have traditionally seen the AI arms race as a figurative one – simulated dogfights between competing AI systems carried out in labs – but that is changing with reports of recent use of autonomous weapons by various militaries.”

You can read the full report here. ®


One in 10 Irish workers would not report malware to boss, survey finds


The Auxilion report also found that almost one in four employees use work video conferencing accounts to connect with family and friends.

A survey of more than 500 Irish workers has found that more than one in 10 (12pc) employees would not immediately inform their employer of malware detected on their work device.

The Censuswide survey commissioned by Dublin-based IT company Auxilion in June and published yesterday (14 October) also found that almost a third of workers have clicked on an email link or attachment from an unknown source.

The report highlighted the cybersecurity challenges faced by the Irish workforce working from home or in a hybrid set-up during the pandemic. Earlier this year, another Auxilion survey found that remote collaboration issues could cost Irish firms €3.3bn a year.

While 82pc of respondents said they were confident in their ability to identify phishing attempts, more than a third (37pc) cited suspicious emails as their top security concern while working remotely.

This was followed by using home Wi-Fi (30pc) and hackers accessing webcams (29pc), taking second and third place on the list of concerns. The incidence of scam calls and lack of in-person tech support were also among the top five concerns of Irish workers.


Auxilion CTO Donal Sullivan said that the pandemic has prompted people to come up with new ways of communicating and collaborating while spread across distant locations.

“This has created an overreliance on email which, while understandable, is actually stifling innovation, productivity and collaboration,” he said.

More than half (55pc) of respondents said that they were suspicious of sharing sensitive information on video calling platforms, while one in five (21pc) were concerned about using personal devices for work.

On the flip side, almost one in four (23pc) Irish workers use video platform accounts provided by work to socialise with family and friends. A quarter of respondents also use the same password for work and personal accounts – exposing them to risk.

“There are inherent security risks associated with email communications and it is evident that not only are Irish office workers concerned, they are vulnerable to cyber threats,” Sullivan added.

“In fact, our research shows that both employees and organisations need to raise the game when it comes to security, control and governance.”

Irish workers are, however, optimistic about the security measures at their workplace, with 83pc rating their protection levels as adequate. More than eight in 10 employees also trust their organisations to keep confidential data secure.

Sullivan said that business leaders need to make sure they have the right tools and processes in place to enable people to do their jobs while safeguarding data and systems.

“This includes adequate training and awareness as well, coupled with an openness to flag issues and be transparent if a breach occurs,” he said.

“Failing to take these steps could prove very costly, both financially and reputationally.”
