
Exclusive: LAPD partnered with tech firm that enables secretive online spying


The Los Angeles police department pursued a contract with a controversial technology company whose software could enable police to use fake social media accounts to surveil civilians, and which claimed its algorithms can identify people who may commit crimes in the future.

A cache of internal LAPD documents, obtained through public records requests by the Brennan Center for Justice, a non-profit organization, and shared with the Guardian, reveals that the LAPD trialed social media surveillance software from the analytics company Voyager Labs in 2019.

Like many companies in this industry, Voyager Labs’ software allows law enforcement to collect and analyze large troves of social media data to investigate crimes or monitor potential threats.

But documents reveal the company takes this surveillance a step further. In its sales pitch to LAPD about a potential long-term contract, Voyager said its software could collect data on a suspect’s online network and surveil the accounts of thousands of the suspect’s “friends”. It said its artificial intelligence could discern people’s motives and beliefs and identify social media users who are most “engaged in their hearts” about their ideologies. And it suggested its tools could allow agencies to conduct undercover monitoring using fake social media profiles.

LAPD trialed Voyager’s software in 2019. Photograph: Al Seib/Los Angeles Times/Rex/Shutterstock

The LAPD’s trial with Voyager ended in November 2019. The records show the department continued to access some of the technology after the pilot period, and that the LAPD and Voyager spent more than a year trying to finalize a formal contract. The documents show that the LAPD has had ongoing conversations this year about a continued partnership, but a police spokesperson told the Guardian on Monday that the department was not currently using Voyager.

The LAPD declined to respond to repeated and detailed inquiries about its trial with Voyager and its conversations about a potential long-term contract, as well as questions about its use of social media surveillance software.

The department has said in the past that social media can be critical for investigations and for “situational awareness” in monitoring major events for potential public safety issues. The city has seen large demonstrations in recent years, as well as clashes between activists over issues such as vaccination requirements.

But experts who reviewed the documents for the Guardian say they raise concerns about the LAPD’s pursuit of ethically questionable software. The department’s surveillance technology could be violating civilians’ free speech and privacy rights, the experts say, while facilitating racial profiling.


The full scope of the LAPD’s surveillance tech is unclear, though records suggest that the department has in recent years purchased or considered buying software from at least 10 companies that monitor social media. The LAPD is often a trailblazer among US police departments in adopting new technologies, with a large budget and private foundation funding that allow it to trial programs later adopted by other forces.

The concerns come after the Guardian recently revealed that the LAPD has been directing officers to broadly collect social media information of civilians they stop and question, including people who are not cited or arrested, and amid growing scrutiny of the department’s surveillance and “predictive policing” practices.

‘Bigotry embedded in code’

Voyager – registered as Bionic 8 Analytics – gave the LAPD some of its products on a trial basis in the summer and fall of 2019, the records show.

The documents don’t make clear what suite of tools the LAPD had access to during the trial or whether the department used some of the company’s more controversial features. But a report the company produced for the LAPD during this period says the department used the company’s software to investigate more than 500 social media profiles and to analyze thousands of messages. The redacted report said the LAPD had used the software for “real-time tactical intelligence”; “protective intelligence” for “VIPs” in local government and in the LAPD; and cases related to gangs, homicides and hate groups. An unnamed LAPD investigator was quoted in the report as saying Voyager helped the department “identify a few new targets”.

A screenshot from Voyager proposals and pitches that the company shared with LAPD. Photograph: LAPD records via the Brennan Center

In internal messages about the pilot in 2019, the LAPD said Voyager was especially helpful in analyzing social media data obtained through warrants and in investigating online networks of “street gangs”.

Communications between Voyager and the LAPD after the trial ended, as the company was trying to sell the department on its products, reveal more about the firm’s purported capabilities, claims that experts said were bold and troubling.

In the spring of 2020, while pitching a contract, Voyager provided the LAPD with case studies illustrating how the software had been used.

In one example, the company said its software had been used to investigate a Muslim Brotherhood activist in New York City who allegedly made a video encouraging people to intentionally spread Covid to Egyptian government officials in March 2020.

A Voyager representative told the LAPD the investigation was conducted for “federal and local agencies” but did not name the clients or specify whether the threat had turned out to be legitimate. Voyager said its software was able to collect and analyze thousands of the activist’s social media posts and had scooped up data on 4,000 of his “friends”.

Voyager also said its software was able to discern which social media users caught up in the search were “top connections” of the activist and that it could determine who was based in New York and who worked for a government agency. The company claimed the software could also discern which of the accounts showed an “affinity” for “violent, radical ideologies” based on “indirect connections” to “extremist accounts”, appearing to refer to friends of friends.

In another presentation, Voyager suggested that its software could not only collect large amounts of social media data but that its “artificial intelligence” could also discern people’s beliefs.

Voyager showed LAPD how its software could have been used to investigate an alleged terrorist attack, analyzing the case of Adam Alsahli – a man killed after he opened fire at the Corpus Christi naval base in May 2020. Pointing to the man’s social media activity, Voyager claimed its AI could ascertain people’s “affinity for Islamic fundamentalism or extremism”. The company cited the shooter’s “pictures with Islamic themes” and said his Instagram accounts showed “his pride in and identification with his Arab heritage”. The company said its AI was so effective that its results, produced in minutes, did not “require any intervention or assessments by an analyst or investigator”.

Voyager showed the LAPD how its software could have been used to investigate an alleged terrorist attack. Photograph: LAPD records via the Brennan Center

In an October 2020 proposal document, Voyager also said its software could conduct a “sentiment analysis” to discern who was most emotionally invested and had the “passion needed to act on their beliefs”.

Voyager’s monitoring of broad online networks, and its claims about AI, raised red flags for experts.

“There’s a basic ‘guilt by association’ that Voyager seems to really endorse,” said Rachel Levinson-Waldman, a deputy director at the Brennan Center, about Voyager’s New York City case study. “This notion that you can be painted with the ideology of people that you’re not even directly connected to is really disturbing.”

The naval base shooting example was deeply troubling, said Meredith Broussard, a New York University data journalism professor and expert on AI, who reviewed the records for the Guardian. “Just because you have an affinity for Islam does not mean you’re a criminal or a terrorist. That is insulting and racist. It’s religious bigotry embedded in code.”

“This is hyperbolic AI marketing. The more they brag, the less I believe them,” said Cathy O’Neil, a data scientist and algorithmic auditor, arguing that the firm’s broad claims were not based in legitimate science and were unachievable: “They’re saying, ‘We can see if somebody has criminal intent.’ No, you can’t. Even people who commit crimes can’t always tell they have criminal intent.”

The consequences of this pseudoscience can be dire, she added: “Claims of accuracy don’t have to actually be true for the algorithm to be used as a weapon.”

Concerns about undercover spying

The documents show Voyager and LAPD officers also discussed some of the company’s most controversial proposals. In an October 2020 letter to the LAPD outlining details of a potential contract, Voyager claimed its social media monitoring was “traceless”, saying that the social media companies themselves would not be able to tell that the LAPD was behind the surveillance.

In an earlier report to LAPD in 2019, Voyager said it was developing software to spy on WhatsApp groups using an “active persona mechanism”, or “avatar”, suggesting that police would create a fake account to collect information from a group.

In one September 2019 email to a Voyager sales representative, an LAPD technology official said the feature that allows police to “log in with fake accounts that are already friended with the target subject” was a “great function”, but added that the department was not heading in the direction of using that service.

It’s unclear if the LAPD ever used the fake account feature. In another September 2019 email, an LAPD official in the robbery and homicide division told Voyager that the “avatars” function was a “need-to-have” feature. And Voyager said in one document that some LAPD staffers piloting its services had requested the “active persona” feature for Facebook, Instagram and Telegram.

This feature could violate the policies of Facebook, which prohibits fake accounts and has previously deactivated accounts that it determined belonged to police officers impersonating civilians. A Facebook spokesperson said members of law enforcement, like all users, were required to use their real names on their profiles.

Voyager said it was developing software to spy on WhatsApp groups using an ‘active persona mechanism’. Photograph: LAPD records via the Brennan Center

“As stated in our terms of services, misrepresentations and impersonations are not allowed on our services and we take action when we find violating activity,” a Facebook spokesperson, Sally Aldous, said in a statement.

Using fake accounts to monitor activists online was equivalent to undercover spying, civil rights advocates said.

The LAPD has policies for “online undercover activity” that establish some restrictions on this tactic, including requiring special approval from a supervisor if police are using a fake account to communicate with someone, but there is less oversight if an account is created to examine “trends” or for “conducting research”.

John Hamasaki, a criminal defense lawyer and member of the San Francisco police commission, said some police departments were updating policies to restrict the use of fake accounts in an effort to protect free speech. In San Francisco, he said, police would be barred from using a company like Voyager for broad surveillance of online networks. The type of predictive policing software that Voyager advertises is also strictly prohibited in Oakland, according to the city’s privacy commission.

“The problem with these types of surveillance operations is they’re often not based on reasonable suspicion or probable cause,” he said. “Instead, it is casting a broad net.”

Levinson-Waldman of the Brennan Center said it was unclear how widespread this kind of surveillance was in police departments across the US. She noted that while law enforcement departments were increasingly relying on social media in investigations, there was often little transparency.

The LAPD and the New York police department have two of the largest police budgets in the country and have a long history of piloting cutting-edge technology that ends up being ineffective or harmful, said Broussard, the author of Artificial Unintelligence: How Computers Misunderstand the World.

Even when the LAPD or NYPD cease using certain products, the companies end up bringing their tech elsewhere, she said: “The companies still want to sell software, so they go after smaller police forces that have even less capacity to evaluate the efficacy of these snake-oil software systems.”

Recent reporting has shown how the LAPD has used surveillance technology similar to Voyager’s to monitor Black Lives Matter organizing, and the department also recently said it was pursuing this kind of technology for “information gathering” in a report about reforms since the George Floyd protests. The LAPD did not respond to questions from the Guardian about whether Voyager was used for monitoring protesters.

In a report in September of this year, the department said it was “currently using” Voyager software and seeking $450,000 to purchase additional Voyager technology. But an LAPD spokesperson said this week that the department was not using the company’s software at the moment. She did not respond to questions about when LAPD ceased using the services and if the department was still pursuing a partnership.

Voyager declined to comment on its work with the LAPD and did not answer specific questions about its services. A spokesperson, Lital Carter Rosenne, said its clients were responsible for building databases and running the software, adding: “As a company, we follow the laws of all the countries in which we do business. We also have confidence that those with whom we do business are law-abiding public and private organizations.”

LA activists said the revelations raised serious concerns about how the tech could be used against groups that protest against the LAPD. “I’m really astounded that not only is LAPD using these companies, but that there are these tactics which feel very much like digital infiltration,” said Dr Melina Abdullah, a co-founder of Black Lives Matter LA. “It demonstrates that our fears are true.”

Abdullah, who had not heard of Voyager Labs, said she was particularly disturbed to learn about potential monitoring of WhatsApp groups: “We know that our public posts are monitored. But when they’re engaging in additional digging into private posts, that is supposed to be a more secure way of communicating.”


Edwards Lifesciences is hiring at its ‘key’ Shannon and Limerick facilities


The medtech company is hiring for a variety of roles at both its Limerick and Shannon sites, the latter of which is being transformed into a specialised manufacturing facility.

Medical devices giant Edwards Lifesciences began renovations to convert its existing Shannon facility into a specialised manufacturing centre at the end of July.

The expansion will allow the company to produce components that are an integral part of its transcatheter heart valves. The conversion is part of Edwards Lifesciences’ expansion plan that will see it hire for hundreds of new roles in the coming years.

“The expanded capability at our Shannon facility demonstrates that our operations in Ireland are a key enabler for Edwards to continue helping patients across the globe,” said Andrew Walls, general manager for the company’s manufacturing facilities in Ireland.

According to Walls, hiring is currently underway at the company’s Shannon and Limerick facilities for a variety of functions such as assembly and inspection roles, manufacturing and quality engineering, supply chain, warehouse operations and project management.

Why Ireland?

Headquartered in Irvine, California, Edwards Lifesciences established its operations in Shannon in 2018 and announced 600 new jobs for the mid-west region. This number was then doubled a year later when it revealed increased investment in Limerick.

When the Limerick plant was officially opened in October 2021, the medtech company added another 250 roles onto the previously announced 600, promising 850 new jobs by 2025.

“As the company grows and serves even more patients around the world, Edwards conducted a thorough review of its global valve manufacturing network to ensure we have the right facilities and talent to address our future needs,” Walls told SiliconRepublic.com.

“We consider multiple factors when determining where we decide to manufacture – for example, a location that will allow us to produce close to where products are utilised, a location that offers advantages for our supply chain, excellent local talent pool for an engaged workforce, an interest in education and good academic infrastructure, and other characteristics that will be good for business and, ultimately, good for patients.

“Both our Shannon and Limerick sites are key enablers for Edwards Lifesciences to continue helping patients across the globe.”


Meta’s new AI chatbot can’t stop bashing Facebook


If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, as writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”




Midwest universities unite to support US chip industry


A dozen US midwestern research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries through combined research and education, with the aim of securing work for their students in high-tech industries.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of a dozen institutions, made up of eight from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Apart from Wright State University, the institutions involved in the network are: Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, University of Cincinnati, University of Dayton, University of Michigan, and the University of Notre Dame, Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and building a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month, of which $52 billion of the total spend is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®
