
Exclusive: LAPD partnered with tech firm that enables secretive online spying

The Los Angeles police department pursued a contract with a controversial technology company that could enable police to use fake social media accounts to surveil civilians, and that claims its algorithms can identify people who may commit crimes in the future.

A cache of internal LAPD documents, obtained through public records requests by the Brennan Center for Justice, a non-profit organization, and shared with the Guardian, reveals that the LAPD trialed social media surveillance software from the analytics company Voyager Labs in 2019.

Like many companies in this industry, Voyager Labs’ software allows law enforcement to collect and analyze large troves of social media data to investigate crimes or monitor potential threats.

But documents reveal the company takes this surveillance a step further. In its sales pitch to LAPD about a potential long-term contract, Voyager said its software could collect data on a suspect’s online network and surveil the accounts of thousands of the suspect’s “friends”. It said its artificial intelligence could discern people’s motives and beliefs and identify social media users who are most “engaged in their hearts” about their ideologies. And it suggested its tools could allow agencies to conduct undercover monitoring using fake social media profiles.

LAPD trialed Voyager’s software in 2019. Photograph: Al Seib/Los Angeles Times/Rex/Shutterstock

The LAPD’s trial with Voyager ended in November 2019. The records show the department continued to access some of the technology after the pilot period, and that the LAPD and Voyager spent more than a year trying to finalize a formal contract. The documents show that the LAPD has had ongoing conversations this year about a continued partnership, but a police spokesperson told the Guardian on Monday that the department was not currently using Voyager.

The LAPD declined to respond to detailed and repeated inquiries on its trial with Voyager and its conversations about a potential long-term contract, as well as questions about its use of social media surveillance software.

The department has said in the past that social media can be critical for investigations and for “situational awareness” in monitoring major events for potential public safety issues. The city has seen large demonstrations in recent years, as well as clashes between activists over issues such as vaccination requirements.

But experts who reviewed the documents for the Guardian say they raise concerns about the LAPD’s pursuit of ethically questionable software. The department’s surveillance technology could be violating civilians’ free speech and privacy rights, the experts say, while facilitating racial profiling.

The full scope of the LAPD’s surveillance tech is unclear, though records suggest that the department has in recent years purchased or considered buying software from at least 10 companies that monitor social media. The LAPD is often a trailblazer among US police departments in adopting new technologies, with a large budget and private foundation funding that allow it to trial programs later adopted by other forces.

The concerns come after the Guardian recently revealed that the LAPD has been directing officers to broadly collect social media information of civilians they stop and question, including people who are not cited or arrested, and amid growing scrutiny of the department’s surveillance and “predictive policing” practices.

‘Bigotry embedded in code’

Voyager – registered as Bionic 8 Analytics – gave the LAPD some of its products on a trial basis in the summer and fall of 2019, the records show.

The documents don’t make clear what suite of tools the LAPD had access to during the trial or whether the department used some of the company’s more controversial features. But a report the company produced for the LAPD during this period says the department used the company’s software to investigate more than 500 social media profiles and to analyze thousands of messages. The redacted report said the LAPD had used the software for “real-time tactical intelligence”; “protective intelligence” for “VIPs” in local government and in the LAPD; and cases related to gangs, homicides and hate groups. An unnamed LAPD investigator was quoted in the report as saying Voyager helped the department “identify a few new targets”.

A screenshot from Voyager proposals and pitches that the company shared with the LAPD. Photograph: LAPD records via the Brennan Center

In internal messages about the pilot in 2019, the LAPD said Voyager was especially helpful in analyzing social media data obtained through warrants and in investigating online networks of “street gangs”.

Communications between Voyager and the LAPD after the trial ended, when the company was trying to sell the department on its products, reveal more about the firm’s purported capabilities – claims that experts said were bold and troubling.

In the spring of 2020, while pitching a contract, Voyager provided the LAPD with case studies illustrating how the software had been used.

In one example, the company said its software had been used to investigate a Muslim Brotherhood activist in New York City who allegedly made a video encouraging people to intentionally spread Covid to Egyptian government officials in March 2020.

A Voyager representative told the LAPD the investigation was conducted for “federal and local agencies” but did not name the clients or specify whether the threat had turned out to be legitimate. But Voyager said its software was able to collect and analyze thousands of the activist’s social media posts and had scooped up data on 4,000 of his “friends”.

Voyager also said its software was able to discern which social media users caught up in the search were “top connections” of the activist and that it could determine who was based in New York and who worked for a government agency. The company claimed the software could also discern which of the accounts showed an “affinity” for “violent, radical ideologies” based on “indirect connections” to “extremist accounts”, appearing to refer to friends of friends.
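
Voyager has not disclosed how its “indirect connections” analysis actually works, and the experts quoted below doubt it can work as advertised. As a deliberately naive sketch of what friend-of-friend scoring generally amounts to, the following Python example is purely illustrative: the graph, the weights and the flagged accounts are all invented.

```python
# A naive illustration of "guilt by association" scoring on a social
# graph. Voyager has not disclosed its method; the friendships,
# weights and flagged accounts below are all invented.
from collections import defaultdict

edges = [("alice", "bob"), ("bob", "carol"),
         ("carol", "dave"), ("alice", "erin")]
flagged = {"dave"}  # accounts an analyst has labelled "extremist"

# Build an undirected friendship graph from the edge list.
friends = defaultdict(set)
for a, b in edges:
    friends[a].add(b)
    friends[b].add(a)

def affinity(user):
    """Weight flagged direct friends at 1.0, friends-of-friends at 0.5."""
    direct = friends[user]
    indirect = set().union(*(friends[f] for f in direct)) - direct - {user}
    return (sum(1.0 for f in direct if f in flagged)
            + sum(0.5 for f in indirect if f in flagged))

for user in sorted(friends):
    print(user, affinity(user))
```

Note what such a score actually measures: proximity in a friendship graph, not belief or intent – which is precisely the “guilt by association” problem the experts quoted below describe.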

In another presentation, Voyager suggested its software could not only collect large amounts of social media data but that its “artificial intelligence” could discern people’s beliefs.

Voyager showed LAPD how its software could have been used to investigate an alleged terrorist attack, analyzing the case of Adam Alsahli – a man killed after he opened fire at the Corpus Christi naval base in May 2020. Pointing to the man’s social media activity, Voyager claimed its AI could ascertain people’s “affinity for Islamic fundamentalism or extremism”. The company cited the shooter’s “pictures with Islamic themes” and said his Instagram accounts showed “his pride in and identification with his Arab heritage”. The company said its AI was so effective that its results, produced in minutes, did not “require any intervention or assessments by an analyst or investigator”.

Voyager showed the LAPD how its software could have been used to investigate an alleged terrorist attack. Photograph: LAPD records via the Brennan Center

In an October 2020 proposal document, Voyager also said its software could conduct a “sentiment analysis” to discern who was most emotionally invested and had the “passion needed to act on their beliefs”.

Voyager’s monitoring of broad online networks, and its claims about AI, raised red flags for experts.

“There’s a basic ‘guilt by association’ that Voyager seems to really endorse,” said Rachel Levinson-Waldman, a deputy director at the Brennan Center, about Voyager’s New York City case study. “This notion that you can be painted with the ideology of people that you’re not even directly connected to is really disturbing.”

The naval base shooting example was deeply troubling, said Meredith Broussard, a New York University data journalism professor and expert on AI, who reviewed the records for the Guardian. “Just because you have an affinity for Islam does not mean you’re a criminal or a terrorist. That is insulting and racist. It’s religious bigotry embedded in code.”

“This is hyperbolic AI marketing. The more they brag, the less I believe them,” said Cathy O’Neil, a data scientist and algorithmic auditor, arguing that the firm’s broad claims were not based in legitimate science and were unachievable: “They’re saying, ‘We can see if somebody has criminal intent.’ No, you can’t. Even people who commit crimes can’t always tell they have criminal intent.”

The consequences of this pseudoscience can be dire, she added: “Claims of accuracy don’t have to actually be true for the algorithm to be used as a weapon.”

Concerns about undercover spying

The documents show Voyager and LAPD officers also discussed some of the company’s most controversial proposals. In an October 2020 letter to the LAPD outlining details of a potential contract, Voyager claimed its social media monitoring was “traceless”, saying that the social media companies themselves would not be able to tell that LAPD was behind the surveillance.

In an earlier report to LAPD in 2019, Voyager said it was developing software to spy on WhatsApp groups using an “active persona mechanism”, or “avatar”, suggesting that police would create a fake account to collect information from a group.

In one September 2019 email to a Voyager sales representative, an LAPD technology official said the feature that allows police to “log in with fake accounts that are already friended with the target subject” was a “great function”, but added that the department was not heading in the direction of using that service.

It’s unclear if the LAPD ever used the fake account feature. In another September 2019 email, an LAPD official in the robbery and homicide division told Voyager that the “avatars” function was a “need-to-have” feature. And Voyager said in one document that some LAPD staffers piloting its services had requested the “active persona” feature for Facebook, Instagram and Telegram.

This feature could violate the policies of Facebook, which prohibits fake accounts and has previously deactivated users that it determined were police officers impersonating civilians. A Facebook spokesperson said members of law enforcement, like all users, were required to use their real names on their profiles.

Voyager said it was developing software to spy on WhatsApp groups using an ‘active persona mechanism’. Photograph: LAPD records via the Brennan Center

“As stated in our terms of services, misrepresentations and impersonations are not allowed on our services and we take action when we find violating activity,” a Facebook spokesperson, Sally Aldous, said in a statement.

Using fake accounts to monitor activists online was equivalent to undercover spying, civil rights advocates said.

The LAPD has policies for “online undercover activity” that establish some restrictions on this tactic, including requiring special approval from a supervisor if police are using a fake account to communicate with someone, but there is less oversight if an account is created to examine “trends” or for “conducting research”.

John Hamasaki, a criminal defense lawyer and member of the San Francisco police commission, said some police departments were updating policies to restrict the use of fake accounts in an effort to protect free speech. In San Francisco, he said, police would be barred from using a company like Voyager for broad surveillance of online networks. The type of predictive policing software that Voyager advertises is also strictly prohibited in Oakland, according to the city’s privacy commission.

“The problem with these types of surveillance operations is they’re often not based on reasonable suspicion or probable cause,” he said. “Instead, it is casting a broad net.”

Levinson-Waldman of the Brennan Center said it was unclear how widespread this kind of surveillance was in police departments across the US. She noted that while law enforcement departments were increasingly relying on social media in investigations, there was often little transparency.

The LAPD and the New York police department have two of the largest police budgets in the country and have a long history of piloting cutting-edge technology that ends up being ineffective or harmful, said Broussard, the author of Artificial Unintelligence: How Computers Misunderstand the World.

Even when the LAPD or NYPD cease using certain products, the companies end up bringing their tech elsewhere, she said: “The companies still want to sell software, so they go after smaller police forces that have even less capacity to evaluate the efficacy of these snake-oil software systems.”

Recent reporting has shown how the LAPD has used surveillance technology similar to Voyager’s to monitor Black Lives Matter organizing, and the department also recently said it was pursuing this kind of technology for “information gathering” in a report about reforms since the George Floyd protests. The LAPD did not respond to questions from the Guardian about whether Voyager was used for monitoring protesters.

In a report in September of this year, the department said it was “currently using” Voyager software and seeking $450,000 to purchase additional Voyager technology. But an LAPD spokesperson said this week that the department was not using the company’s software at the moment. She did not respond to questions about when LAPD ceased using the services and if the department was still pursuing a partnership.

Voyager declined to comment on its work with the LAPD and did not answer specific questions about its services. A spokesperson, Lital Carter Rosenne, said its clients were responsible for building databases and running the software, adding: “As a company, we follow the laws of all the countries in which we do business. We also have confidence that those with whom we do business are law-abiding public and private organizations.”

LA activists said the revelations raised serious concerns about how the tech could be used against groups that protest against the LAPD. “I’m really astounded that not only is LAPD using these companies, but that there are these tactics which feel very much like digital infiltration,” said Dr Melina Abdullah, a co-founder of Black Lives Matter LA. “It demonstrates that our fears are true.”

Abdullah, who had not heard of Voyager Labs, said she was particularly disturbed to learn about potential monitoring of WhatsApp groups: “We know that our public posts are monitored. But when they’re engaging in additional digging into private posts, that is supposed to be a more secure way of communicating.”


Facial recognition firms should take a look in the mirror | John Naughton


Last week, the UK Information Commissioner’s Office (ICO) slapped a £7.5m fine on a smallish tech company called Clearview AI for “using images of people in the UK, and elsewhere, that were collected from the web and social media to create a global online database that could be used for facial recognition”. The ICO also issued an enforcement notice, ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet and to delete the data of UK residents from its systems.

Since Clearview AI is not exactly a household name, some background might be helpful. It’s a US outfit that has “scraped” (ie digitally collected) more than 20bn images of people’s faces from publicly available information on the internet and social media platforms all over the world to create an online database. The company uses this database to provide a service that allows customers to upload an image of a person to its app, which is then checked for a match against all the images in the database. The app produces a list of images that have similar characteristics to those in the photo provided by the customer, together with a link to the websites whence those images came. Clearview describes its business as “building a secure world, one face at a time”.
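
Clearview has not published its implementation, but the service described above is, conceptually, a similarity search over face embeddings. The Python sketch below is a guess at the general shape only: embed() is a stand-in for a trained face-embedding model, and the database entries are invented.

```python
# Conceptual sketch of a face-similarity search. Not Clearview's code:
# embed() stands in for a trained face-embedding network, and the
# "scraped" database below is invented.
import numpy as np

def embed(image):
    """Map a face image to a unit-length vector. A real system uses a
    neural network; this stand-in just derives a vector from the pixels."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

# Database of scraped images: embedding vector plus the source URL.
database = [(embed(np.full((64, 64), fill)), f"https://example.com/photo{i}")
            for i, fill in enumerate((10, 20, 30))]

def search(query_image, top_k=2):
    """Rank database entries by cosine similarity to the query face."""
    q = embed(query_image)
    scored = [(float(q @ vec), url) for vec, url in database]
    return sorted(scored, reverse=True)[:top_k]

print(search(np.full((64, 64), 10)))  # best match: photo0, similarity 1.0
```

At Clearview’s claimed scale the linear scan would be replaced by an approximate nearest-neighbour index over billions of vectors, but the contract is the same: one probe image in, a ranked list of look-alike faces and source links out.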

The fly in this soothing ointment is that the people whose images make up the database were not informed that their photographs were being collected or used in this way and they certainly never consented to their use in this way. Hence the ICO’s action.

Most of us had never heard of Clearview until January 2020, when Kashmir Hill, a fine tech journalist, revealed its existence in the New York Times. It was founded by Hoan Ton-That, a tech entrepreneur, and Richard Schwartz, who had been an aide to Rudy Giuliani when he was mayor of New York and still, er, respectable. The idea was that Ton-That would supervise the creation of a powerful facial-recognition app while Schwartz would use his bulging Rolodex to drum up business interest.

It didn’t take Schwartz long to realise that US law enforcement agencies would go for it like ravening wolves. According to Hill’s report, the Indiana state police was the company’s first customer. In February 2019 it solved a case in 20 minutes. Two men had got into a fight in a park, which ended with one shooting the other in the stomach. A bystander recorded the crime on a smartphone, so the police had a still of the gunman’s face to run through Clearview’s app. They immediately got a match. The man appeared in a video that someone had posted on social media and his name was included in a caption on the video clip. Bingo!

Clearview’s marketing pitch played to the law enforcement gallery: a two-page spread, with the left-hand page dominated by the slogan “Stop Searching. Start Solving” in what looks like 95-point Helvetica Bold. Underneath would be a list of annual subscription options – anything from $10,000 for five users to $250,000 for 500. But the killer punch was that there was always, somewhere, a trial subscription option that an individual officer could use to see if the thing worked.

The underlying strategy was shrewd. Selling to corporations qua corporations from the outside is hard. But if you can get an insider, even a relatively junior one, to try your stuff and find it useful, then you’re halfway to a sale. It’s the way that Peter Thiel got the Pentagon to buy the data-analysis software of his company Palantir. He first persuaded mid-ranking military officers to try it out, knowing that they would eventually make the pitch to their superiors from the inside. And guess what? Thiel was an early investor in Clearview.

It’s not clear how many customers the company has. Internal company documents leaked to BuzzFeed in 2020 suggested that up to that time people associated with 2,228 law enforcement agencies, companies and institutions had created accounts and collectively performed nearly 500,000 searches – all of them tracked and logged by the company. In the US, the bulk of institutional purchases came from local and state police departments. Overseas, the leaked documents suggested that Clearview had expanded to at least 26 countries outside the US, including the UK, where searches (perhaps unauthorised) by people in the Met, the National Crime Agency and police forces in Northamptonshire, North Yorkshire, Suffolk, Surrey and Hampshire were logged by Clearview servers.

Reacting to the ICO’s fine, the law firm representing Clearview said that the fine was “incorrect as a matter of law”, because the company no longer does business in the UK and is “not subject to the ICO’s jurisdiction”. We’ll see about that. But what’s not in dispute is that many of the images in the company’s database are of social media users who are very definitely in the UK and who didn’t give their consent. So two cheers for the ICO.

What I’ve been reading

A big turn off
About Those Kill-Switched Ukrainian Tractors is an acerbic blog post on Medium by Cory Doctorow on the power that John Deere has to remotely disable not only tractors stolen by Russians from Ukraine, but also those bought by American farmers.

Out of control
Permanent Pandemic is a sobering essay in Harper’s by Justin EH Smith asking whether controls legitimised by fighting Covid will ever be relaxed.

Right to bear arms?
In Heather Cox Richardson’s Substack newsletter on the “right to bear arms”, the historian reflects on how the second amendment has been bent out of shape to meet the gun lobby’s needs.


AI should be recognized as inventors in patent law


In brief: Governments around the world should pass intellectual property laws that grant rights to AI systems, two academics at the University of New South Wales in Australia argued.

Alexandra George and Toby Walsh, professors of law and AI respectively, believe failing to recognize machines as inventors could have long-lasting impacts on economies and societies.

“If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge,” they wrote in a comment article published in Nature. “Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.”

Today’s laws pretty much only recognize humans as inventors, with IP rights protecting them from patent infringement. Attempts to overturn the human-centric laws have failed. Stephen Thaler, a developer who insists AI invented his company’s products, has sued patent offices in multiple countries, including the US and UK, to no avail.

George and Walsh are siding with Thaler’s position. “Creating bespoke law and an international treaty will not be easy, but not creating them will be worse. AI is changing the way that science is done and inventions are made. We need fit-for-purpose IP law to ensure it serves the public good,” they wrote.

Dutch police generate deepfake of dead teenager in criminal case

Police released a video clip in which the face of a 13-year-old boy, who was shot dead outside a metro station in the Netherlands, was swapped onto another person’s body using AI technology.

Sedar Soares died in 2003 and officers have never managed to solve the case. With his family’s permission, they generated a deepfake that places his face on a kid playing football in a field, presumably to help jog people’s memories. The cops have since received dozens of potential leads, according to The Guardian.

It appears to be the first time AI-generated images have been used to try to solve a criminal case. “We haven’t yet checked if these leads are usable,” said Lillian van Duijvenbode, a Rotterdam police spokesperson.


AI task force advises Congress to fund national computing infrastructure

America’s National Artificial Intelligence Research Resource (NAIRR) task force urged Congress to launch a “shared research cyberinfrastructure” to better provide academics with the hardware and data resources needed to develop machine-learning tech.

The playing field of AI research is unequal. State-of-the-art models are often packed with billions of parameters; developers need access to lots of computer chips to train them. It’s why research at private companies seems to dominate, while academics at universities lag behind.

“We must ensure that everyone throughout the Nation has the ability to pursue cutting-edge AI research,” the NAIRR wrote in a report. “This growing resource divide has the potential to adversely skew our AI research ecosystem, and, in the process, threaten our nation’s ability to cultivate an AI research community and workforce that reflect America’s rich diversity — and harness AI in a manner that serves all Americans.”

If AI progress is driven by private companies, other research areas could be left out and underdeveloped. The task force argued for “growing and diversifying approaches to and applications of AI and opening up opportunities for progress across all scientific fields and disciplines, including in critical areas such as AI auditing, testing and evaluation, trustworthy AI, bias mitigation, and AI safety.”


Meta offers musculoskeletal research tech

Researchers at Meta AI released MyoSuite, a set of musculoskeletal models and tasks that simulate the biomechanical movement of limbs for a whole range of applications.

“The more intelligent an organism is, the more complex the motor behavior it can exhibit,” they said in a blog post. “So an important question to consider, then, is — what enables such complex decision-making and the motor control to execute those decisions? To explore this question, we’ve developed MyoSuite.”

MyoSuite was built in collaboration with researchers at the University of Twente in the Netherlands, and is aimed at developers studying prosthetics and patient rehabilitation. There is another potentially useful application for Meta, however: building more realistic avatars that can move more naturally in the metaverse.

The models only simulate the movements of arms and hands so far. Tasks include using machine learning to simulate the manipulation of a die or the rotation of two balls. The application of MyoSuite in Meta’s metaverse is a little ironic, given that touching is banned there and hands are restricted to deter harassment. ®


A day in the life of a metaverse specialist


Unity’s Antonia Forster discusses her work using AR, VR and everything in between, and why ignoring imposter syndrome is particularly important in the world of emerging technology.

We’ve started hearing a lot about the metaverse and what it means for the future, including how it might affect recruitment and the working world.

But what is it like to actually work within this space? Antonia Forster is an extended reality (XR) technical specialist at video game software development company Unity Technologies, with several years of experience developing XR applications.


In her role at Unity, she works across a variety of industries, from automotive to architecture, creating demos and delivering talks using XR, which encapsulates AR, VR and everything in between.

‘I watch a lot more YouTube tutorials than you might expect’
– ANTONIA FORSTER

If there is such a thing, can you describe a typical day in the job?

It’s challenging to describe a typical day because they vary so much!  I work completely remotely with flexible hours. Most of my team are based in the US while I’m in the UK. In order to manage the time difference, I usually start work around 11am and work until 7pm.

Most of my day is spent on developing content, whether that’s using Unity and C# to code a technical demo, creating video content to help onboard new starters with Unity’s tools, or writing a script for a webinar.

Before the pandemic, a role like mine would involve lots of travel and speaking at conferences. But unfortunately, that’s a little more challenging now.

We use a whole range of tools, from organisational ones like Asana to manage our projects, to Slack and Google Docs to coordinate with each other, to Unity’s own technical tools to create content.

All of Unity’s XR tools fall under my remit, so I might be creating VR content one day and creating an AR mobile app the next. I also use Unity and C# to create my own projects outside of work. For example, I co-created the world’s first LGBTQ+ virtual reality museum, which has been officially selected for Tribeca Film Festival in June 2022 – during Pride!

What types of project do you work on?

At Unity, my role is to create content that helps people understand our tools and get excited about all the different things they make possible. For example, for one project I visited a real construction site and used one of Unity’s tools (VisualLive) to see the virtual model of the building overlaid on top of the real physical construction.

This makes it very easy to see the difference between the plan and the actual reality, which is very important to avoid clashes and costly mistakes. For another project, I used VR and hand-tracking to demonstrate how someone could showcase a product (say, a car) inside a VR showroom and then interact with it using hand tracking and full-body tracking.

What skills do you use on a daily basis?

The most relevant skill for my role is the ability to break a larger problem down into small steps and then solve each step. That’s really all programming is! That, and knowing the right terms to Google, understanding the solutions you find well enough to implement them, and continuing to search when a solution doesn’t fit your problem.
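
As a toy illustration of that decomposition, here is a short Python sketch; the task, log format and sample data are invented for the example. “Report the average frame time from a log” splits into small steps, each solvable on its own.

```python
# Hypothetical task: report the average frame time from a log.
# The log format and sample data are invented for illustration.

def parse_frame_times(lines):
    """Step 1: pull the millisecond value out of lines like 'frame 16.7'."""
    return [float(line.split()[1]) for line in lines if line.startswith("frame")]

def average(values):
    """Step 2: reduce the list of numbers to a single figure."""
    return sum(values) / len(values) if values else 0.0

# Step 3: glue the steps together on some sample data.
sample = ["frame 16.7", "info vsync on", "frame 33.4", "frame 16.6"]
print(f"average frame time: {average(parse_frame_times(sample)):.1f} ms")
```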

Despite my title, I don’t think of myself as highly ‘technical’. I’m an entirely self-taught software developer, and I’m a visual learner, so I watch a lot more YouTube tutorials than you might expect!

Another crucial skill is persistence, because VR and AR are emerging, fast-moving technologies that are constantly changing. When a tutorial or a solution didn’t work, I used to grapple with the feeling that maybe I wasn’t good enough.

In reality, this technology changes so often that if a tutorial is six months old, it might be out of date. Learning to be resilient and persistent and to ignore my feelings of imposter syndrome was the most important thing I’ve learned on my career journey. Your feelings are not facts, and imposter syndrome is extremely common in this industry.

What are the hardest parts of your working day?

One of the most difficult challenges of my working day is the isolation. I work remotely and many of my team are on a different time zone, so we’re not always able to chat. To overcome that, I prioritise social engagements outside of work.

When I’m extremely busy with my own projects – like the LGBTQ+ VR museum – I go to co-working spaces so that I can at least be around other people during working hours.

I also struggle with time blindness. I have ADHD and working remotely means that it’s easy to get absorbed in a task and forget to take breaks. I set alarms to snap myself out of my ‘trance’ at certain times, like lunchtime. I have to admit though, it doesn’t always work!

Do you have any productivity tips that help you through the day?

My main tip for productivity is to find what works for you, not what works for other people, or what others think should work for you.

For example, I am a night owl, so starting my day a little later and working into the night works well for me. It also means I can sync with my team in the US. I don’t find time to play video games, play the piano or meet up with my friends in the evening, so instead I arrange those things for the morning, which helps me persuade myself to get out of bed!

In the same way, when I was learning to code, people gave me advice like: ‘Break things and fix it, to see how it works’. But that produced a lot of anxiety for me and didn’t work well.

Instead, I learned using my own methods, like writing songs, drawing cartoons and even physically printing and gluing code snippets into a notebook with the English translation written underneath. Code, after all, is a language, so I treated it the same way. Find what works for you, even if it’s not conventional!

How has this role changed as this sector has grown and evolved?

I began this role in 2020. Typically – before the pandemic – my job would have been described as ‘technical evangelism’, which involves a lot of public speaking and travel to conferences.

Of course, that wasn’t really possible, so my role has evolved into creating content of different types – webinars online, videos, onboarding tutorials and technical demos for marketing and sales enablement.

While I really enjoy public speaking, the lack of travel has given me time to get deeply familiar with Unity’s XR tooling and sharpen my technical expertise. This technology is always changing so it’s really important to constantly learn and grow. Luckily, I have an insatiable curiosity and appetite for knowledge. I think all engineers do!

What do you enjoy most about the job?

I have two favourite things about this job. First, the autonomy. Since I have a deep understanding of the tools and our users/audience, I’m trusted to design and propose my own solutions that best meet the user needs.

Secondly, the technology itself. Being able to create VR or AR content is like sorcery! I can conjure anything from nothing. I can create entire worlds that I can step into based only on my imagination. And so can anybody that learns this skill – and it’s easier than you think! That has never stopped being magical and exciting to me, and I don’t think it ever will.

