Technology

‘A catastrophic failure’: computer scientist Hany Farid on why violent videos circulate on the internet | Social media

Voice Of EU

In the aftermath of yet another racially motivated shooting that was live-streamed on social media, tech companies are facing fresh questions about their ability to effectively moderate their platforms.

Payton Gendron, the 18-year-old gunman who killed 10 people in a largely Black neighborhood in Buffalo, New York, on Saturday, broadcast his violent rampage on the video-game streaming service Twitch. Twitch says it took down the video stream in mere minutes, but that was still enough time for people to create edited copies of the video and share them on other platforms including Streamable, Facebook and Twitter.

So how do tech companies work to flag and take down videos of violence that have been altered and spread on other platforms in different forms – forms that may be unrecognizable from the original video in the eyes of automated systems?

On its face, the problem appears complicated. But according to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren’t financially motivated to invest resources into developing it.

Farid’s work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded. The Guardian spoke with Farid about the wider problem of barring unwanted content from online platforms, and whether tech companies are doing enough to fix the problem.

This interview has been edited for length and clarity. Twitch, Facebook and YouTube did not immediately respond to a request for comment.

Twitch says that it took the Buffalo shooter’s video down within minutes, but edited versions of the video still proliferated, not just on Twitch but on many other platforms. How do you stop the spread of an edited video on multiple platforms? Is there a solution?

It’s not as hard a problem as the technology sector would have you believe. There are two things at play here. One is the live video: how quickly it could and should have been found, and how we limit distribution of that material.

The core technology to stop redistribution is called “hashing” or “robust hashing” or “perceptual hashing”. The basic idea is quite simple: you have a piece of content that is not allowed on your service either because it violated terms of service, it’s illegal or for whatever reason, you reach into that content, and extract a digital signature, or a hash as it’s called.

This hash has some important properties. The first one is that it’s distinct. If I give you two different images or two different videos, they should have different signatures, a lot like human DNA. That’s actually pretty easy to do. We’ve been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.
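The three properties Farid lists — distinctness, stability under modification, and fast comparison — can be illustrated with a toy perceptual hash. The sketch below implements a simple difference hash (dHash) over a tiny grayscale image represented as nested lists; it is a minimal illustration of the idea, not the algorithm any platform actually uses.

```python
# Toy difference hash (dHash): record whether each pixel is darker than its
# right-hand neighbour. A uniform brightness shift changes pixel values but
# rarely flips those comparisons, so the signature stays stable.

def dhash(image):
    """image: 2D list of grayscale values (rows of equal length)."""
    bits = []
    for row in image:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(h1, h2):
    """Number of differing bits between two signatures."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [10, 40, 20, 80],
    [90, 30, 60, 50],
    [15, 70, 25, 95],
]

# A uniform brightness increase (e.g. a re-encoding artefact) ...
brightened = [[p + 12 for p in row] for row in original]
# ... leaves the signature unchanged:
assert hamming(dhash(original), dhash(brightened)) == 0

# A genuinely different image produces a distant signature.
other = [
    [95, 25, 70, 15],
    [50, 60, 30, 90],
    [80, 20, 40, 10],
]
print(hamming(dhash(original), dhash(other)))  # → 9 (all 9 bits differ)
```

Comparing signatures is a bit-count, so the third property — fast extraction and comparison — comes along for free.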

So if we had a technology that satisfied all of those criteria, Twitch would say, we’ve identified a terror attack that’s being live-streamed. We’re going to grab that video. We’re going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded, its hash is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.
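The workflow Farid describes — extract a signature once, share it across the industry, and check every upload against a growing database — can be sketched as follows. The class, the threshold value and the 8-bit signatures are illustrative assumptions, not any platform’s real interface.

```python
# Minimal sketch of a shared hash database: uploads are matched against
# known signatures within a Hamming-distance threshold, so lightly
# modified copies of banned content still match.

THRESHOLD = 2  # max differing bits to count as "the same content" (assumed)

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

class HashDatabase:
    def __init__(self):
        self.known = []  # signatures of content banned from the service

    def add(self, signature):
        self.known.append(signature)

    def matches(self, signature):
        return any(hamming(signature, k) <= THRESHOLD for k in self.known)

db = HashDatabase()
db.add((1, 0, 1, 0, 1, 0, 1, 0))         # signature shared across platforms

exact_copy  = (1, 0, 1, 0, 1, 0, 1, 0)   # re-upload, unmodified
light_edit  = (1, 0, 1, 1, 1, 0, 1, 0)   # one bit flipped by re-encoding
other_video = (0, 1, 0, 1, 0, 1, 0, 1)   # unrelated content

print(db.matches(exact_copy))   # True  -> blocked at upload
print(db.matches(light_edit))   # True  -> still blocked
print(db.matches(other_video))  # False -> allowed
```

The threshold is the tuning knob: too low and edited copies slip through, too high and unrelated videos get flagged.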

How do tech companies respond right now and why isn’t it sufficient?

It’s a problem of collaboration across the industry and it’s a problem of the underlying technology. And if this was the first time it happened, I’d understand. But this is not, this is not the 10th time. It’s not the 20th time. I want to emphasize: no technology’s going to be perfect. It’s battling an inherently adversarial system. But this is not a few things slipping through the cracks. Your main artery is bursting. Blood is gushing out a few liters a second. This is not a small problem. This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and with the one before that, it is inexcusable from a technological standpoint.

But the companies are not motivated to fix the problem. And we should stop pretending that these are companies that give a shit about anything other than making money.

Talk me through the existing issues with the tech that they are using. Why isn’t it sufficient?

I don’t know all the tech that’s being used. But the problem is the resilience to modification. We know that our adversary – the people who want this stuff online – are making modifications to the video. They’ve been doing this with copyright infringement for decades now. People modify the video to try to bypass these hashing algorithms. So [the companies’] hashing is just not resilient enough. They haven’t learned what the adversary is doing and adapted to that. And that is something they could do, by the way. It’s what virus filters do. It’s what malware filters do. [The] technology has to constantly be updated to new threat vectors. And the tech companies are simply not doing that.
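The resilience Farid is describing is exactly what an ordinary cryptographic hash lacks. A quick sketch with Python’s standard hashlib shows why exact matching alone cannot catch modified copies: flipping a single bit yields a completely unrelated digest.

```python
import hashlib

# An exact hash such as SHA-256 identifies only byte-identical files:
# a one-bit modification to a video produces an unrelated digest, so an
# exact-match blocklist is trivially defeated by re-encoding.

video = bytes(range(256)) * 4          # stand-in for a video file's bytes
edited = bytearray(video)
edited[0] ^= 1                         # flip a single bit

h_orig = hashlib.sha256(video).hexdigest()
h_edit = hashlib.sha256(bytes(edited)).hexdigest()

print(h_orig == h_edit)                # False: exact matching fails
# A perceptual hash, by contrast, is designed so that small edits leave
# the signature close enough to the original to still match.
```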

Why haven’t companies implemented better tech?

Because they’re not investing in technology that is sufficiently resilient. This is that second criterion that I described. It’s easy to have a crappy hashing algorithm that sort of works. But if somebody is clever enough, they’ll be able to work around it.

When you go on to YouTube and you click on a video and it says, sorry, this has been taken down because of copyright infringement, that’s a hashing technology. It’s called Content ID. And YouTube has had this technology forever because in the US, we passed the DMCA, the Digital Millennium Copyright Act, which says you can’t host copyrighted material. And so the company has gotten really good at taking it down. For you to still see copyrighted material, it has to be really radically edited.

So the fact that no small number of modified copies got through is simply because the technology isn’t good enough. And here’s the thing: these are now trillion-dollar companies we are talking about collectively. How is it that their hashing technology is so bad?

These are the same companies, by the way, that know just about everything about everybody. They’re trying to have it both ways. They turn to advertisers and tell them how sophisticated their data analytics are so that they’ll pay them to deliver ads. But then when it comes to us asking them, why is this stuff on your platform still? They’re like, well, this is a really hard problem.

The Facebook files showed us that companies like Facebook profit from getting people to go down rabbit holes. But a violent video spreading on your platform is not good for business. Why isn’t that enough of a financial motivation for these companies to do better?

I would argue that it comes down to a simple financial calculation that developing technology that is this effective takes money and it takes effort. And the motivation is not going to come from a principled position. This is the one thing we should understand about Silicon Valley. They’re like every other industry. They are doing a calculation. What’s the cost of fixing it? What’s the cost of not fixing it? And it turns out that the cost of not fixing is less. And so they don’t fix it.

Why is it that you think the pressure on companies to respond to and fix this issue doesn’t last?

We move on. They get bad press for a couple of days, they get slapped around in the press and people are angry and then we move on. If there was a hundred-billion-dollar lawsuit, I think that would get their attention. But the companies have phenomenal protection from the misuse and the harm from their platforms. They have that protection here. In other parts of the world, authorities are slowly chipping away at it. The EU announced the Digital Services Act that will put a duty of care [standard on tech companies]. That will start saying, if you do not start reining in the most horrific abuses on your platform, we are going to fine you billions and billions of dollars.

[The DSA] would impose pretty severe penalties on companies, up to 6% of global turnover, for failure to abide by the legislation, and there’s a long list of things that they have to abide by, from child safety issues to illegal material. The UK is working on its own digital safety bill that would put in place a duty of care standard that says tech companies can’t hide behind the fact that it’s a big internet, it’s really complicated and they can’t do anything about it.

And look, we know this will work. Prior to the DMCA it was a free-for-all out there with copyright material. And the companies were like, look, this is not our problem. And when they passed the DMCA, everybody developed technology to find and remove copyright material.

It sounds like the auto industry as well. We didn’t have seat belts until we created regulation that required seat belts.

That’s right. I’ll also remind you that in the 1970s there was a car called the Ford Pinto where they put the gas tank in the wrong place. If somebody bumped into you, your car would explode and everybody would die. And what did Ford do? They said, OK, look, we can recall all the cars, fix the gas tank. It’s gonna cost this amount of dollars. Or we just leave it alone, let a bunch of people die, settle the lawsuits. It’ll cost less. That’s the calculation, it’s cheaper. The reason that calculation worked is because tort reform had not actually gone through. There were caps on these lawsuits that said, even when you knowingly allow people to die because of an unsafe product, we can only sue you for so much. And we changed that and it worked: products are much, much safer. So why do we treat the offline world in a way that we don’t treat the online world?

For the first 20 years of the internet, people thought that the internet was like Las Vegas. What happens on the internet stays on the internet. It doesn’t matter. But it does. There is no online and offline world. What happens on the online world very, very much has an impact on our safety as individuals, as societies and as democracies.

There’s some conversation about duty of care in the context of section 230 here in the US – is that what you envision as one of the solutions to this?

I like the way the EU and the UK are thinking about this. We have a huge problem on Capitol Hill, which is, although everybody hates the tech sector, it’s for very different reasons. When we talk about tech reform, conservative voices say we should have less moderation because moderation is bad for conservatives. The left is saying the technology sector is an existential threat to society and democracy, which is closer to the truth.

So what that means is the regulation looks really different when you think the problem is something other than what it is. And that’s why I don’t think we’re going to get a lot of movement at the federal level. The hope is that between [regulatory moves in] Australia, the EU, UK and Canada, maybe there could be some movement that would put pressure on the tech companies to adopt some broader policies that satisfy the duty here.

Welsh council extends contract for Oracle EBS 12.1 • The Register

Swansea City Council has been forced to extend an IT service provider contract to keep its unsupported and unpatched ERP system up and running because its replacement is running two years behind.

A procurement document published last week shows Infosys was awarded a £2 million (c $2.4 million) contract extension, until 30 November 2023, to support the Welsh council’s Oracle eBusiness Suite ERP system while it waits for the replacement Oracle Fusion system to be ready. It takes Infosys’s total for supporting the old system to £6.7 million (c $8.1 million).

Council documents reveal the authority runs its finance and HR systems on EBS R12.1, which moved into Oracle Sustaining Support in January 2022 and will therefore no longer receive new fixes, updates, or security patches.

In 2019, the council approved plans to move to Oracle Fusion [PDF] in the expectation that the new system would be live by November 2020.

However, the council had to change the schedule due to time lost to the COVID-19 pandemic.

Council risked failing its Public Service Network accreditation: report

It said using unsupported software “increased the risk of cyber-attacks and potential data theft” while there was also “a risk payroll may not function, staff and pensioners may not be paid.” The report also said the council risked failing its Public Service Network (PSN) accreditation, which meant it could be prevented from sharing data with the health service, police, and the Department for Work and Pensions (DWP).

However, in March 2020, the council invoked a force majeure clause – which alters parties’ contractual obligations – with the support provider Infosys and began discussions to resume the program.

It opted to suspend the program and start back up in February 2021, with the aim to go live in October 2021. The plans said Infosys had agreed to absorb additional costs for this extension.

It said Oracle had agreed to extend support from November 2020 to 2022 so it could get regular updates and patches. “Although this risk still exists, it has been mitigated,” it said.

Given the council has awarded a contract well after the planned go-live of its replacement, it seems those assumptions are under threat. The council has so far failed to respond to The Register‘s request for comment.

Its Fusion project might provide some lessons for the London Borough of Waltham Forest, which plans to have its core solution – a move away from an ageing SAP ERP system – live within a year of the project’s start.

Other councils have learned about the complexity of ERP projects the hard way. In December last year, The Register revealed that West Sussex County Council was facing a two-year delay to a £7.5 million ($9.2 million) Oracle ERP project to replace its 20-year-old SAP system with Fusion. Surrey County Council has also seen its project to move from SAP to Unit4 delayed and incur additional costs. ®

Sony shifts focus to PC gamers with new Inzone monitors and headsets

Sony aims to boost the ‘growth of gaming culture’ with two 27-inch monitors and three headsets, designed for both PC and PlayStation gamers.

Sony has announced a batch of new monitors and headsets with a focus on PC gamers, as the company looks to reach out to more than its core PlayStation audience.

The Inzone range consists of two 27-inch monitors and three headsets, which are all designed to enhance a gamer’s experience. While the main target appears to be PC gamers, the products have features that suit PS5 users.

Sony said its Inzone M9 monitor has 4K resolution and a high contrast with full array local dimming, designed to boost the detail of gaming scenes in deep black and brightness. The monitor also has a 144Hz refresh rate, an IPS display and a 1ms response time. Sony said the monitor will help lead to quicker reactions, which is a clear benefit for competitive PC gamers.

Meanwhile, the M3 monitor will have a 240Hz refresh rate, along with variable refresh rate technology to help gamers “capture movements of rivals in shooter games”.
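The refresh-rate figures translate directly into frame intervals, which is where the “quicker reactions” claim comes from. The numbers below are straightforward arithmetic, not Sony’s measurements:

```python
# Time between screen updates at each refresh rate: a 240Hz panel delivers
# a new frame roughly 2.8ms sooner than a 144Hz one.
for hz in (60, 144, 240):
    print(f"{hz:3d} Hz -> {1000 / hz:.2f} ms per frame")
# prints:
#  60 Hz -> 16.67 ms per frame
# 144 Hz -> 6.94 ms per frame
# 240 Hz -> 4.17 ms per frame
```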

To go with the monitors, Sony is releasing two wireless headsets, along with the wired Inzone H3 model. The Inzone H9 will have 32 hours of battery life, while the H7 model will have 40 hours.

Speaking on the products, Sony’s head of game business and marketing office Yukihiro Kitajima said there has been a higher interest in gaming with the spread of e-sports tournaments and the advancement of gaming entertainment.

“With Sony’s strong history of high-end audio and visual technology products, we believe this new line will offer even more options for those looking to upgrade their current gaming systems,” Kitajima said.

“We are committed to contributing to the growth of gaming culture by providing PC and PlayStation gamers with a wider range of options to enrich lives through gaming.”

The Inzone headsets range from €100 to €300 and are expected to launch in July, while the M9 monitor is due to launch this summer at a cost of €1,099. Sony said the pricing and availability of the M3 monitor are expected to be revealed sometime this year.

Why US women are deleting their period tracking apps | Privacy

Many American women in recent days have deleted period tracking apps from their cellphones, amid fears the data collected by the apps could be used against them in future criminal cases in states where abortion has become illegal.

The trend began last month, when a leaked draft supreme court opinion suggested the court was set to overturn Roe v Wade, and has only intensified since the court on Friday revoked the federal right to abortion.

These concerns are not baseless. As with various other apps, cycle trackers collect, retain and at times share some of their users’ data. In a state where abortion is a crime, prosecutors could request information collected by these apps when building a case against someone. “If they are trying to prosecute a woman for getting an illegal abortion, they can subpoena any app on their device, including period trackers,” said Sara Spector, a Texas-based criminal defense attorney, and ex-prosecutor.

“But every company has their individual storage and privacy policy about how they use and how long they store data,” Spector added.

Cycle trackers are popular for a reason. Nearly a third of American women have been using them, according to a 2019 survey published by the Kaiser Family Foundation. They have helped make women’s lives easier in many ways, from family planning and detecting early signs of health issues to choosing the perfect time for a holiday.

A 2019 study published in the British Medical Journal (BMJ) found that 79% of medicines-related health apps available through the Google Play store, including apps that help with drug management, adherence, medicines information or prescribing, regularly shared user data and were “far from transparent”. But many of the big players have made progress in the years since.

The Berlin-based period tracker app Clue says it does not store sensitive personal data without the user’s explicit permission. Photograph: Piotr Swat/Alamy

Two of the most popular period trackers in the US, Flo and Clue, have more than 55 million users combined. The Berlin-based app Clue said it was “committed to protecting” users’ private health data and that it was operating under strict European GDPR laws. The company’s website says the app collects device data, event and usage data, in addition to a user’s IP address, health and sensitive data it may use for the purpose of improving the app, the services, and preventing abusive use of its service. But Clue does not track users’ precise location, and says it does not store sensitive personal data without a user’s explicit permission. The company also tweeted that it would have a “primary legal duty under European law” not to disclose any private health data and it would “not respond to any disclosure request or attempted subpoena of their users’ health data by US authorities”.

But just because data is being processed by a European company, doesn’t mean that it is entirely immune from US prosecution, said Lucie Audibert, a lawyer at Privacy International, a global NGO that researches, litigates and advocates against abuses of technology and data by governments and corporations.

“The fact that GDPR applies is not that relevant in this case. When it comes to a legitimate legal request from US authorities European companies usually comply. Also, a European company may be hosting data outside the EU, making it subject to different legal frameworks and cross-border agreements,” Audibert added. She also stressed that using a Europe-based app won’t protect women from the courts requesting data from them directly. But it can be a slightly better option than using a US-based one because US companies are more easily compelled to comply with American authorities and courts’ requests. Enforcement is more difficult against European ones.

Flo has come under fire for sharing its users’ data before. The company says on its website it only uses data “for research activities” and that it only uses “de-identified or aggregated data, which cannot be associated” with specific users. But an investigation by the Wall Street Journal found that the app informed Facebook when a user was on their period or if they intended to get pregnant. In 2021, the Federal Trade Commission (FTC) reached a settlement with Flo. Under the settlement, Flo must undergo an independent review of its privacy policy and obtain user permissions before sharing personal health information. Flo did not admit any wrongdoing.

On Friday, Flo announced that it will soon be launching an “Anonymous mode” that can help keep users’ data safe in any circumstances.

The company did not respond to a request for comment.

A relatively new, astrology-focused period tracker, Stardust, became the most downloaded free app on iOS in the days after the supreme court’s decision. Stardust’s Twitter bio says it is a “privacy first period tracking app”. But as Vice News reported, the company stated in its privacy policy that if authorities ask for user data, it will comply, whether legally required to or not. It said that the data was “anonymized” and “encrypted”.

“We may disclose your anonymized, encrypted information to third parties in order to protect the legal rights, safety and security of the company and the users of our services; enforce our terms of service; prevent fraud; and comply with or respond to law enforcement or a legal process or a request for cooperation by a government or other entity, whether or not legally required,” their privacy policy stated as of Monday.

Following Vice’s request for comment, Stardust changed its privacy policy, replacing the phrase about cooperating with law enforcement “whether or not legally required” with “when legally required”.

Stardust did not immediately respond to a request for comment.

Planned Parenthood encourages people to use their app Spot On. “People who want to track their periods and birth control always have the option to remain anonymous by using the Spot On app without creating an account,” the organization said in a statement. “This way, period or birth control data is only saved locally to a person’s phone and can be deleted at any time by deleting the app.”

Third-party apps are not the only option when it comes to period trackers. Apple has a built-in cycle tracker in its Health app that offers more privacy than most external apps. With just a few steps, one can turn off the storing of their health data in iCloud, and it also has the option to store the encrypted data on their computer or phone.

Evan Greer, deputy director of the non-profit advocacy group Fight for the Future, said the best way to protect sensitive health data was to use only apps that store data locally rather than in the cloud, “because any app where a company [that could receive a subpoena] has access to their users’ data could leave it vulnerable to a legal request”.
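Storing data locally rather than in the cloud amounts to a local-first design: the record lives only in a file on the device, so there is no company-held copy to subpoena, and deleting the app deletes the data. The sketch below is a hypothetical illustration — the filename, structure and functions are assumptions, not any real app’s design.

```python
import json
from pathlib import Path

# Local-first cycle log: entries are written only to a file on the device
# and never sent to a server, so there is no company-held copy to request.

LOG_PATH = Path("cycle_log.json")  # assumed location; a real app would use
                                   # the platform's private app storage

def load_entries():
    if LOG_PATH.exists():
        return json.loads(LOG_PATH.read_text())
    return []

def add_entry(date, note):
    entries = load_entries()
    entries.append({"date": date, "note": note})
    LOG_PATH.write_text(json.dumps(entries))

def delete_all():
    # "Deleted at any time by deleting the app" reduces to removing the file.
    LOG_PATH.unlink(missing_ok=True)

delete_all()                       # start from a clean slate
add_entry("2022-06-01", "cycle start")
print(len(load_entries()))         # 1
delete_all()                       # user deletes the app -> data is gone
print(load_entries())              # []
```

The trade-off is that local-only data is lost if the phone is, which is one reason apps push cloud sync in the first place.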

Apple’s Health app has a built-in cycle tracker that offers users privacy. Photograph: Richard Sheppard/Alamy

Eva Blum-Dumontet, a tech policy consultant, said, “It is normal that in times of concern, people are looking differently at technology and apps that we trusted.

“I think when there is a discourse around whether women should delete these apps, we have to think about why they use them in the first place,” Blum-Dumontet said. “These trackers help them manage their menstrual cycle when they are experiencing pain.”

Blum-Dumontet stressed that instead of asking users to change their behaviors, “it is period trackers that should change their practices”.

“They should never have owned so much data in the first place. If they adopted practices like storing data locally and minimizing the data to what’s strictly necessary we wouldn’t be having this debate now. It’s not too late for them to do the right thing,” she said.

“The companies that have been making a profit out of women’s bodies need to think very carefully about how they will protect their users,” she continued. “They haven’t all been the best in the past when it comes to data sharing. The only way they can survive in this market, the only way they can make themselves trustworthy is by improving their privacy policy and giving users more control over their data,” she said. “If any of these apps are used in court against their users, it will not be good PR for them.”

Melissa, a 27-year-old mother from Texas who is going by only her first name so as not to jeopardize her employment, said she deleted the app because she fears that when she travels, her state could use her missed-period data against her.

“I will miss using the app so much. I have used it for so many things, like tracking my ovulation or predicting my mood changes. Sometimes I wake up feeling irritable, and I don’t know why until my app tells me that this could be normal at this point of my cycle,” she added. Melissa also says she would have loved to use it for future conceptions, but now she can’t.

Although many of the warnings on Friday focused on period trackers alone, these are not the only apps that can be used against users in a criminal prosecution, experts warned.

“Google Maps or a random game on your phone could just as easily be weaponized against someone as a menstrual tracking app,” Greer said. “While we need to educate each other and take precautions, it’s not OK to put the responsibility solely on individuals. Companies and lawmakers need to act immediately to protect people.”

The concerns over period tracking data are part of a broader conversation about the amount of personal information smartphones collect. Women’s rights organizations all over the world are warning users to be more mindful of their digital presence, not just when it comes to period trackers.

Cycle tracking apps can be hugely useful for many women, said Jonathan Lord, UK medical director for MSI Reproductive Choices. “But all data can be used against you.”

According to Lord, this danger will remain until “we treat abortion like all other healthcare – regulated like all other medical procedures, but not criminalized”.


