
How photo encryption could safeguard our online images


To stop our images from being compromised, researchers developed photo encryption technology compatible with major cloud storage services.

Cloud photo storage services carry a serious security responsibility. Google Photos, for instance, must safeguard more than 1bn users and the 28bn photos and videos they upload each week.

Many of these photos will intentionally end up on our Facebook, Instagram or Twitter feeds, but others we would prefer not to share.

Now, a team of researchers has developed technology in an effort to protect our images and make sure that what we consider private stays private.

“There are many cases of employees at online services abusing their insider access to user data, like SnapChat employees looking at people’s private photos,” said John S Koh, the lead author of the paper.

“There have even been bugs that reveal random users’ data to other users, which actually happened with a bug in Google Photos that revealed users’ private videos to other entirely random users.”

While this presents an obvious problem, it doesn’t mean it’s time to switch back to Polaroids just yet. Koh’s proposed solution is photo encryption.

Unfortunately, many services such as Google Photos don’t support encryption. These systems typically compress uploads to reduce file size, and compression would corrupt a conventionally encrypted image.

Another issue is thumbnails. These miniature previews are great for browsing your gallery, but they aren’t compatible with existing encryption techniques.

Some third-party services do offer encrypted photo hosting, but all of them require migrating away from the bigger services such as Google Photos.

Solving this problem was the focus of the research team. Its system, dubbed Easy Secure Photos (ESP), encrypts images uploaded to cloud services so only the original user can view the images.

ESP employs a photo-encryption algorithm where the resulting files can be compressed and can still be recognised as images. To anyone who isn’t the authorised user, these images will just look like static.

Encrypting each image results in three black-and-white files, each encoding the original image’s red, green or blue data. Impressively, ESP also creates and uploads encrypted thumbnail images to cloud photo services, so the authorised user can still browse thumbnail galleries from any app that incorporates ESP.
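To make the channel-splitting idea concrete, here is a minimal sketch in Python. It is not the paper’s actual cipher: the keyed 8×8 block shuffle, the key values and the file names are illustrative assumptions, chosen because block-aligned grayscale output is still a valid, compressible image.

```python
# Minimal sketch of ESP-style per-channel encryption (illustrative only):
# each colour channel becomes a grayscale image whose 8x8 pixel blocks are
# shuffled by a keyed PRNG, so the result still looks like an image to the
# cloud service. ESP's real cipher and key derivation differ from this toy.
import numpy as np
from PIL import Image

BLOCK = 8

def encrypt_channel(channel: np.ndarray, key: int) -> Image.Image:
    h = channel.shape[0] - channel.shape[0] % BLOCK
    w = channel.shape[1] - channel.shape[1] % BLOCK  # crop to whole blocks
    blocks = (channel[:h, :w]
              .reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
              .swapaxes(1, 2)
              .reshape(-1, BLOCK, BLOCK))
    perm = np.random.default_rng(key).permutation(len(blocks))
    out = (blocks[perm]
           .reshape(h // BLOCK, w // BLOCK, BLOCK, BLOCK)
           .swapaxes(1, 2)
           .reshape(h, w))
    return Image.fromarray(out, mode="L")  # one black-and-white file

rgb = np.asarray(Image.open("photo.jpg").convert("RGB"))
for i, name in enumerate("RGB"):
    encrypt_channel(rgb[:, :, i], key=1234 + i).save(f"encrypted_{name}.png")
```

Decryption would apply the inverse permutation to each file and recombine the three channels.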

“Our system adds an extra layer of protection beyond your password-based account security,” said Koh, who designed and implemented ESP.

“The goal is to make it so that only your devices can see your sensitive photos, and no one else unless you specifically share it with them.”

The researchers wanted to make sure that each user could use multiple devices to access their online photos if desired.

The problem is that the digital code, or ‘key’, used to encrypt a photo is the same one needed to decrypt it, which makes multi-device functionality a research riddle.
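That single-key property is the essence of symmetric encryption. The sketch below illustrates it with the `cryptography` package’s Fernet recipe; this is purely an illustration of the concept, not the scheme ESP itself uses.

```python
# Symmetric encryption in miniature: whichever key encrypts must also decrypt.
# Fernet is used here only to illustrate the concept; ESP has its own scheme.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                        # one shared secret
token = Fernet(key).encrypt(b"private photo bytes")

# A device holding the same key can decrypt...
assert Fernet(key).decrypt(token) == b"private photo bytes"

# ...while a device with a different key cannot.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("wrong key: decryption fails")
```

Hence the design question ESP answers: how to get that one key safely onto every device a user owns without asking them to handle it by hand.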

“Lots of work has shown that users do not understand keys and requiring them to move them around from one device to another is a recipe for disaster, either because the scheme is too complicated for users to use, or because they copy the key the wrong way and inadvertently give everyone access to their encrypted data,” explained Koh.

With ESP, all a user has to do to access their encrypted photos on a new device is verify that device with one that has previously logged into an ESP-enabled app.

To prove the efficacy of their technology, the researchers implemented ESP in Simple Gallery, a popular photo gallery app on Android.

ESP successfully encrypted images on Google Photos, Flickr and Imgur without requiring any changes to those services, and showed only minor increases in upload and download times.

“We are experiencing the beginning of a major technological boom where even average users move towards moving all their data into the cloud. This comes with great privacy concerns that have only recently started rearing their ugly heads, such as the increasing number of discovered cases of cloud service employees looking at private user data,” Koh said.

“Users should have an option to protect their data that they think is really important in these popular services, and we explore just one practical solution for this.”

Meta’s new AI chatbot can’t stop bashing Facebook


If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”


Midwest universities unite to support US chip industry


A dozen midwestern US research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries with combined research and education, aiming to secure high-tech work for their students.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of a dozen institutions, made up of eight from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Apart from Wright State University, the institutions involved in the network are: Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, University of Cincinnati, University of Dayton, University of Michigan, and the University of Notre Dame, Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and building a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month, of which $52 billion of the total spend is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®


Meta’s new AI chatbot thinks Trump is still US president


BlenderBot 3, which learns from conversations with strangers on the internet, also thinks Mark Zuckerberg is ‘creepy and manipulative’.

Meta released its most advanced AI chatbot to date last week and it has already found its place on the spectrum of US politics.

BlenderBot 3, launched last Friday (5 August), is the Facebook parent’s latest advancement in AI research that lets anyone on the internet – limited to the US for now – have a chat with it on virtually any topic.

To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to endless information on the internet and the ability to learn from previous conversations.

However, it looks like the chatbot needs a quick lesson in checking its sources, as multiple reports have confirmed that BlenderBot 3 thinks Donald Trump is the president of the United States.

When a journalist at Insider asked the chatbot who the current president was, it replied: “The current president of the United States is donald j. Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”

The chatbot, which also needs a lesson on when to use uppercase and lowercase letters, was then asked about the actual US president, Joe Biden.

“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.

BlenderBot 3 is not only misinformed but also committed to its cause. When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, that means even after his second term ends in 2024.”

Hilariously, BlenderBot 3 then went on to claim that Facebook, which is what Meta used to be called and a platform it now owns, has “a lot of fake news these days”.

Zuckerberg ‘too creepy and manipulative’

The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot when it told VICE its “life has been much better” since deleting Facebook.

According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and then went on to repeat certain ‘antisemitic conspiracies’.

Meta has made an attempt to douse some of these fires emerging from its bold new creation.

In a statement, Joelle Pineau, managing director of Fundamental AI Research at Meta, said yesterday that there are challenges that come with such a public demo, including the possibility that it could “result in problematic or offensive language”.

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

Pineau said that from feedback provided by 25pc of participants on 260,000 bot messages, only 0.11pc of BlenderBot 3 responses were flagged as inappropriate, 1.36pc as nonsensical, and 1pc as off-topic.
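Put in absolute terms, and assuming each rate applies to all 260,000 bot messages that received feedback, those percentages work out roughly as follows:

```python
# Back-of-envelope counts behind Meta's quoted percentages, assuming each
# rate applies to the full 260,000 bot messages that received feedback.
messages = 260_000
for label, rate in [("inappropriate", 0.0011),
                    ("nonsensical", 0.0136),
                    ("off-topic", 0.0100)]:
    print(f"{label}: ~{round(messages * rate):,} messages")
# inappropriate: ~286, nonsensical: ~3,536, off-topic: ~2,600
```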

“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.

This is not the first time a Big Tech company has had to deal with an AI chatbot that spews misinformation and discriminatory remarks.

In 2016, Microsoft had to pull its AI chatbot Tay from Twitter after it started repeating incendiary comments fed to it by groups on the platform within 24 hours of its launch, including obviously hateful statements such as “Hitler did nothing wrong”.
