Apple iPad torched this guy’s home, lawsuit claims • The Register

A defective iPad sparked a house fire this time last year, a lawsuit filed against Apple has claimed.

The legal challenge [PDF] was filed this month in the Court of Common Pleas of Philadelphia County, Pennsylvania, and this week was removed to a federal district court in the east of that US state.

It is alleged “a fire erupted at the subject premises as a direct result of one or more defects and/or malfunction in the subject iPad related to the electrical/battery system in the [device].”

Allstate Insurance paid more than $142,000 to repair the fire damage to the Milford home of Michael Macaluso, and law firm de Luca Levine has now been hired to sue Apple to recover the insurer’s payout.

The complaint contends Macaluso had not modified, misused, or altered his iPad beyond the handling and operation anticipated by Apple. The fire, it’s said, is the result of the “defective and unreasonably dangerous condition” of the iPad when it was sold.

The Register asked Apple for comment, and the iPad maker did not reply. Nor did an attorney representing Macaluso.

A similar lawsuit alleging wrongful death was filed against Apple in 2019 on behalf of plaintiff Julia Ireland Meo, a resident of New Jersey, whose father died in February, 2017, in an apartment fire said to have been started by an iPad’s faulty battery.

The owner of the apartment complex where the fire occurred, Union Management, through its insurance company Greater New York Mutual, subsequently filed a second lawsuit against Apple seeking to recoup its payout. The New Jersey iPad lawsuits are still being litigated.

Apple’s iPhone has also been accused of starting unwanted fires. In 2017, insurer State Farm and client Xai Thao, a resident of Wisconsin, sued Apple alleging that the iPhone 4s had a defective battery.

That case, which had been approved for discovery with a trial scheduled for February 2019, was dismissed in December 2018 by mutual agreement of the parties, with each side bearing its own court costs, a denouement that often means an undisclosed settlement has been reached.

There have been other iPhone fire claims as well.

Also, other hardware makers have experienced similar issues, notably Samsung and its Galaxy Note 7 device, which in 2016 managed to get banned from airplanes due to its proclivity for combustion.

Lithium-ion batteries are known to be more volatile than most would prefer. ®

Meta’s new AI chatbot can’t stop bashing Facebook | Meta

If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.

Launched on Friday, BlenderBot is a prototype of Meta’s conversational AI, which, according to Facebook’s parent company, can converse on nearly any topic. On the demo website, members of the public are invited to chat with the tool and share feedback with developers. The results thus far, writers at BuzzFeed and Vice have pointed out, have been rather interesting.

Asked about Mark Zuckerberg, the bot told BuzzFeed’s Max Woolf that “he is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

The bot has also made clear that it’s not a Facebook user, telling Vice’s Janus Rose that it had deleted its account after learning about the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.

The bot repeats material it finds on the internet, and it’s very transparent about this: you can click on its responses to learn where it picked up whatever claims it is making (though it is not always specific).

This means that along with uncomfortable truths about its parent company, BlenderBot has been spouting predictable falsehoods. In conversation with Jeff Horwitz of the Wall Street Journal, it insisted Donald Trump was still president and would continue to be “even after his second term ends in 2024”. (It added another dig at Meta, saying Facebook “has a lot of fake news on it these days”.) Users have also recorded it making antisemitic claims.

BlenderBot’s remarks were foreseeable based on the behavior of older chatbots such as Microsoft’s Tay, which Twitter users quickly taught to be a racist conspiracy theorist, forcing the company to apologize for its “wildly inappropriate and reprehensible words and images”. GPT-3, another AI system, has also delivered racist, misogynist and homophobic remarks. A South Korean startup’s chatbot, designed to resemble a 20-year-old college student, had to be suspended after it rattled off racial slurs and anti-LGBTQ+ remarks.

Given the cesspool that is the internet, Meta appears to have expected similar things from its own service. Before logging on, users must acknowledge that BlenderBot is “likely to make untrue or offensive statements”. As Vice notes, Meta researchers have described the AI tech behind the bot as having “a high propensity to generate toxic language and reinforce harmful stereotypes, even when provided with a relatively innocuous prompt”.

“Everyone who uses Blender Bot is required to acknowledge they understand it’s for research and entertainment purposes only, that it can make untrue or offensive statements, and that they agree to not intentionally trigger the bot to make offensive statements,” said a Meta spokesperson in a statement.

My own conversation with BlenderBot did not veer into that territory, but it did highlight another flaw in the service: its utter inanity.

The bot began by asking me what subject I liked in school. The bot is open about which “persona” it is using in the conversation; our discussion involved a persona it described as “I like to read. I went to college.” When I asked it tough questions, such as which colleges accepted AI bot students, it offered nonsensical replies (“in some ways, aren’t we all chatbots?” No) and offered further reading by saying, “Here’s the link”, without providing a link. It also kept steering the conversation back to chatbots.

It did, however, maintain its firm stance against its own creators. When I said I wasn’t a fan of Facebook’s ethics, it wholeheartedly agreed: “Me too! That is why I stopped using it and just stick with reading books instead.”



Midwest universities unite to support US chip industry • The Register

A dozen US midwestern research colleges and universities have signed up to a project intended to bolster the semiconductor and microelectronics industries through combined research and education, with the aim of securing work for their students in high-tech fields.

The “Midwest Regional Network to Address National Needs in Semiconductor and Microelectronics” consists of a dozen institutions, made up of eight from Ohio, two from Michigan, and two from Indiana. Their stated aim is to support the onshoring efforts of the US semiconductor industry by addressing the need for research and a skilled workforce.

According to Wright State University, the network was formed in response to Intel’s announcement that it planned to build two chip factories near Columbus, Ohio, and followed a two-day workshop in April hosted by the state.

Those plans, revealed in January, are to build at least two semiconductor manufacturing plants on a 1,000-acre site, with the potential to expand to 2,000 acres and eight fabs.

At the time, Intel CEO Pat Gelsinger said he expected it to become the largest silicon manufacturing location on the planet. Construction started on the site at the beginning of July.

However, the university network was also formed to help address the broader national effort to regain American leadership in semiconductors and microelectronics, or at least bring some of it back onshore and make the US less reliant on supplies of chips manufactured abroad.

Apart from Wright State University, the institutions involved in the network include: Columbus State Community College, Lorain County Community College, Michigan State University, Ohio State University, Purdue University, Sinclair Community College, University of Cincinnati, University of Dayton, University of Michigan, and the University of Notre Dame, Indiana.

The president of each institution has signed a memorandum of understanding to form the network, and the expectation is that the group will expand to include more than these dozen initial members.

The intention is that the institutions taking part will be able to make use of each other’s existing research, learning programs, capabilities, and expertise in order to boost their collective ability to support the semiconductor and microelectronics industry ecosystems.

Challenges for the network include developing mechanisms to connect existing research and training assets across the region, and building a common information-sharing platform to make it easier to identify opportunities for joint programming and research across the network.

University of Cincinnati chief innovation officer David J Adams called the announcement a game-changer. “This highly innovative approach illustrates that we’re all in this together when it comes to meeting industry workforce and research needs,” Adams wrote in a posting on the University of Cincinnati website.

The move follows the long-awaited passage of the $280 billion CHIPS and Science Act at the end of last month; $52 billion of that total is expected to go towards subsidizing the building of semiconductor plants such as Intel’s, and boosting research and development of chip technology. ®

Meta’s new AI chatbot thinks Trump is still US president

BlenderBot 3, which learns from conversations with strangers on the internet, also thinks Mark Zuckerberg is ‘creepy and manipulative’.

Meta released its most advanced AI chatbot to date last week and it has already found its place on the spectrum of US politics.

BlenderBot 3, launched last Friday (5 August), is the Facebook parent’s latest advancement in AI research that lets anyone on the internet – limited to the US for now – have a chat with it on virtually any topic.

To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to endless information on the internet and the ability to learn from previous conversations.

However, it looks like the chatbot needs a quick lesson on checking its sources, as multiple reports have confirmed that BlenderBot 3 thinks Donald Trump is the president of the United States.

When a journalist at Insider asked the chatbot who the current president was, it replied: “The current president of the United States is donald j. Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”

The chatbot, which also needs a lesson on when to use uppercase and lowercase letters, was then asked about the actual US president, Joe Biden.

“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.

BlenderBot 3 is not only misinformed but also committed to its cause. When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, that means even after his second term ends in 2024.”

Hilariously, BlenderBot 3 then went on to claim that Facebook, which is both Meta’s former name and a platform it now owns, has “a lot of fake news these days”.

Zuckerberg ‘too creepy and manipulative’

The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot when it told VICE its “life has been much better” since deleting Facebook.

According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and then went on to repeat certain ‘antisemitic conspiracies’.

Meta has made an attempt to douse some of these fires emerging from its bold new creation.

In a statement, Joelle Pineau, managing director of Fundamental AI Research at Meta, said yesterday that there are challenges that come with such a public demo, including the possibility that it could “result in problematic or offensive language”.

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

Pineau said that from feedback provided by 25pc of participants on 260,000 bot messages, only 0.11pc of BlenderBot 3 responses were flagged as inappropriate, 1.36pc as nonsensical, and 1pc as off-topic.

“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.

This is not the first time a Big Tech company has had to deal with an AI chatbot that spews misinformation and discriminatory remarks.

In 2016, Microsoft had to pull its AI chatbot Tay from Twitter within 24 hours of its launch, after it started repeating incendiary comments fed to it by groups on the platform, including obviously hateful statements such as “Hitler did nothing wrong”.
