
Insurance startup backtracks on running videos of claimants through AI lie detector • The Register


An insurance biz has retracted boasts of how it uses AI algorithms to study videos of customers for “non-verbal cues” that their claims are fraudulent. The marketing U-turn came after the ethics of this approach were publicly and loudly called into question.

Using machine-learning software to automate decisions on whether to grant customers credit or pay out on insurance claims is particularly sensitive. Last month, America’s consumer watchdog, the FTC, issued a strongly worded statement warning that it is illegal to deploy algorithms that end up discriminating against people based on their race, color, religion, national origin, sex, marital status, or age when making financial-related decisions.

Alarm bells were set off when Lemonade, a company based in New York, admitted it built software that scanned videos of customers explaining the situations they found themselves in, submitted as part of insurance claims, to decide whether those people were essentially lying or committing some other kind of fraud.

Lemonade prides itself on providing an easier and simpler way for people to file pet, home, and life insurance claims. Customers speak to a chat bot, submit their claim, and a decision on how much it should pay them can be made in a few minutes.

“When a user files a claim, they record a video on their phone and explain what happened. Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can’t, since they don’t use a digital claims process,” Lemonade stated in a series of tweets that have since been deleted.

Netizens criticized Lemonade’s technology, accusing it of being potentially biased and reliant on flimsy sentiment and emotion analysis. The backlash on Twitter prompted the company to delete its posts and issue a new statement, in which it claimed it only used facial recognition algorithms to make sure the same person wasn’t making multiple claims.

“There was a sizable discussion on Twitter around a poorly worded tweet of ours (mostly the term ‘non-verbal cues’) which led to confusion as to how we use customer videos to process claims,” the upstart stated on its website. “There were also questions about whether we use approaches like emotion recognition (we don’t), and whether AI is used to automatically decline claims (never!)”

“We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims,” it reiterated.

That said, the company’s privacy policy does say it collects, among other details, people’s physical characteristics when handling life insurance.

And in a filing to America’s financial regulator, the SEC, Lemonade said its system collects roughly 1,700 “data points” from customers.

“We use technology and artificial intelligence to reduce hassle, time, and cost associated with purchasing insurance and the claims submission and fulfillment process. We built our entire company on a unified, proprietary, state-of-the-art technology platform. Our customers are able to purchase insurance on our website or through our app, generally in a matter of minutes. Our artificial intelligence system handles substantially all of our customer onboarding and a meaningful portion of our claims,” it said in the filing.

What those data points describe is unclear. The biz did admit its own technology could have unintended consequences, such as customers being paid too much or too little, and biased or discriminatory decisions. On the one hand, this is a boilerplate warning to investors and the financial markets that the biz could go belly up and investments could be lost; on the other hand, it is pretty specific about how things could go wrong.

“Our proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers. Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.”

It added: “Our future success depends on our ability to continue to develop and implement our proprietary artificial intelligence algorithms, and to maintain the confidentiality of this technology.”

The company was launched in 2016, and operates across the US and parts of Europe, including France, Germany, and the Netherlands. It has yet to turn a profit, and spends most of its money on sales and marketing.

The Register has asked Lemonade for further comment. ®





Got an idea for the future of science in Ireland?


The Creating Our Future initiative is seeking 10,000 ideas on which to base Ireland’s next science and research agenda.

The Government of Ireland is hosting a ‘national brainstorm’ to guide the future of science and research in the country.

First announced last month, a nationwide conversation about research and innovation has officially kicked off today (28 July) at CreatingOurFuture.ie.

The online portal aims to collect 10,000 ideas from a broad section of the Irish public. It will be open for submissions from now until the end of November.


“Covid-19 has highlighted, like never before, the vital role that research has played in mitigating challenges facing the country,” said Minister for Research, Innovation and Science Simon Harris, TD. “But we have many more challenges and opportunities that research rigour and analytical excellence can help us with to build a better future for Ireland.”

Harris added: “Good ideas and curiosity are the starting point for most research, and nobody has a monopoly on good ideas. So, we are asking everyone to submit that idea that they have been thinking about, or have a conversation with their neighbours, host an event with a researcher or in your local community to think about what might make a difference and let us know.”

Events will be held across the country until the Creating Our Future ideas portal closes, inviting and encouraging citizens and communities to engage with the project.

The national initiative is itself an idea borrowed from similar efforts in other countries. A key inspiration was a programme driven by FWO, the Flanders research foundation. Launched in the spring of 2018, its Question for Science campaign received 10,559 responses, and has returned answers to more than 1,500.

These questions formed the basis of the Flemish Science Agenda, a strategy for science and innovation that is built on societal issues and citizens’ curiosity. Questions asked of FWO included ‘What is the effect of the 24-hour economy on psychological health?’ and ‘How can we avoid war and violence?’.

Organisers hope the Irish effort will deepen the relationship between the Irish science community and the public it serves, and their resounding call is for everyone to participate.


“This is an important opportunity to contribute to shaping future research. I encourage everyone to get involved,” said Taoiseach Micheál Martin, TD.

“This isn’t for any one section of society, we want to engage everyone in conversations in communities across the country, to inspire curiosity and generate ideas for research that will shape our future.”

All responses submitted to the portal will be collated and shared with an independent expert panel of researchers and civil society leaders.

There is also a Creating Our Future advisory forum chaired by Nokia Bell Labs global head of external collaboration programmes, Julie Byrne. In this role, Byrne brings researchers together for collaborative work and she herself has almost 30 years’ experience in engineering, tech and research.

“Over the coming months we will have many conversations about research across the country to gather ideas from our communities that research can tackle to create a better future for all of us,” she said. “I encourage everyone to get involved so that we capture ideas from all communities across the country.”

The results of the campaign will be published in a report by the end of 2021. This will go on to inform Ireland’s future strategy for research, innovation, science and technology.

Previously, Science Foundation Ireland’s director of science for society called on Irish citizens to join a mass public debate about lessons learned throughout the Covid-19 pandemic.

Dr Ruth Freeman spoke at Future Human in 2020 about the importance of including the voice of the public in shaping the future of science.

“Giving people more of a say in their future is clearly the right and democratic thing to do, and it might just make for better science as well,” she said.




‘Disinfo kills’: protesters demand Facebook act to stop vaccine falsehoods | Facebook


Activists are to descend on Facebook’s Washington headquarters on Wednesday to demand the company take stronger action against vaccine falsehoods spreading on its platform.

Protesters are planning to cover the lawn in front of Facebook’s office with body bags that read “disinfo kills” as a symbol of the harm caused by online disinformation, as Covid cases surge in the US.

The day of protest has been organized by a group of scholars, advocates and activists calling themselves the “Real” Oversight Board. The group is urging Facebook’s shareholders to ban so-called misinformation “superspreaders” – the small number of accounts responsible for the majority of false and misleading content about the Covid-19 vaccines.

“People are making decisions based on the disinformation that’s being spread on Facebook,” said Shireen Mitchell, a member of the Real Facebook Oversight Board and founder of Stop Online Violence Against Women. “If Facebook is not going to take that down, or if all they’re going to do is put out disclaimers, then fundamentally Facebook is participating in these deaths as well.”

In coordination with the protest, the Real Oversight Board has released a new report analyzing the spread of anti-vaccine misinformation on Facebook during the company’s most recent financial quarter. The report and protest also come as Facebook prepares to announce its financial earnings for that same quarter.

The report references a March study from the Center for Countering Digital Hate (CCDH) that found a small group of accounts – known as the “dirty dozen” – is responsible for more than 73% of anti-vaccine content across social media platforms, including Facebook. That report recently drew attention from the White House, and Joe Biden has condemned Facebook and other tech companies for failing to take action.

Facebook banned misinformation about vaccines from the platform in February of 2021, but critics say many posts slip through the platform’s filters and reach audiences of millions without being removed.

It also has introduced a number of rules relating to Covid-19 specifically, banning posts that question the severity of the disease, deny its existence, or argue that the vaccine has more risks than the virus. Still, the Real Oversight Board found that often such content has been able to remain on the platform and even make its way into the most-shared posts.

According to the Real Oversight Board’s report, a large share of the misinformation about the Covid vaccines comes from a few prolific accounts, and continues to be among the platform’s best performing and most widely shared content. It analyzed the top 10 posts on each weekday over the last quarter and found the majority of those originated from just five identified “superspreaders” of misinformation.

“When it comes to Covid disinformation, the vast majority of content comes from an extremely small group of highly visible users, making it far easier to combat it than Facebook admits,” the board said, concluding that Facebook is “continuing to profit from hate and deadly disinformation”.

The group has called on Facebook to remove the users from the platform or alter its algorithm to disable engagement with the offending accounts. Facebook did not immediately respond to a request for comment, but has stated in the past that it has removed more than 18m pieces of Covid misinformation.

Congress has also taken note of the spread of vaccine misinformation on Facebook and other platforms, with the Democratic senator Amy Klobuchar introducing a bill that would target platforms whose algorithms promote health misinformation related to an “existing public health emergency”.

The bill, called the Health Misinformation Act, would remove protections provided by the internet law Section 230, which prevent platforms from being sued over content posted by their users in such cases.

“For far too long, online platforms have not done enough to protect the health of Americans,” Klobuchar said in a statement on the bill. “These are some of the biggest, richest companies in the world, and they must do more to prevent the spread of deadly vaccine misinformation.”




Workday shares slide following claims Amazon ditched company-wide HR system • The Register


Amazon has halted plans to roll out a company-wide HR system based on SaaS from Workday, highlighting the challenges of migrating to the in-vogue application model.

A deal between the megacorp and Workday, an enterprise application interloper, was signed in 2017 with Amazon HR veep Beth Galetti at the time declaring: “Workday is an HR cloud leader that provides an innovative, customer-focused HCM system that will support Amazon as we continue to hire employees around the world.”

Three years later, the Seattle book-seller-cum-enterprise-juggernaut has changed its tune. According to reports, a migration from Oracle’s PeopleSoft has come unstuck because Workday’s database, an in-memory system that drew inspiration from SAP’s HANA, did not scale to the needs of Amazon’s growing workforce. In 2017, around 300,000 people worked for the firm worldwide; now it employs around 800,000 in the US and 1.3 million worldwide.

Reports claim that some Amazon businesses, like streaming platform Twitch, are using Workday’s HR system, but for the most part, the organisation still relies on Oracle’s PeopleSoft.

In a blogpost, Workday said it and Amazon had both “mutually agreed to discontinue Amazon’s Workday Human Capital Management deployment,” based on a decision taken “more than a year and a half ago.”

It said there was “the potential to revisit [the project] in the future” and denied the decision was “related to the scalability of the Workday system.” One of its largest retail customers supported 1.5 million workers worldwide, it said.

“At times… customers have a unique set of needs that are different from what we’re delivering for our broader customer base, as was the case with Amazon – one of the most unique and dynamic companies in the world,” the statement said.

The Register has approached Amazon for comment.

Workday’s shares slid by as much as 7.8 per cent when news of Amazon’s decision broke.

It is not only megacorps that seem to struggle to implement Workday software. In the last year, two North American public-sector projects have become mired in difficulties.

The State of Maine ordered an official review of its $54.6m project to renew its HR system based on software from Workday, accusing the vendor of showing “no accountability” for its part in a flawed project which could leave the state government continuing to rely on its 30-year-old mainframe-based system.

At the time, Workday told The Register it was “committed to partnering with the State of Maine to successfully complete this project.”

Meanwhile, teaching assistants at Canada’s McGill University spent Christmas waiting to be paid as the institution struggled with a new Workday HR and payroll system, according to the Association of Graduate Students Employed at McGill (AGSEM). ®
