The head of Instagram faced a grilling from US lawmakers on Wednesday over how the platform protects its youngest users, an appearance that comes amid intensifying criticism of Instagram’s impact on children and young adults.
In opening statements, Senator Richard Blumenthal promised to be “ruthless” in the hearing, saying “the time for self-policing and self-regulation is over”.
“Self-policing depends on trust, and the trust is gone,” he said. “The magnitude of these problems requires both broad solutions and accountability, which has been lacking so far.”
Instagram executive Adam Mosseri, appearing before the Senate commerce committee’s consumer protection panel, defended the platform and called on lawmakers to create an industry body to better regulate social media.
Mosseri also attempted to shift blame onto the wider industry, saying that “keeping young people safe online is not just about one company” and noting that more young people use other apps, including the video platforms TikTok and YouTube.
“We all want teens to be safe online,” Mosseri said in opening statements. “The internet isn’t going away, and I believe there’s important work that we can do together – industry and policymakers – to raise the standards across the internet to better serve and protect young people.”
He called for an industry body to address “how to verify age, how to design age-appropriate experiences, and how to build parental controls” on apps, and suggested tying the protections offered by Section 230 – a federal law that shields platforms from legal liability for what users post on them – to adherence to such standards.
The hearing comes as Instagram and its parent company, Meta Platforms (formerly Facebook), face global criticism over the ways their services affect the mental health, body image and online safety of younger users after the release of internal documents from former employee and whistleblower Frances Haugen.
Those papers, published by the Wall Street Journal and handed over to Congress, revealed the company’s own internal research showed Instagram negatively affected the mental health of teens, particularly regarding body image issues.
Lawmakers also pressed Mosseri to release more of the internal research referenced in those papers including a presentation about anorexia and suicidal thoughts among teens. Mosseri committed to better transparency but said that specific presentation was likely deleted due to data retention laws.
Other senators had strong words for Mosseri, bringing grave examples of harms done to children through the Instagram platform. Maria Cantwell told Mosseri of one of her constituents who claimed her young daughter was groomed by adults on Instagram, lured into sex trafficking and taken across state lines for prostitution.
She challenged Mosseri over Instagram’s terms and conditions, which state that a child’s only legal recourse over such incidents would be arbitration – a closed-door process in which disputes are settled privately, with no judge, jury or option to appeal.
“That story is terrifying,” Mosseri said. “We try to be as public as we can about how well we do on difficult problems like that one, and we believe that there should be industry standards, there should be industry wide accountability, and that the best way to do that is federal legislation, which is specifically what I’m proposing today.”
Senator Marsha Blackburn asked Mosseri to speak directly to parents whose children have been hurt, or hurt themselves, as a result of their Instagram use. “You have broken these children’s lives, and you have broken these parents’ hearts,” she said.
Meanwhile, Blumenthal, the Democratic senator and chair of the panel, asked on Wednesday that Instagram permanently scrap its development of a platform for children, which the company previously suspended amid growing opposition. Mosseri declined to commit to a permanent stop, but said any related projects would require parental consent.
Lawmakers are increasingly pushing for greater accountability. In November, a bipartisan coalition of US state attorneys general said it had opened an inquiry into Meta for promoting Instagram to children despite potential harms. And in September, US lawmakers grilled Facebook’s head of safety, Antigone Davis, about the impacts of the company’s products on children.
Ahead of Wednesday’s hearing, Instagram said it will be stricter about the types of content it recommends to teens and will nudge young users toward different areas if they dwell on one topic for a long time.
In a blogpost published on Tuesday, the social media service announced it was switching off the ability for people to tag or mention teens who do not follow them on the app, and would enable teen users to bulk delete their content and previous likes and comments.
In the blogpost, Mosseri also said Instagram was exploring controls to limit potentially harmful or sensitive material, was working on parental control tools and was launching, in certain countries, a “Take a Break” feature that reminds people to take a brief pause from the app after using it for a set amount of time.
Blumenthal called the company’s product announcement “baby steps”.
“They are more a PR gambit than real action done within hours of the CEO testifying that are more to distract than really solve the problem,” he told Politico.
Blackburn criticized the company’s product announcement as “hollow”, saying in a statement: “Meta is attempting to shift attention from their mistakes by rolling out parental guides, use timers and content control features that consumers should have had all along.”
Mosseri also said in Wednesday’s hearing that the platform may be reintroducing a chronological news feed in 2022, a departure from the activity-driven algorithm it currently uses.
While lawmakers appeared satisfied that they were making some concrete steps towards formulating better social media policies, activists remained wary.
For years the company has offered “empty promises and half-baked safety measures”, said Josh Golin, executive director of children’s safety organization Fairplay.
“The bottom line is this: Instagram’s advertising business is harming children, and nothing meaningful has been done to change that,” he said. “It’s clear that self-regulation will not work. Congress must act now and regulate big tech to protect children.”
In March 2020, a new app suddenly arrived on the block. It was called Clubhouse and described as a “social audio” app that enabled its users to have real-time conversations in virtual “rooms” that could accommodate groups large and small. For a time in that disrupted, locked-down spring, Clubhouse was what Michael Lewis used to call the “New New Thing”. “The moment we saw it,” burbled Andrew Chen of the venture capital firm Andreessen Horowitz, “we were deeply excited. We believe Clubhouse will be a meaningful addition to the world, one that increases empathy and provides new ways for people to talk to each other (at a time when we need it more than ever).”
The app could not have come at a better time for social media, he continued. “It reinvents the category in all the right ways, from the content consumption experience to the way people engage each other, while giving power to its creators.” His firm put $12m of its (investors’) money behind Chen’s fantasies and followed up a year later with an investment that put a valuation of $1bn on Clubhouse, which would have made it one of the “unicorns” so prized by the Silicon Valley crowd.
This endorsement by an ostensibly serious venture capital firm undoubtedly helped to boost the hype about Clubhouse, but the main drivers – snobbery and elitism – had little to do with funding. In the beginning, for example, the app was only available for the iPhone (the BMW of the smartphone market) and membership was by invitation only. If you were lucky enough to be invited, then you could pass on an invitation to one friend. A generous colleague of mine extended hers to me and I went about signing up, until I discovered that the app unconditionally demanded access to all the contacts on my phone, whereupon I deleted it, as did my embarrassed colleague some time later.
Other invitees were more accommodating, though, and for a time Clubhouse grew like crazy. It had 600,000 registered users by December 2020 and 8.1m downloads by February 2021. In April 2021, Twitter approached it with a view to acquiring it for $4bn, but nothing came of that. And sometime after that the air began to leak out of the Clubhouse balloon. After months in which much of the chatter was about (and on) the platform, we somehow moved to a point where nobody talks about it any more. Yet Clubhouse still exists, has 10 million users and has raised more than $100m from investors. But now, in a move that smacks of desperation, it’s allowing its US users to share a link to a “live” room that enables non-members to listen in (but not to talk). And the web is alive with pieces trying to explain Clubhouse’s decline.
So what happened? A conjunction of lots of different things, probably. The most important was that vaccination programmes led to an easing of the Covid lockdowns. People who were no longer having to work from home were out and about again, talking to friends and colleagues in person. But other factors were at work too. For example, the decision to open the app to Android users in May 2021 somewhat dented the iPhone “exclusivity” that drove growth in 2020.
And, as always happens when user-generated content balloons online, abusive and unpleasant conversations proliferated. Many of the virtual rooms turned out not to be about discourse but celebrity-puffing or scamming.
As one critic put it: “So many rooms that advertise themselves as hosting big celebrities and names in the worlds of business and entertainment … turn out to be scammers … impersonating celebrities or giving a vague Ted Talk about entrepreneurship from random people who have never … set foot in the industry. Other rooms are often cover-ups for scam businesses.
“A big issue on the app were rooms that claimed to invite people with startup ideas to share with their peers and exchange advice and strategies. The rooms’ hosts would then buy the domain names these startups were looking for and sell them back to them at much higher prices to make a profit.” Clubhouse rooms became, wrote another critic, “like a late-night talkshow where celebrities come together and speak about their family, achievements, passions and plans”.
So how should we view the Clubhouse story? In the long view of history, the app might look like a shooting star, an object of brief wonder that briefly mesmerised a world afflicted with tech-induced attention-deficit disorder. A more prosaic, but possibly more realistic, view is that it was just a tech solution looking for a social problem to “solve”. In other words, a typical product of Silicon Valley.
AI software that maps tumor tissue more accurately, designed to help surgeons treat and shrink prostate cancer using a laser-powered needle, will soon be tested in real patients during clinical trials.
The National Cancer Institute estimates that approximately 12.6 percent of men will be diagnosed with prostate cancer at some point in their lives. The risk of developing the disease rises for men over the age of 50. It is one of the most curable forms of cancer, since most cases are caught in the early stages thanks to regular screening tests.
Treatment for prostate cancer varies depending on the severity of the disease. Patients can undergo hormone therapy, chemotherapy, or surgery to remove tissue. Avenda Health, a medical startup founded in 2017, is developing a new type of treatment that is less invasive. The US Food and Drug Administration (FDA) granted an investigational device exemption (IDE) to the company’s invention this week, meaning it can now be used in a clinical study.
Patients will first need to have an MRI scan and a targeted fusion biopsy. The data is processed by Avenda’s AI algorithms in its iQuest software to map where the cancerous cells are located within the prostate. Next, the computer vision-aided model simulates where best to insert FocalPoint, a probe armed with a laser, to help surgeons treat the patient’s tumor. The laser gently heats the cancerous cells and kills them, with the goal of shrinking and removing the whole tumor.
MRI images where cancer is mapped using iQuest software before and after treatment. Image Credit: Avenda Health
“Historically, prostate cancer treatments of surgery or radiation impacts critical structures like the urethra and nerves which control sexual and urinary function,” Avenda’s CEO and co-founder Shyam Natarajan told The Register. “Our focal laser ablation system, FocalPoint, which is powered by our AI-driven cancer margin software, iQuest, specifically targets tumor tissue and avoids healthy tissue. This means patients no longer lose control over these functions that are so common with traditional treatments, so quality of life is significantly improved.”
The treatment is only effective for men diagnosed with intermediate-risk prostate cancer, a classification for tumors that are confined within the prostate. Patients are considered high risk in cases where the cancer has spread beyond the prostate.
“This is one of the benefits of the iQuest software. Not only can it map the cancer, but it also provides decision support for the physician as they determine the best course of treatment for an individual patient. Not every patient is going to be eligible for focal therapy, and it is important for the physician to distinguish between good focal therapy candidates and not. iQuest provides useful insights for that decision making process,” Natarajan said.
Avenda received FDA clearance for its FocalPoint device in 2020. The IDE approval brings the company one step closer to bringing its product to market after clinical trial testing, Brittany Berry-Pusey, co-founder and COO of Avenda, said in a statement.
“This clinical trial will play a key role in advancing our breakthrough technology to improve prostate cancer care. With no new FDA approvals for the treatment of localized prostate cancer in more than four decades, we look forward to working alongside our clinical sites to collect the data necessary to bring iQuest and FocalPoint to market and into the patient care environment.”
Natarajan told us the company was aiming to begin clinical trials in 2023.
Yesterday (11 August), the US Department of State’s Rewards for Justice programme shared a photo of an alleged associate of the Conti ransomware gang. The department said on Twitter that it is “trying to put a name to the face” and believes the individual is the hacker known as “Target”.
A request for information by the Rewards for Justice programme. Image: US Department of State/Rewards for Justice
Conti, also known as Wizard Spider, has been linked to a group believed to be based near St Petersburg, Russia. The US has labelled it a “Russian government-linked ransomware-as-a-service (RaaS) group”.
The group’s malware is believed to be responsible for more than 1,000 ransomware operations targeting critical infrastructure around the world, from law enforcement agencies to emergency medical services and dispatch centres.
In May 2021, the Conti group was behind the HSE ransomware incident that saw more than 80pc of the IT infrastructure of healthcare services across Ireland impacted. It was said to be the most serious cyberattack ever to hit the State’s critical infrastructure.
The US Department of State previously said the Conti ransomware variant is the “costliest strain of ransomware” ever documented. The FBI estimates that, as of January 2022, there had been more than 1,000 victims of attacks associated with Conti ransomware, with victim payouts exceeding $150m.
When Russia began its invasion of Ukraine earlier this year, the Conti group declared its allegiance to the Russian government. Shortly after, a Ukrainian researcher took the cybersecurity world by storm after publishing more than 60,000 internal messages of the ransomware gang.
Raj Samani, chief scientist at cybersecurity firm Rapid7, said the latest reward offer is just “the tip of the iceberg” as enforcement agencies make “considerable strides” through public-private collaboration to hold cybercriminals to account.
“Announcing a reward and revealing the details of Conti members sends a message to would-be criminals that cybercrime is anything but risk-free,” said Samani.