100,000 happy pictures: a new tool in the cyber ‘arms race’ against child sexual abusers
Leading Senior Constable Dr Janis Dalins is looking for 100,000 happy images of children – a toddler in a sandpit, a nine-year-old winning an award at school, a sullen teenager unwrapping a present at Christmas and pretending not to care.

The search for these safe, happy pictures is the goal of a new campaign to crowdsource a database of ethically obtained images that Dalins hopes will help build better investigative tools to use in the fight against what some have called a “tsunami” of child sexual assault material online.

Dalins is the co-director of AiLecs lab, a collaboration between Monash University and the Australian federal police, which builds artificial intelligence technologies for use by law enforcement.

In its new My Pictures Matter campaign, people aged 18 and over are being asked to share safe photos of themselves at different stages of their childhood. Once uploaded with information identifying the age and the person in the image, these will go into a database of other safe images. Eventually a machine learning algorithm will be made to read this album again and again until it learns what a child looks like. Then it can go looking for them.

The algorithm will be used when a computer is seized from a person suspected of possessing child sexual abuse material, to quickly point investigators to where they are most likely to find images of children – an otherwise slow and labour-intensive process that Dalins encountered while working in digital forensics.

“It was totally unpredictable,” he says. “A person gets caught and you think you’ll find a couple hundred pictures, but it turns out this guy is a massive hoarder and that’s when we’d spend days, weeks, months sorting through this stuff.”

“That’s where the triaging comes in; [the AI] says if you want to look for this stuff, look here first because the stuff that is likely bad is what you should be seeing first.” It will then be up to an investigator to review each image flagged by the algorithm.
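In rough terms, the triage step is a ranking problem: score every image on a seized device, then surface the highest-risk files for review first. A minimal sketch of that logic follows, with a hypothetical classifier interface standing in for the trained model – AiLecs has not published its implementation, so the names and shapes here are assumptions:

```ts
// Hypothetical sketch of AI-assisted triage: rank seized images so a human
// investigator reviews the most likely matches first. The classifier
// interface is an assumption standing in for the trained model.

interface ImageClassifier {
  // Estimated probability (0..1) that the image depicts a child.
  scoreImage(path: string): Promise<number>;
}

async function triageImages(paths: string[], model: ImageClassifier) {
  const scored = await Promise.all(
    paths.map(async (path) => ({ path, score: await model.scoreImage(path) }))
  );
  // Highest scores first: review starts where material is most likely to be.
  return scored.sort((a, b) => b.score - a.score);
}
```

As Dalins stresses, the model only decides the order of review; every flagged image still goes to a human investigator.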

Monash University will retain ownership of the photograph database and will impose strict restrictions on access.

The AiLecs project is small and targeted but is among a growing number of machine learning algorithms that law enforcement, NGOs, businesses and regulatory authorities are deploying to combat the spread of child sexual abuse material online.

These include SAFER, an algorithm developed by the not-for-profit group Thorn that runs on a company’s servers and identifies images at the point of upload, and web crawlers like the one operated by Project Arachnid, which trawls the internet looking for new troves of known child sexual abuse material.

Whatever their function, Dalins says the proliferation of these algorithms is part of a wider technological “arms race” between child sexual offenders and authorities.

“It’s a classic scenario – the same thing happens in cybersecurity: you build a better encryption standard, a better firewall, then someone, somewhere tries to find their way around it,” he says.

“[Online child abusers] were some of the most security-conscious people online. They were far more advanced than the terrorists, back in my day.”

‘A veritable tsunami’

It is an uncomfortable reality that there is more child sexual abuse material being shared online today than at any time since the internet was launched in 1983.

Authorities in the UK have confronted a 15-fold increase in reports of online child sexual abuse material in the past decade. In Australia the eSafety Commissioner described a 129% spike in reports during the early stages of the pandemic as a “veritable tsunami of this shocking material washing across the internet”.

The acting eSafety commissioner, Toby Dagg, told Guardian Australia that the issue was a “global problem”, with similar spikes recorded during the pandemic in Europe and the US.

“It’s massive,” Dagg says. “My personal view is that it is a slow-rolling catastrophe that doesn’t show any sign of slowing soon.”

Though there is a common perception that offenders are limited to the back alleys of the internet – the so-called dark web, which is heavily watched by law enforcement agencies – Dagg says there has been considerable bleed into the commercial services people use every day.

Dagg says the full suite of services “up and down the technology stack” – social media, image sharing, forums, cloud sharing, encryption, hosting services – are being exploited by offenders, particularly where “safety hasn’t been embraced as a core tenet of industry”.

The flood of reports about child sexual abuse material has come as these services have begun to look for it on their systems – most material detected today is already known to authorities, as offenders collect and trade images in “sets”.


As many of these internet companies are based in the US, their reports are made to the National Center for Missing & Exploited Children (NCMEC), a non-profit organisation that coordinates reports on the matter – and the results from 2021 are telling. Facebook reported 22m instances of child abuse imagery on its servers in 2021. Apple, meanwhile, disclosed just 160.

These reports, however, do not immediately translate into takedowns – each has to be investigated first. Even where entities like Facebook make a good faith effort to report child sexual abuse material on their systems, the sheer volume is overwhelming for authorities.

“It’s happening, it’s happening at scale and as a consequence, you have to conclude that something has failed,” Dagg says. “We are evangelists for the idea of safety by design, that safety should be built into a new service when bringing it to market.”

A fundamental design flaw

How this situation developed owes much to how the internet was built.

Historically, the spread of child sexual abuse material in Australia was limited owing to a combination of factors, including restrictive laws that controlled the importation of adult content.

Offenders often exploited existing adult entertainment supply chains to import this material and needed to form trusted networks with other like-minded individuals to obtain it.

This meant that when one was caught, all were caught.

The advent of the internet changed everything when it created a frictionless medium of communication where images, video and text could be shared near instantaneously to anyone, anywhere in the world.

Experts say the advent of the internet has allowed offenders to become very effective at finding ways to share libraries of child sexual abuse content and form dedicated communities. Photograph: John Williams/Alamy

University of New South Wales criminologist Michael Salter says the development of social media only took this a step further.

“It’s a bit like setting up a kindergarten in a nightclub. Bad things are going to happen,” he says.

Salter says a “naive futurism” among the early architects of the internet assumed the best of every user and failed to consider how bad faith actors might exploit the systems they were building.

Decades later, offenders have become very effective at finding ways to share libraries of content and form dedicated communities.

Salter says this legacy lives on: many services do not look for child sexual abuse material in their systems, and those that do often scan their servers periodically rather than take preventive steps like scanning files at the point of upload.

Meanwhile, as authorities catch up to this reality, there are also murky new frontiers being opened up by technology.

Lara Christensen, a senior lecturer in criminology with the University of the Sunshine Coast, says “virtual child sexual assault material” – video, images or text of any person who is or appears to be a child – poses new challenges.

“The key words there are ‘appears to be’,” Christensen says. “Australian legislation extends beyond protecting actual children and it acknowledges it could be a gateway to other material.”

Though this kind of material has existed for some years, Christensen’s concern is that more sophisticated technologies are opening up a whole new spectrum of offending: realistic computer-generated images of children, real photos of children made to look fictional, deep fakes, morphed photographs and text-based stories.

She says each creates new opportunities to directly harm children and/or attempt to groom them. “It’s all about accessibility, anonymity and affordability,” Christensen says. “When you put those three things in the mix, something can become a huge problem.”

A human in the loop

Over the past decade, the algorithms combating the wave of this criminal material have evolved significantly, but they are still not without issues.

One of the biggest concerns is that it’s often impossible to know where the private sector has obtained the images it has used to train its AI. These may include images of child sexual abuse or photos scraped from open social media accounts without the consent of those who uploaded them. Algorithms developed by law enforcement have traditionally relied on images of abuse captured from offenders.

This runs the risk of re-traumatising survivors whose images are being used without their consent and baking in the biases of the algorithms’ creators thanks to a problem known as “overfitting” – a situation where algorithms trained on bad or limited data return bad results.

In other words: teach an algorithm to look for apples and it may find you an Apple iPhone.

“Computers will learn exactly what you teach them,” Dalins says.
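One practical safeguard is to audit the training set before the model ever sees it, checking that no age range is under-represented. A minimal sketch follows, assuming hypothetical metadata fields of the kind My Pictures Matter collects (the AiLecs lab has not published its schema):

```ts
// Minimal sketch of a training-set audit, to catch the "limited data"
// problem before it bakes in bias. The metadata shape is an assumption.

interface PhotoMeta {
  ageAtPhoto: number; // subject's age when the photo was taken
}

function auditAgeCoverage(dataset: PhotoMeta[], bucketYears = 3): Map<string, number> {
  const counts = new Map<string, number>();
  for (const { ageAtPhoto } of dataset) {
    const lo = Math.floor(ageAtPhoto / bucketYears) * bucketYears;
    const bucket = `${lo}-${lo + bucketYears - 1} yrs`;
    counts.set(bucket, (counts.get(bucket) ?? 0) + 1);
  }
  // Sparse buckets mark age ranges the model will generalise to poorly.
  return counts;
}
```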

This is what the AiLecs lab is attempting to prove with its My Pictures Matter campaign: that it is possible to build these essential tools with the full consent and cooperation of those whose childhood images are being used.

But for all the advances in technology, Dalins says child sexual abuse investigation will always require human involvement.

“We’re not talking about identifying stuff so that [the] algorithm says x and that’s what goes to court,” he says. “We’re not seeing a time in the next five, 10 years where we would completely automate a process like this.

“You need a human in the loop.”

Meditation app Calm sacks one-fifth of staff

The US-based meditation app Calm has laid off 20% of its workforce, becoming the latest US tech startup to announce job cuts.

The firm’s boss, David Ko, said the company, which has now axed about 90 people from its 400-person staff, was “not immune” to the economic climate. “In building out our strategic and financial plan, we revisited the investment thesis behind every project and it became clear that we need to make changes,” he said in a memo to staff.

“I can assure you that this was not an easy decision, but it is especially difficult for a company like ours whose mission is focused on workplace mental health and wellness.”

Calm, founded in 2012, offers guided meditation and bedtime stories for people of all ages through its app, which received a surge of downloads triggered by the 2020 Covid lockdowns. By the end of that year, the company said the app had been downloaded more than 100 million times globally and had amassed over 4 million paying subscribers.

Investors valued the firm, which said it had been profitable since 2016, at $2bn.

In the memo, Ko went on: “We did not come to this decision lightly, but are confident that these changes will help us prioritize the future, focus on growth and become a more efficient organization.”

More than 500 startups have laid off staff this year, according to layoffs.fyi, a website that tracks such announcements.

Let there be ambient light sensing, without data theft

Six years after web security and privacy concerns surfaced about ambient light sensors in mobile phones and notebooks, browser boffins have finally implemented defenses.

The W3C, everyone’s favorite web standards body, began formulating an Ambient Light Events API specification back in 2012 to define how web browsers should handle data and events from ambient light sensors (ALS). Section 4 of the draft spec, “Security and privacy considerations,” was blank. It was a more carefree time.

Come 2015, the spec evolved to include acknowledgement of the possibility that ALS might allow data correlation and device fingerprinting, to the detriment of people’s privacy. And it suggested that browser makers might consider event rate limiting as a potential mitigation.
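For context, the API in question lets a page poll the device’s light sensor with only a few lines of script. A sketch of typical usage follows – the interface ships behind a flag in Chromium and is absent from TypeScript’s bundled DOM typings, hence the manual declaration:

```ts
// Sketch of reading the W3C Ambient Light Sensor from a web page.
// Declared manually because AmbientLightSensor is not in TypeScript's
// default DOM typings; in Chromium it sits behind a feature flag.
declare class AmbientLightSensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly illuminance: number | null; // lux; null before the first reading
  start(): void;
  stop(): void;
}

const sensor = new AmbientLightSensor({ frequency: 1 }); // request <=1 reading/sec
sensor.addEventListener('reading', () => {
  console.log(`Ambient light: ${sensor.illuminance} lux`);
});
sensor.addEventListener('error', (e) => console.error('sensor error', e));
sensor.start();
```

The frequency option is where the rate limiting suggested in 2015 bites: capping how often “reading” events fire limits how much a page can infer.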

By 2016, it became clear that allowing web code to interact with device light sensors entailed privacy and security risks beyond fingerprinting. Dr Lukasz Olejnik, an independent privacy researcher and consultant, explored the possibilities in a 2016 blog post.

Olejnik cited a number of ways in which ambient light sensor readings might be abused, including data leakage, profiling, behavioral analysis, and various forms of cross-device communication.

He described a few proof-of-concept attacks, devised with the help of security researcher Artur Janc, in a 2017 post and delved into more detail in a 2020 paper [PDF].

“The attack we devised was a side-channel leak, conceptually very simple, taking advantage of the optical properties of human skin and its reflective properties,” Olejnik explained in his paper.

“Skin reflectance only accounts for the 4-7 percent [of] emitted light but modern display screens emit light with significant luminance. We exploited these facts of nature to craft an attack that reasoned about the website content via information encoded in the light level and conveyed via the user skin, back to the browsing context tracking the light sensor readings.”

It was this technique that enabled the proof-of-concept attacks like stealing web history through inferences made from CSS changes and stealing cross origin resources, such as images or the contents of iframes.
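The underlying channel can be sketched in a few lines: render the page dark, then bright, and compare what the sensor reports. The code below is a conceptual illustration only – the timings, thresholds and measurement loop are assumptions, not the researchers’ actual proof of concept:

```ts
// Illustrative sketch of the side channel: what the screen emits bounces
// off the user and surroundings and comes back through the light sensor,
// so on-screen state leaks into readings a page can collect.
// Declared again here so the sketch stands alone.
declare class AmbientLightSensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly illuminance: number | null;
  start(): void;
  stop(): void;
}

async function meanLux(ms: number): Promise<number> {
  const sensor = new AmbientLightSensor({ frequency: 10 });
  const samples: number[] = [];
  sensor.addEventListener('reading', () => {
    if (sensor.illuminance !== null) samples.push(sensor.illuminance);
  });
  sensor.start();
  await new Promise((resolve) => setTimeout(resolve, ms));
  sensor.stop();
  return samples.reduce((sum, lux) => sum + lux, 0) / Math.max(samples.length, 1);
}

async function demonstrateLeak(): Promise<void> {
  document.body.style.background = '#000';
  const dark = await meanLux(500);
  document.body.style.background = '#fff';
  const bright = await meanLux(500);
  // If the two means are distinguishable, anything that changes the pixels
  // (a :visited link colour, a cross-origin image) is readable the same way.
  console.log(`dark ${dark} lux vs bright ${bright} lux`);
}

demonstrateLeak();
```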

Snail-like speed

Browser vendors responded in various ways. In May 2018, with the release of Firefox 60, Mozilla moved access to the W3C proximity and ambient light APIs behind flags, and applied further limitations in subsequent Firefox releases.

Apple simply declined to implement the API in WebKit, along with a number of other capabilities. Both Apple and Mozilla currently oppose a proposal for a generic sensor API.

Google took what Olejnik described in his paper as a “more nuanced” approach, limiting the precision of sensor data.

But those working on the W3C specification and on the browsers implementing the spec recognized that such privacy protections should be formalized, to increase the likelihood the API will be widely adopted and used.

So they voted to make the imprecision of ALS data normative (standard for browsers) and to require the camera access permission as part of the ALS spec.

Those changes finally landed in the ALS spec this week. As a result, Google and perhaps other browser makers may choose to make the ALS API available by default rather than hiding it behind a flag or ignoring it entirely.
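In practice, the camera-permission requirement means a page must clear a permission check before it can start the sensor. A sketch of the gated flow under the amended spec (the cast is needed because TypeScript’s typings lag the spec):

```ts
// Sketch of the permission-gated flow: under the amended spec, ambient
// light readings sit behind the 'camera' permission, so check it first.
// Declared again here so the sketch stands alone.
declare class AmbientLightSensor extends EventTarget {
  constructor(options?: { frequency?: number });
  readonly illuminance: number | null;
  start(): void;
  stop(): void;
}

async function startLightSensorIfPermitted(): Promise<void> {
  const status = await navigator.permissions.query({
    name: 'camera' as PermissionName, // cast: TS typings lag the spec
  });
  if (status.state !== 'granted') {
    console.warn('No camera permission: ambient light readings unavailable.');
    return;
  }
  const sensor = new AmbientLightSensor({ frequency: 1 });
  sensor.addEventListener('reading', () => {
    // Readings are deliberately coarse: the spec now makes imprecision
    // normative, blunting the side-channel attacks described above.
    console.log(`Ambient light: ${sensor.illuminance} lux`);
  });
  sensor.start();
}
```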



4 supports that can help employees outside of work

Everyone has different situations to deal with outside of the workplace. But that doesn’t mean the workplace can’t be a source of support.

Employers and governments alike are often striving to make workplaces better for everyone, whether it’s workplace wellbeing programmes or gender pay gap reporting.

However, life is about more than just the hours that are spent in work, and how an employer supports those other life challenges can be a major help.

Family-friendly benefits

Several companies have been launching new benefits and policies that help families and those trying to have children.

Job site Indeed announced a new ‘family forming’ benefit package earlier this year, which is designed to provide employees with family planning and fertility-related assistance.

The programme includes access to virtual care and a network of providers who can guide employees through their family-forming journey.

Vodafone Ireland introduced a new fertility and pregnancy policy in February 2022 that includes extended leave for pregnancy loss, fertility treatment and surrogacy.

And as of the beginning of 2022, Pinterest employees around the world started receiving a host of new parental benefits, including a minimum of 20 weeks’ parental leave, monetary assistance of up to $10,000 or local equivalent for adoptive parents, and four weeks of paid leave to employees who experience a loss through miscarriage at any point in a pregnancy.

Helping those experiencing domestic abuse

There are also ways to support employees going through a difficult time. Bank of Ireland introduced a domestic abuse leave policy earlier this year, which provides a range of supports to colleagues who may be experiencing domestic abuse.

Under the policy, the bank will provide both financial and non-financial support to colleagues, such as paid leave and flexibility with the work environment or schedule.

In emergency situations where an employee needs to immediately leave an abusive partner, the bank will help through paid emergency hotel accommodation or a salary advance.

In partnership with Women’s Aid, the company is also rolling out training to colleagues to help recognise the symptoms of abuse and provide guidance on how to take appropriate action.

Commenting on the policy, Women’s Aid CEO Sarah Benson said employers who implement policies and procedures for employees subjected to domestic abuse can help reduce the risk of survivors giving up work and increase “feelings of solidarity and support at a time when they may feel completely isolated and alone”.

A menopause policy

In 2021, Vodafone created a policy to support workers after a survey it commissioned revealed that nearly two-thirds of women who experienced menopause symptoms said it impacted them at work. A third of those who had symptoms also said they hid this at work. Half of those surveyed felt there is a stigma around talking about menopause, which is something Vodafone is seeking to combat through education for all staff.

Speaking to SiliconRepublic.com last year, Vodafone Ireland CEO Anne O’Leary said the company would roll out a training and awareness programme to all employees globally, including a toolkit to improve their understanding of menopause and provide guidance on how to support employees, colleagues and family members.

In Ireland, Vodafone employees are able to avail of leave for sickness and medical treatment, flexible working hours and additional care through the company’s employee assistance programme when going through the menopause.

Support hub for migrants

There are also initiatives to help people get their foot on the employment ladder.

Earlier this year, the Tánaiste, Leo Varadkar, TD, launched a new service offering education and employment supports for refugees, asylum seekers and migrants.

The Pathways to Progress platform is part of the Open Doors Initiative supporting marginalised groups to access further education, employment and entrepreneurship in Ireland.

As part of the initiative, member company Siro offered a paid 12-week internship programme for six people who are refugees. The internships include job preparation, interview skills and access to the company’s online learning portals.

Open Doors Initiative CEO Jeanne McDonagh said the chance to land a meaningful job or establish a new business is key to people’s integration into Ireland, no matter what route they took to get here.

“Some are refugees, some are living in direct provision, some will have their status newly regularised, and others will come directly for work,” she said. “Our new service aims to support all migrants in finding a decent job as they prepare to enter the Irish workforce, and to support employers as they seek to build an inclusive culture in their workplaces.”

