Technology

Is that really me? The ugly truth about beauty filters
Popping a beautifying filter on the TikTok video she was filming seemed harmless to Mia. It made it look as though she had done her makeup, took away the hint of a double chin that always bothered her, and gently altered her bone structure to make her just that bit closer to perfect.

After a while, using filters on videos became second nature – until she caught a glimpse of herself in the mirror one day and realised, to her horror, she no longer recognised her own face.

“I just felt so ugly … It’s a very scary moment,” she says.

“When you’ve got that filter up all the time … you almost disassociate from that image in the mirror because you have this expectation that you should look like that. Then when you don’t, the self-destructive thoughts start. It’s quite vile the way that you then perceive yourself.”

Live, augmented reality filters on photo- and video-based social media platforms including TikTok, Instagram and Snapchat aren’t new but they have evolved from silly hats, puppy dog ears and comically enlarged features to more subtle beautifying effects that may not be immediately obvious to other users.

As well as adding makeup, many of the popular filters that have crept into app libraries also change the face’s proportions, generally to fit female, European beauty standards, with thinner faces, smaller noses and plump lips.

Mia, who asked for her real name to not be used, says she started using filters when one of her TikTok videos unexpectedly went viral and her audience suddenly skyrocketed.

Mia: ‘I was in bed crying some nights about how ugly and disgusting I felt.’ Photograph: Jackson Gallagher/The Guardian

“I’m a bigger girl,” she says. “At that point, I was around 100kg, so it was really scary for me to have people looking at me.”

As her video clocked up more than 1m views, abusive comments started pouring in. “I was getting a lot of hate,” she says. “The filters on TikTok are so smooth and flawless – they don’t always look like a filter. So it felt so much easier to use them, just to make me feel a little bit better … but honestly, it doesn’t even look like me.

“I was in bed crying some nights about how ugly and disgusting I felt. I’m almost 30! I shouldn’t feel that way … Imagine a 10-year-old using these filters. That’s scary to me.”

There isn’t yet a full body of research on the psychological effects of these filters but Dr Jasmine Fardouly, a body image expert from the University of New South Wales, says a study she conducted last year suggests the more unattainable the beauty standard that young people are exposed to online, the more harmful it can be.

“It’s promoting a beauty ideal that’s not attainable for you,” she says. “It’s not attainable for anyone, really, because nobody looks like that. Everybody’s faces are being made to look the exact same way.

“The fact that it’s harder to know that it’s a filter may potentially be worse for the promoting of those ideals.”

When filters are used through TikTok, Instagram or Snapchat’s in-app software, a small label with the filter name appears on the video. While the introduction of these disclaimers, both in traditional and social media, has been a key focus of policymakers, Fardouly says the research so far doesn’t suggest they work.

“The research suggests that unless you show people the actual real version of that person’s appearance, it doesn’t seem to make a difference.”

There’s a strong relationship between negative body image and the use of photo editing, but Fardouly says it’s less clear which way the correlation flows: whether people’s self-esteem is lower due to the constant augmentation of their images, or whether those with low body image are more likely to use these features in the first place.

“Body dissatisfaction is an important predictor for eating disorders, and is a predictor for depression and low self-esteem … There is also a link to increased interest in cosmetic surgery.”

This is something Amy Hall-Hanson has experienced first hand. The 29-year-old has struggled with body dysmorphia for many years but says she never fixated on her lips until she started using beautifying filters for every Snapchat and Instagram photo she took.

“There are a few filters that make my lips look really nice … and it actually made me want to get them done,” she says.

“I’ve even played around with overdrawing my lips, and then I’ve stopped myself and gone, ‘Why am I doing this? Like, I’ve never had a problem with my lips before in photos …

“I would look in the mirror and my lips would look so much thinner than they probably were in real life … I’ve had to take a little bit of a break from taking photos of myself just to put that buffer in place.”

Fardouly says there are no simple solutions – but there are things that social media platforms can do to mitigate potential harm.

“I think that the algorithms could be updated to make it so more diversity is being recommended and shown to people,” she says. “The ease [with] which people can use filters [is a problem]. Especially if they’re changing the structure of the face and promoting these unattainable beauty ideals, then it would be helpful to remove those filters from the platforms.”

Instagram and its parent company Meta, formerly known as Facebook, have made some moves to limit the use of what they call “face-altering” effects. While their open-source filter creation tool, Spark AR, does allow effects that alter face shape to be uploaded, they will not appear in the “Effects Gallery”, which displays the top effects on the app at that time. Filters that add makeup or smooth skin are discoverable there, and users are still able to use the search function to find face-altering effects.

“Effects that directly promote cosmetic surgery are not allowed on Instagram,” a Facebook spokesman says.

“We want AR effects to be a positive and safe experience for our community, and we have guidelines for creating and publishing effects using Spark AR. We recognise that creators predominantly use face alteration and feature augmentation to share artistic, playful and fantasy effects, and these effects are a creative way for our community to express themselves.”

Mia: ‘We should really embrace who we are and what we look like.’ Photograph: Jackson Gallagher/The Guardian

Snapchat does not have specific restrictions on face altering or beautifying filters submitted by users through the platform’s “Lens Lab” but a spokesperson for the company says the app’s focus on private, rather than public, communication sets it apart from other social media.

“[Snapchat] was created at a time where everyone was curating a ‘perfect’ image of themselves online. Snapchat … is private by default to create an environment where people feel free to authentically be themselves.”

The spokesperson says Snapchat has “invested in an in-house sociologist who is tasked with thinking about the impact our product and features have on our community”.

“When someone sends a snap with a lens to someone else on Snapchat, the recipient is always shown which lens it is.”

TikTok doesn’t allow users to submit their own augmented reality effects; they are created by the company. The ethics of a number of its beautifying filters, including “faux freckles” and “glow”, have been the subject of intense debate among users.

TikTok declined Guardian Australia’s request for comment.

Fardouly says social media companies should not be held solely responsible for the harm caused by unattainable beauty standards.

“It’s kind of human nature … A lot of the problems with the platforms come from people’s desires and motivations offline as well. People have always wanted to present themselves positively to others, that’s not new.

“It’s just that social media really gives us the tools to control how we appear, and to really spend a lot of time investing in our self-presentation – and that’s where the harm can come from.”

For Mia, it came to a head when she was riding in the car with a friend and mentioned that she was considering fat-dissolving injections to try to get rid of her now practically invisible double chin.

“He looked at me like I was a crazy person,” she says. “He was like, ‘What are you talking about? You don’t have a double chin.’”

After staring at her eerily unfamiliar, imperfect face in the mirror, it occurred to Mia that she was no longer living up to the message she was using TikTok to send in the first place.

“Part of my content was about how we should really embrace who we are and what we look like,” she says. “But one day I kind of realised all of that content was a lie and was going to remain a lie as long as I was using filters.

“I just woke up one day and went, ‘No, if I’m posting content any more, I’m not posting with filters.’ And I haven’t.”

‘Hiring is a big challenge for the IT industry’

Citrix’s Meerah Rajavel discusses the biggest challenges in today’s IT landscape, from remote working and talent shortages to security.

Meerah Rajavel is CIO at Citrix, a multinational cloud computing company that provides server, application and desktop virtualisation, networking and cloud computing technologies.

Rajavel has more than 25 years’ experience at well-known tech companies such as McAfee, Cisco and Forcepoint. In her current role, she leads the company’s IT strategy.

What are some of the biggest challenges you’re facing in the current IT landscape?

Many companies viewed remote work as a temporary solution to the pandemic and business leaders continue to push for a return to the old days where employees work in the office every day. But we just did two polls on LinkedIn and Twitter that show this isn’t likely to happen.

That’s going to challenge a lot of organisations, because working remote isn’t easy. When it comes to addressing the technical aspects of how employees can cope and remain productive, you’ve got to walk in their shoes and understand how they leverage technology to achieve business outcomes.

The key to keeping employees engaged lies in providing consistent, secure and reliable access to the systems and information they need to get work done – wherever it needs to get done. And it takes more than just flipping the switch on technologies. Culture plays a huge role in adoption.

Another big challenge facing IT is hiring. It’s difficult to find high-quality candidates in the areas of security, design thinking and user experience, and data science and analytics right now. And there are a few reasons for this. Security remains a critical priority for CIOs. In the hybrid cloud, remote working, BYOD world we now live in, more resources are required to ensure that corporate networks and assets remain safe. And demand far exceeds supply.

When it comes to design thinking, the paradigm is shifting away from user-centric thinking toward human-and-machine thinking. This requires designers to be well versed in the possibilities of artificial intelligence, machine learning and analytics, in addition to user experience, in their workflow design process. And that’s a skill that’s not widely available.

What are your thoughts on digital transformation in a broad sense within your industry?

In the last decade, the digitalisation of everything has caused every company – regardless of industry – to become a software company. From mobile banking and virtual healthcare visits to self-driving cars and automated food prep and delivery services, software applications are embedded into nearly every aspect of the economy and our lives.

And as they embark on digital transformation initiatives to support this trend, IT leaders need to align with their business counterparts and make sure they’re collectively approaching things from an inside-out, company-wide perspective.

For me, any type of change management needs to be broken down into three key focus areas: people, process, and technology. But it’s imperative that you start with the people because without first establishing a culture around the change, it will be difficult to achieve success.

‘Digitalisation of everything has caused every company – regardless of industry – to become a software company’
– MEERAH RAJAVEL

When it comes to people, we are particularly mindful of two important elements: culture and training. First, we’ve worked to establish a culture that encourages risk taking and organisational success over individual success. Second, we’re investing in training programmes that enable individuals to confidently transition to the new technologies or way of working and be immediately effective.

In digital transformation, technology needs to be integrated into the ‘flow’ of business, which demands that IT and business embrace shared methods and processes. For process, we’ve anchored on standards like scaled agile frameworks that make culture and operational efficiency key pillars of any project, to help iterative value delivery and ease of adoption across all areas of the business.

And perhaps most important, we’re investing in the technology – including our own – to help automate and integrate workflows so we can reduce time to production, minimise disruption to the business and increase effectiveness.

What are your thoughts on how sustainability can be addressed from an IT perspective?

In embracing remote work and enabling it through technology, companies can drive their ESG goals and create a more sustainable business and future.

Using digital workspace technologies, for instance, they can give employees access to everything they need to engage and be productive wherever they happen to be, reducing the need to commute and the carbon emissions associated with doing so.

They can also eliminate the need for applications and data to reside on endpoint devices and transition from energy-intensive desktops to energy-efficient laptops to increase their energy efficiency. And because no data is required to live on these devices, they can extend the life of their equipment and reduce waste.

What big tech trends do you believe are changing the world?

We did some research that showed 93pc of business leaders think the increased digital collaboration forced by remote work has amplified more diverse voices, resulting in richer idea generation. And as flexible work becomes the norm, the vast majority expect enhanced equity and collaboration to continue and fuel an era of hyper-innovation. And this excites me.

With flexible work, I see more innovation happening to converge physical and digital experiences. Whether it’s concepts like the metaverse or technologies like AI/ML and VR/XR integrated into collaboration tools, all aim to enhance the experience and effectiveness for users in a location-agnostic fashion.

What are your thoughts on the security challenges currently facing your industry?

The threat landscape has become much more sophisticated as a result of remote and hybrid work and protecting employees has never been more critical – or difficult.

Employees want the freedom to work when, where and how they want using the devices of their choice. And to attract and retain them in what is no doubt the tightest labour market the world has ever seen and keep them engaged and productive, IT needs to serve it up, all while ensuring corporate assets and data remain safe.

It’s among the biggest challenges we face. And to overcome it, we must move beyond thinking that security and user experience are mutually exclusive and take an intelligent approach to workspace security that combines the two following the zero-trust model to give employees simple, unified access to the apps and information they need, when and where they need it, to perform at their best.

We’ve also witnessed two major software supply chain attacks in the last 12 months with SolarWinds and Log4j.

The first is an example of how easily malicious code can be remotely injected into a simple software update delivered to thousands of enterprises and government agencies worldwide. The second highlights how threat actors are increasingly targeting the vulnerabilities in third-party software components to cause widespread havoc.

All of this underscores the importance of securing the software supply chain and adopting practices like DevSecOps.

‘I was just really scared’: Apple AirTags lead to stalking complaints

In early January, Brooks Nader, a 26-year-old Sports Illustrated swimsuit model, was walking home alone from a night out in New York when she received a disturbing iPhone notification telling her she was carrying an “unknown accessory”.

“This item has been moving with you for a while,” the alert read. “The owner can see its location.”

That’s when she knew “something wasn’t right”, Nader told the NBC news program Today. Nader discovered that somebody had slipped an Apple AirTag into her coat pocket while she was sitting in a restaurant earlier. Unbeknown to her, the device tracked her location for four hours before Apple’s abuse prevention system triggered the notification to her phone.

AirTags are wireless, quarter-sized Bluetooth devices that retail for $29 each. Apple launched the product in April 2021 as tracking tools that users can pair with the company’s Find My app to help locate lost belongings, like backpacks or car keys.

Yet AirTags have proven easy to abuse – police in New York, Maryland, Idaho, Colorado, Georgia, Michigan, Texas and elsewhere, both within the US and internationally, have reported instances of AirTags being used to stalk individuals, as well as to target cars for theft.

Last week, the New Jersey Regional Operations & Intelligence Center issued a warning to police that AirTags posed an “inherent threat to law enforcement, as criminals could use them to identify officers’ sensitive locations” and personal routines.

AirTags have abuse-mitigation features, including pop-ups like the one Nader received, and an alarm that beeps at 60 decibels (a conversational volume) after the AirTag has been away from its owner for between eight and 24 hours.

Near the end of 2021, the company released a new Android app called Tracker Detect, which was designed to help people who own Androids discover suspicious AirTags near them – yet the app must be proactively downloaded and kept active to be effective, and is only compatible with Android 9 or higher.

The outcome of more anti-stalking mechanisms is that more people are realizing they are being stalked. On 14 January, police in Montgomery county, Maryland, responded to a call from a person who was stalked home from a movie theater after an AirTag was planted on their car. Around the same time, two California women called 911 after receiving a notification that their whereabouts were being tracked while out shopping. A 30 December report from the New York Times cites seven women who believe AirTags were used to surveil them. On social media, posts from mainly women sharing their own experiences of being tracked by AirTags have drawn attention to the issue, with one TikTok video from November 2021 receiving more than 31m views.

If you suspect you’re being tracked, the conventional wisdom is not to head home, but rather call – or go to – the police. However, law enforcement responses to incidents of AirTag stalking have thus far been inconsistent, and help is not always guaranteed.

When Arizona’s Kimberly Scroop went to local police after receiving an iPhone notification that she was being tracked in September last year, “they were not interested in taking a report, they didn’t take my name or phone number,” she says. “They said if I noticed someone following me, to call the police then.”

Scroop went home and made a TikTok video about her experience being tracked, thinking she should “make as much noise as possible, so there was some public record of it” online in case anything bad happened to her. “I was having a mini panic attack, just really scared,” she says in the post that has now been viewed more than 5.5m times.

In New York, Jackie’s Law – passed in 2014 to allow police to charge people using GPS tracking devices to stalk victims even if the victims have not pressed charges – contributed to police in West Seneca’s decision to subpoena Apple for information about a case involving an AirTag attached to a victim’s car bumper. Nonetheless, Nader claims she was unable to file a report after being tracked in Tribeca, New York City, as police told her no crime had been committed.

In an official statement, Apple says it will cooperate with police “to provide any available information” about unknown AirTags people discover on their person or property. “We take customer safety very seriously and are committed to AirTags’ privacy and security,” says a spokesperson.

Ultimately, their built-in anti-stalking mechanisms and the fact that they can be easily disabled when discovered render AirTags less dangerous than other forms of stalkerware. “If you really are nefarious and evil and you really want to find someone, there are things that are much better than an AirTag,” in the $100 to $300 range, says Jon Callas, director of technology projects at the Electronic Frontier Foundation.

Indeed, stalking affects an estimated 7.5 million people in the United States each year, and one in four victims report being stalked through some form of technology, according to the Stalking Prevention Awareness & Resource Center. And it’s on the rise: a 2021 international study by the security company Norton found the number of devices reporting stalkerware daily “increased markedly by 63% between September 2020 and May 2021” with the 30-day average increasing from 48,000 to 78,000 detections. There are thousands of different stalkerware variants, such as Cerberus, GPS tracking devices and Tile, a Bluetooth-enabled AirTag competitor that announced a partnership with Amazon last spring.

To Callas, the conversation around AirTags is drawing much-needed attention to the potential for technology to be misused; he hopes more people will consider the safety risks of tracking devices, regardless of how innocent they seem. “If you make a generalized technology that helps you find your lost keys, it can help you find anything,” he says, “and that includes people”.

UK mulls making MSPs subject to mandatory security standards • The Register

Small and medium-sized managed service providers (MSPs) could find themselves subject to the Network and Information Systems Regulations under government plans to tighten cybersecurity laws – and have got three months to object to the tax hikes that will follow.

Plans to amend the EU-derived Network and Information Systems Regulations (NIS) are more likely than ever to see SMEs brought into scope, as The Register reported last year when these plans were first floated.

NIS is the main law controlling security practices in the UK today. Currently a straight copy of the EU NIS Directive, one of the benefits of Brexit leapt upon by the Department for Digital, Culture, Media and Sport (DCMS) is the new ability to amend NIS’s reporting thresholds.

Bringing MSPs under NIS “would provide a baseline for expected cybersecurity provision and better protect the UK economy and critical national infrastructure from cyber security threats,” as UK.gov said in a consultation document issued on Wednesday. Its plans are for MSPs, currently not subject to NIS, to be brought into the fold. This includes defining what an MSP does, legally, and possibly ending NIS’ existing exemption on SMEs.

“The government recognises the strong need to minimise regulatory burden on small and micro-businesses particularly in a rapidly evolving industry such as this. However, recent incidents have highlighted the scale of risk that can be associated with managed service providers – regardless of their size,” said the consultation document.

In essence, if an “operator of essential services” or a critical national infrastructure business outsources something to your MSP, prepare for NIS compliance.

And the flip side: money

Enforcement of NIS is carried out by the ICO, which is getting a funding bonus if Parliament nods through the NIS amendments. Initially coming from general taxation, in time DCMS wants to “extend the existing cost recovery provisions to allow regulators (for example, Ofcom, Ofgem, and the ICO) to recover the entirety of reasonable implementation costs from the companies that they regulate.”

SMEs across the whole British economy are already familiar with this kind of “cost recovery” activity through stealth taxes such as the ICO’s data protection registration fee.

Andy Kays, chief exec of a managed detection and response firm in London called Socura, agreed that “further market intervention is required to help raise the bar to protect the UK economy.”

“However,” he added, “I do believe that interventions like Cyber Essentials, GDPR and NIS have raised the profile of cyber and data security in the UK, and have improved understanding and investment where they are applicable among businesses.”

Jake Moore, global cybersecurity advisor with Slovakian infosec firm ESET, also agreed, saying in a statement: “Essential services are desperately in need of better protection so these new laws will help direct businesses into a more secure offering with the help and direction required. Laws often may seem like they do not go far enough but digital crime is fast paced and the goal posts constantly move making such plans difficult to project or even become out of date by the time they land.”

The consultation closes on 22 April. As well as questions about money, DCMS is also asking whether the regs should be extended to SMEs and how detailed they ought to be. Have your say via these 66 pre-formatted questions. ®
