Why the climate-wrecking craze for crypto art really is beyond satire | John Naughton

On 24 December, the movie Don’t Look Up began streaming on Netflix following a limited release in cinemas. It’s a satirical story, directed by Adam McKay, about what happens when a lowly PhD student (played by Jennifer Lawrence) and her supervisor (Leonardo DiCaprio) discover that an Everest-size asteroid is heading for Earth. What happens is that they try to warn their fellow Earthlings about this existential threat only to find that their intended audience isn’t interested in hearing such bad news.

The movie has been widely watched but has had a pasting from critics. It was, said the Observer’s Simran Hans, a “shrill, desperately unfunny climate-change satire”. The Guardian’s Peter Bradshaw found it a “laboured, self-conscious and unrelaxed satire… like a 145-minute Saturday Night Live sketch with neither the brilliant comedy of Succession… nor the seriousness that the subject might otherwise require”.

Those complaints about crudity and OTT-ness rang a bell. It just so happens that a distinctly over-the-top satire published in 1729 attracted comparable reactions. Its author, Jonathan Swift, was an Anglo-Irish clergyman who was dean of St Patrick’s Cathedral in Dublin. Swift’s title – A Modest Proposal for Preventing the Children of Poor People from Being a Burthen to Their Parents or Country and for Making Them Beneficial to the Publick – gives only a hint of the savagery of the satire within. For the nub of the proposal was that the impoverished Irish might ease their economic troubles by selling their children as food to rich gentlemen and ladies. “A young healthy child well nursed,” it reads, “is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout.”

You get the drift. Swift’s target was the Anglo-Irish aristocracy, often absentee landlords living on the rents of their desperately poor Irish tenantry while poncing about in Mayfair. McKay’s targets are more diffuse. He aims less at a specific class than an entire way of life – at people too stupefied by consumerism, short-termism and social media, too hypnotised by the interests of big tech corporations, to worry about the future of humankind.

Which brings us, oddly enough, to a contemporary obsession – the frenzy now surrounding non-fungible tokens or NFTs. For those who have not yet noticed this obsession, an NFT is basically a traceable code that is indelibly attached to a digital object such as an image or recording. Once someone has bought that object, it becomes irrevocably registered to their ID, and so they can be said to own the code.

If it sounds abstruse, then that’s because it is. And yet, over the last 18 months or so, NFTs have become a sensation in the art world or, at any rate, in the part of it controlled by the big auction houses. Last June, Sotheby’s ran an auction of NFTs with prices ranging between $9,000 and $11m. In an earlier auction by Christie’s, a digital artwork by Mike Winkelmann, who calls himself Beeple, sold for $69m. Up to that point, Mr Winkelmann had never sold a print for more than a hundred bucks.

You can guess what this has triggered: an avalanche of wannabe Beeples, plus a lot of speculative hustlers who see a possibility of more modest jackpots for relatively little work (say a recording of your charming cat’s purr). Anyone can play at the game and there are useful DIY guides on the web for those interested in having a go.

So what’s not to like? Surely it’s a good thing that artists who have had a hard time earning a crust in the pandemic can get paid? It is. But there is one small snag: the technology that ensures the authenticity of the NFT you’ve bought is a blockchain similar to the ones that power cryptocurrencies such as bitcoin or Ethereum. And the computation needed to provide the certification that is the USP of blockchains requires massive amounts of electricity, which comes with a correspondingly heavy carbon footprint. A single transaction on the Ethereum blockchain, for example, currently requires 232.51 kWh, which is equivalent to the power consumption of an average US household over 7.86 days.
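For readers who want to sanity-check that comparison, the arithmetic is straightforward. Here is a minimal Python sketch using only the two figures quoted above (the numbers themselves are the article’s, not an independent estimate):

```python
# Back-of-the-envelope check of the figures quoted above; both numbers
# come from the article itself, not from an independent measurement.
ETH_TX_KWH = 232.51        # energy per Ethereum transaction, in kWh
HOUSEHOLD_DAYS = 7.86      # equivalent days of average US household use

household_kwh_per_day = ETH_TX_KWH / HOUSEHOLD_DAYS
print(f"Implied average US household consumption: {household_kwh_per_day:.1f} kWh/day")
# -> about 29.6 kWh/day, i.e. a single transaction uses roughly a week's
#    worth of one home's electricity.
```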

If McKay decides that he’d like to have another go at Swiftian satire, there’s an opening for him here. Nero merely fiddled while Rome burned: we’re enthusiastically bidding for NFTs while heating up the planet.

What I’ve been reading

One more thing
Why I Traded My Fancy Climbing Gear for a Pair of Ramshackle Watches is a nice blogpost by Conrad Anker that will have meaning for anyone who is cursed (or blessed) with a collecting gene.

Back to the future
The great robotics expert Rodney Brooks does his annual review of the predictions he made in 2018.

The great escape
The Haunted California Idyll of German Writers in Exile is a lovely essay by Alex Ross in the New Yorker about the intellectuals and artists who fled Hitler and wound up in LA.

Molly Russell inquest: social media ‘almost impossible’ to keep track of, says teacher

The headteacher of Molly Russell’s secondary school has told an inquest into the teenager’s death it is “almost impossible” to keep track of the risks posed to pupils by social media.

North London coroner’s court heard of the “complete and terrible shock” at Molly’s school after the 14-year-old killed herself in November 2017. Molly, from Harrow in north-west London, killed herself after viewing extensive amounts of online content related to suicide, depression, self-harm and anxiety.

Sue Maguire, the headteacher at Hatch End high school in Harrow, was asked how difficult it was for a school to stay on top of dangerous social media content.

She said: “There is a level where I want to say it is almost impossible to keep track of social media but we have to try, and we have to respond to the information as we receive it.”

Describing the school’s “shock” at Molly’s death, Maguire added that teachers had warned students about the “dangers of social media for a long time”.

She said: “Our experience of young people is that social media plays a hugely dominant role in their lives and it causes no end of issues. But we don’t present a stance that they should not use it. But it presents challenges to schools that we simply didn’t have 10 or 15 years ago.”

Oliver Sanders KC, representing the Russell family, asked Maguire whether the school was aware of the suicide and self-harm-related content available to students on sites such as Instagram.

Maguire said: “At the time, we were shocked when we saw it. But to say that we were completely shocked would be wrong because we had been warning young people about the dangers of social media for a long time.”

The deputy headteacher, Rebecca Cozens, who is also head of safeguarding at the school, told the inquest that once young people had gone “down the rabbit hole” on social media, it was a “deep one”.

Asked by Sanders whether there was an awareness of the type of material Molly had engaged with, Cozens said: “I don’t think at that time an awareness of the depth of it and how quickly it would snowball … and the intensity then, when you’re going down that rabbit hole it is a deep one.”

On Monday a senior executive at Meta, the owner of Instagram, apologised after acknowledging that Molly had viewed content that breached the platform’s content guidelines. Elizabeth Lagone, the head of health and wellbeing policy at Meta, said: “We are sorry that Molly saw content that violated our policies, and we don’t want that on the platform.”

Last week an executive at Pinterest, another platform Molly interacted with heavily before her death, said the site was not safe when the teenager used it.

The senior coroner, Andrew Walker, told the Russell family he would deliver his conclusions by the end of the week.

  • In the UK, the youth suicide charity Papyrus can be contacted on 0800 068 4141 or email pat@papyrus-uk.org, and in the UK and Ireland Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, the National Suicide Prevention Lifeline is at 800-273-8255 or chat for support. You can also text HOME to 741741 to connect with a crisis text line counsellor. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

Microsoft to kill off old access rules in Exchange Online • The Register

Microsoft next month will start phasing out Client Access Rules (CARs) in Exchange Online – and will do away with this means for controlling access altogether within a year.

CARs are being replaced with Continuous Access Evaluation (CAE) for Azure Active Directory, which can apparently pick up changes to access controls, user accounts, and the network environment in “near-real time” and enforce the latest rules and policies as needed, according to a notice this week from Microsoft’s Exchange Team.

That might be useful if suspicious activity is detected, or a user account needs to be suspended, and changes to access need to be immediate.

“Today, we are announcing the retirement of CARs in Exchange Online, to be fully deprecated by September 2023,” the advisory read. “We will send Message Center posts to tenants using client access rules to start the planning process to migrate their rules.”

CARs are used by Microsoft 365 administrators to allow or block client connections to Exchange Online based on a variety of characteristics set forth in policies and rules.

“You can prevent clients from connecting to Exchange Online based on their IP address (IPv4 and IPv6), authentication type, and user property values, and the protocol, application, service, or resource that they’re using to connect,” according to a Microsoft document from earlier this year.

For example, access can be granted to Exchange resources from specific IP addresses, with all other clients blocked. Similarly, the system can filter access to Exchange services by department or location, or based on usernames.
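Conceptually, a client access rule is just an ordered allow/block predicate over connection attributes such as IP range, protocol and user properties. The sketch below is a hypothetical Python illustration of that evaluation logic, not Microsoft’s implementation or API (CARs are actually managed through Exchange Online PowerShell); the field and rule names are invented, and the real service has its own precedence and default-access behaviour.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Hypothetical illustration only: attribute and rule names are invented and
# do not reflect Microsoft's schema or the service's real precedence rules.

@dataclass
class Connection:
    ip: str
    protocol: str        # e.g. "IMAP4", "ExchangeActiveSync", "OutlookWebApp"
    department: str

@dataclass
class Rule:
    action: str                 # "allow" or "block"
    networks: tuple = ()        # CIDR ranges the rule applies to (empty = any)
    protocols: tuple = ()       # protocols the rule applies to (empty = any)
    departments: tuple = ()     # user-property filter (empty = any)

    def matches(self, conn: Connection) -> bool:
        if self.networks and not any(ip_address(conn.ip) in ip_network(n) for n in self.networks):
            return False
        if self.protocols and conn.protocol not in self.protocols:
            return False
        if self.departments and conn.department not in self.departments:
            return False
        return True

def evaluate(rules: list[Rule], conn: Connection, default: str = "allow") -> str:
    """First matching rule wins; unmatched connections fall back to the default."""
    for rule in rules:
        if rule.matches(conn):
            return rule.action
    return default

# Example: allow IMAP4 only from the office network and block it from anywhere else.
rules = [
    Rule(action="allow", networks=("203.0.113.0/24",), protocols=("IMAP4",)),
    Rule(action="block", protocols=("IMAP4",)),
]
print(evaluate(rules, Connection("203.0.113.7", "IMAP4", "Finance")))   # allow
print(evaluate(rules, Connection("198.51.100.9", "IMAP4", "Finance")))  # block
```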

Microsoft announced the replacement, CAE, in January, touting its ability to act fast on account revocation, disablement, or deletion; password or user location changes; the detection of nefarious activity; and other such updates, according to a blog post at the time by Alex Simons, corporate vice president of product management for the Windows giant’s identity and network access division.

“On receiving such events, app sessions are immediately interrupted and users are redirected back to Azure AD to reauthenticate or reevaluate policy,” Simons wrote. “With CAE, we have introduced a new concept of zero trust authentication session management that is built on the foundation of zero trust principles – verify explicitly and assume breach.”

With this zero-trust focus, session integrity – rather than a set session duration – is what dictates a user’s authentication lifespan, we’re told.

CAE not only aims to give enterprises greater and more immediate control over access and events; users and managers may also appreciate the speed at which changes are adopted, Microsoft claims.

“Continuous access evaluation is implemented by enabling services, like Exchange Online, SharePoint Online, and Teams, to subscribe to critical Azure AD events,” Microsoft added earlier this month. “Those events can then be evaluated and enforced near real time. Critical event evaluation doesn’t rely on Conditional Access policies so it’s available in any tenant.”

Critical events can include a user account being deleted or disabled, a password being changed or reset, or multifactor authentication being enabled for a user. There are also other events, such as when an administrator explicitly revokes all refresh tokens for a user or a rogue insider is detected by Azure AD Identity Protection.
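A rough mental model of this event-driven approach (emphatically not Azure AD’s actual implementation or API, and with invented event names) is a service that subscribes to critical events and tears down sessions as soon as one arrives, rather than waiting for a token to expire:

```python
from collections import defaultdict

# Hypothetical sketch in the spirit of continuous access evaluation; event
# names and classes are invented for illustration and are not Azure AD's API.
CRITICAL_EVENTS = {
    "user_deleted", "user_disabled", "password_changed_or_reset",
    "mfa_enabled", "refresh_tokens_revoked", "risky_user_detected",
}

class SessionStore:
    def __init__(self):
        self.active = defaultdict(set)     # user -> set of live session ids

    def add(self, user: str, session_id: str) -> None:
        self.active[user].add(session_id)

    def revoke_all(self, user: str) -> None:
        dropped = self.active.pop(user, set())
        # In a real service the user would be redirected to reauthenticate and
        # policy would be re-evaluated before any new tokens were issued.
        print(f"revoked {len(dropped)} session(s) for {user}; reauthentication required")

def handle_event(store: SessionStore, event_type: str, user: str) -> None:
    """A subscribed service interrupts sessions as soon as a critical event
    arrives, instead of waiting for a fixed token lifetime to run out."""
    if event_type in CRITICAL_EVENTS:
        store.revoke_all(user)

store = SessionStore()
store.add("alice@example.com", "sess-1")
store.add("alice@example.com", "sess-2")
handle_event(store, "password_changed_or_reset", "alice@example.com")  # drops both sessions
```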

Finally, for workload identities, CAE enforces token revocation, among other things, according to Microsoft. ®

EU proposes new liability rules around AI tech to protect consumers

The current EU rules around product liability are almost 40 years old, meaning they do not cover harm caused by drones and other AI tech.

The European Commission has outlined a set of new proposals to enable people who are harmed by AI tech products to seek and receive compensation.

The proposals were published today (28 September). They are designed to comply with the EU’s 2021 AI Act proposal, which set out a framework for trust in AI-related technology.

Today’s AI Liability Directive aims to provide a clear and comprehensive structure for all Europeans to claim compensation in the event they are harmed by AI tech products, such as drones and robots.

The EU’s directive includes rules for businesses and consumers alike to abide by. Those who are harmed by AI products or tech can seek compensation just as they would if they were harmed in any other way.

The rules will make it easier for people who have been discriminated against by AI technology as part of the recruitment process, for example, to pursue legal action.

An example of harm that may be caused by tech products is data loss. Robots, drones, smart-home systems and other similar digital products must also comply with cybersecurity regulations around addressing vulnerabilities.

The directive builds on existing rules that manufacturers must follow around unsafe products – no matter how high or low-tech they are.

It is proposing a number of different strategies to modernise and adapt liability rules specifically for digital products. The existing rules around product liability in the EU are almost 40 years old, and do not cover advanced technologies such as AI.

The European commissioner for the internal market, Thierry Breton, said that the existing rules have “been a cornerstone of the internal market for four decades”.

“Today’s proposal will make it fit to respond to the challenges of the decades to come. The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transition.”

The vice-president for values and transparency, Věra Jourová, said that for AI tech to thrive in the EU, it is important for people to trust digital innovation.

She added that the new proposals would give customers “tools for remedies in case of damage caused by AI so that they have the same level of protection as with traditional technologies”. The rules will also “ensure legal certainty” for the EU’s internal market.

As well as consumer protection, the proposals are designed to foster innovation. They have laid down guarantees for the AI sector through the introduction of measures such as the right to fight a liability claim based on a presumption of causality.

The AI Liability Directive will need to be agreed with EU countries and lawmakers before it can become law.
