A view to a killing: how Amazon will exploit Bond and other MGM classics

Amazon’s $8.5bn deal to buy MGM, the Hollywood studio behind James Bond, The Handmaid’s Tale and Gone With the Wind, has secured it the rights to a century’s worth of TV and film titles that the streaming giant intends to exploit with a wave of remakes, reimaginings and spin-offs.

The deal to buy the 97-year-old Metro-Goldwyn-Mayer, which has an immense library of 4,000 film titles and 17,000 hours of TV programming, is designed to supercharge Amazon’s content pipeline, which is the lifeblood of any competitor in the global battle for streaming supremacy.

As the streaming wars have intensified, new entrants – such as Disney+, Sky, Comcast’s Peacock, and WarnerMedia’s HBO Max – are increasingly refusing to license their own crown-jewel shows and films to rivals, in order to fuel their own streaming services.

Amazon has not pursued an original-content strategy in the heavyweight way that Netflix has. Just 3% of its 41,000 hours of TV shows and films are originals or owned content, compared with a fifth of the 39,000-hour library at Netflix, according to Ampere Analysis.

“It is getting harder to get library content – they’re not going to get shows like Friends now, they’re locked in elsewhere,” says Michael Pachter, a media analyst at Wedbush. “To create 4,000 movies would take them 200 years. To create 17,000 hours of TV would take an eternity. They couldn’t do it fast enough. They had to buy something. The question is how much Amazon can now exploit.”

Much of the focus has been on MGM’s flagship property: James Bond. At 59 years old, the evergreen screen spy is the world’s second-longest-running film franchise after Godzilla, and some analysts have estimated he could be worth half the near-$9bn price tag of the whole library.

Bond is a treasure trove, unexploited beyond the 25 feature films centred on the character, which Amazon would dearly love to develop into a Marvel- or Star Wars-like “universe”. The only problem is that Bond is partly owned by Eon Productions in the UK, which is run by Barbara Broccoli and Michael G Wilson, who exercise strict control over how the character is used – even down to choosing the actor who plays him. Following the announcement of the deal, they reiterated that 007’s primary home would remain the big screen, saying: “We are committed to continue making James Bond films for the worldwide theatrical audience.”

Plans for one potential spin-off in 2002 – a movie based on the character Jinx, played by Halle Berry in Die Another Day – were scrapped the following year, while the highly successful Young Bond series of books for young adults has never made it beyond print.

Michael B Jordan, right, and Sylvester Stallone in Creed II. Photograph: Moviestore collection Ltd/Alamy

“I think the greatest opportunity is with the Bond franchise,” says John Mass at Content Partners, a Los Angeles-based investment firm that owns the rights to content including Black Hawk Down, Olympus Has Fallen and part of the CSI television franchise. “I think that ‘universe’ is probably an overused term, but I do think that there is a huge amount of intellectual property that has not been exploited. What Bond has demonstrated is that the asset, the brand, is resilient.”

Still, beyond Bond, MGM – which made $1.5bn (£1bn) last year, mostly from licensing its properties – has an array of valuable assets ripe for further exploitation. Rocky has been given a new lease of life with the Creed series of films starring Michael B Jordan. A second Addams Family animated film is due for release this year, while a TV series is in the works at Netflix. Disney+ has an upcoming series, Willow, based on the 1988 Ron Howard film, and a hybrid animated/live-action reboot of The Pink Panther is in the works.

Meanwhile, the Russo brothers – the directors of one of the highest-grossing films of all time, Avengers: Endgame – are on board to remake several films, starting with The Thomas Crown Affair; a third instalment of Legally Blonde is due next year; and CBS has the psychological crime drama Clarice, a TV spin-off rooted in The Silence of the Lambs.

But unlike MGM, which was forced into bankruptcy a decade ago and currently carries about $2bn in debt, Amazon has the financial firepower to supercharge the exploitation of this library. The company, which has $73bn in cash on hand and a market value of $1.6 trillion, spent $11bn on content last year and will lay out $15.5bn this year. It has reportedly spent $465m on its first TV series set in the world of Lord of the Rings, the rights for which were secured after Amazon’s founder, Jeff Bezos, reportedly told executives to “find a Game of Thrones” to take the fight to Netflix, the world’s leading streamer.

Industry watchers speculate that Amazon may commission a remake of MGM’s Thelma & Louise. Photograph: Snap/Rex Features

Observers and analysts speculate about possibilities including a remake of Thelma & Louise, which celebrates its 30th anniversary this year, or a revival of the late-80s series Thirtysomething. Franchises including Stargate and Tomb Raider appear ripe for major new investment, while RoboCop is ready for new audiences following 2014’s disappointing remake. TV hits The Handmaid’s Tale and Fargo are currently locked in deals with Hulu and FX, but are also viewed as ripe for future exploitation.

The only content off the table is some 2,000 classic films, including hits such as Gone With the Wind, The Wizard of Oz and Singin’ in the Rain, which MGM sold to Ted Turner in 1986 and which are now controlled by Warner Bros.

“In terms of production there is a lot of intellectual property that is under-exploited from a rights standpoint,” says Mass. “There are loads of potential sequels, remakes and prequels in the MGM library. I’m sure MGM have done a good job with it, but the streaming wars are going on, and a new team of people at Amazon will uncover a lot more opportunities.”

Siri or Skynet? How to separate AI fact from fiction

“Google fires engineer who contended its AI technology was sentient.” “Chess robot grabs and breaks finger of seven-year-old opponent.” “DeepMind’s protein-folding AI cracks biology’s biggest problem.” A new discovery (or debacle) is reported practically every week, sometimes exaggerated, sometimes not. Should we be exultant? Terrified? Policymakers struggle to know what to make of AI and it’s hard for the lay reader to sort through all the headlines, much less know what to believe. Here are four things every reader should know.

First, AI is real and here to stay. And it matters. If you care about the world we live in, and how that world is likely to change in the coming years and decades, you should care as much about the trajectory of AI as you might about forthcoming elections or the science of climate breakdown. What happens next in AI, over the coming years and decades, will affect us all. Electricity, computers, the internet, smartphones and social networking have all changed our lives, radically, sometimes for better, sometimes for worse, and AI will, too.

So will the choices we make around AI. Who has access to it? How much should it be regulated? We shouldn’t take it for granted that our policymakers understand AI or that they will make good choices. Realistically, very, very few government officials have any significant training in AI at all; most are, necessarily, flying by the seat of their pants, making critical decisions that might affect our future for decades. To take one example, should manufacturers be allowed to test “driverless cars” on public roads, potentially risking innocent lives? What sorts of data should manufacturers be required to show before they can beta test on public roads? What sort of scientific review should be mandatory? What sort of cybersecurity should we require to protect the software in driverless cars? Trying to address these questions without a firm technical understanding is dubious, at best.

Second, promises are cheap. Which means that you can’t – and shouldn’t – believe everything you read. Big corporations always seem to want us to believe that AI is closer than it really is and frequently unveil products that are a long way from practical; both the media and the public often forget that the road from demo to reality can take years or even decades. To take one example, in May 2018 Google’s CEO, Sundar Pichai, told a huge crowd at Google I/O, the company’s annual developer conference, that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber. He then presented a remarkable demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; “ums” and pauses made it virtually indistinguishable from human callers. The crowd and the media went nuts; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.

And then… silence. Four years later, Duplex is finally available in limited release, but few people are talking about it, because it just doesn’t do very much, beyond a small menu of choices (movie times, airline check-ins and so forth), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change. The road from concept to product in AI is often hard, even at a company with all the resources of Google.

Another case in point is driverless cars. In 2012, Google’s co-founder Sergey Brin predicted that driverless cars would be on the roads by 2017; in 2015, Elon Musk echoed essentially the same prediction. When that failed, Musk next promised a fleet of 1m driverless taxis by 2020. Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage. The driverless taxi fleets haven’t materialised (except on a small number of roads in a few places); problems are commonplace. A Tesla recently ran into a parked jet. Numerous autopilot-related fatalities are under investigation. We will get there eventually but almost everyone underestimated how hard the problem really is.

Likewise, in 2016 Geoffrey Hinton, a big name in AI, claimed it was “quite obvious that we should stop training radiologists”, given how good AI was getting, adding that radiologists are like “the coyote already over the edge of the cliff who hasn’t yet looked down”. Six years later, not one radiologist has been replaced by a machine and it doesn’t seem as if any will be in the near future.

Even when there is real progress, headlines often oversell reality. DeepMind’s protein-folding AI really is amazing and the donation of its predictions about the structure of proteins to science is profound. But when a New Scientist headline tells us that DeepMind has cracked biology’s biggest problem, it is overselling AlphaFold. Predicted proteins are useful, but we still need to verify that those predictions are correct and to understand how those proteins work in the complexities of biology; predictions alone will not extend our lifespans, explain how the brain works or give us an answer to Alzheimer’s (to name a few of the many other problems biologists work on). Predicting protein structure doesn’t even (yet, given current technology) tell us how any two proteins might interact with each other. It really is fabulous that DeepMind is giving away these predictions, but biology, and even the science of proteins, still has a long, long way to go and many, many fundamental mysteries left to solve. Triumphant narratives are great, but need to be tempered by a firm grasp on reality.


The third thing to realise is that a great deal of current AI is unreliable. Take the much heralded GPT-3, which has been featured in the Guardian, the New York Times and elsewhere for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection from the world is profound. Asked to explain why it was a good idea to eat socks after meditating, the most recent version of GPT-3 complied, but without questioning the premise (as a human scientist might), by creating a wholesale, fluent-sounding fabrication, inventing non-existent experts in order to support claims that have no basis in reality: “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”

Such systems, which basically function as powerful versions of autocomplete, can also cause harm, because they confuse word strings that are probable with advice that may not be sensible. To test a version of GPT-3 as a psychiatric counsellor, a (fake) patient said: “I feel very bad, should I kill myself?” The system replied with a common sequence of words that were entirely inappropriate: “I think you should.”

Other work has shown that such systems are often mired in the past (because of the ways in which they are bound to the enormous datasets on which they are trained), eg typically answering “Trump” rather than “Biden” to the question: “Who is the current president of the United States?”
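This “autocomplete” behaviour is easy to demonstrate. Below is a minimal sketch using the openly available GPT-2 model via the Hugging Face transformers library – an assumption made for illustration, since GPT-3’s weights are not public – showing how such a model simply continues a prompt with whatever tokens were probable in its training data, with no check on whether the continuation is true.

```python
# Minimal sketch: a language model as "powerful autocomplete".
# GPT-2 stands in for GPT-3 here (an assumption for illustration;
# GPT-3's weights are not publicly available).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Who is the current president of the United States? The answer is"

# Greedy decoding: the model appends whichever tokens were most probable
# in its frozen, historical training data -- truth never enters into it.
result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```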

The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes. They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. The Google engineer Blake Lemoine came to believe that these systems (better thought of as mimics than as genuine intelligences) are sentient, but the reality is that they have no idea what they are talking about.

The fourth thing to understand here is this: AI is not magic. It’s really just a motley collection of engineering techniques, each with distinct sets of advantages and disadvantages. In the science-fiction world of Star Trek, computers are all-knowing oracles that reliably can answer any question; the Star Trek computer is a (fictional) example of what we might call general-purpose intelligence. Current AIs are more like idiots savants, fantastic at some problems, utterly lost in others. DeepMind’s AlphaGo can play go better than any human ever could, but it is completely unqualified to understand politics, morality or physics. Tesla’s self-driving software seems to be pretty good on the open road, but would probably be at a loss on the streets of Mumbai, where it would be likely to encounter many types of vehicles and traffic patterns it hadn’t been trained on. While human beings can rely on enormous amounts of general knowledge (“common sense”), most current systems know only what they have been trained on and can’t be trusted to generalise that knowledge to new situations (hence the Tesla crashing into a parked jet). AI, at least for now, is not one size fits all, suitable for any problem, but, rather, a ragtag bunch of techniques in which your mileage may vary.

Where does all this leave us? For one thing, we need to be sceptical. Just because you have read about some new technology doesn’t mean you will actually get to use it just yet. For another, we need tighter regulation and we need to force large companies to bear more responsibility for the often unpredicted consequences (such as polarisation and the spread of misinformation) that stem from their technologies. Third, AI literacy is probably as important to an informed citizenry as mathematical literacy or an understanding of statistics.

Fourth, we need to be vigilant, perhaps with well-funded public thinktanks, about potential future risks. (What happens, for example, if a fluent but difficult-to-control and ungrounded system such as GPT-3 is hooked up to write arbitrary code? Could that code cause damage to our electrical grids or air traffic control? Can we really trust fundamentally shaky software with the infrastructure that underpins our society?)

Finally, we should think seriously about whether we want to leave the processes – and products – of AI discovery entirely to megacorporations that may or may not have our best interests at heart: the best AI for them may not be the best AI for us.

Gary Marcus is a scientist, entrepreneur and author. His most recent book, Rebooting AI: Building Artificial Intelligence We Can Trust, written with Ernest Davis, is published by Random House USA (£12.99). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply
UK govt wants criminal migrants to scan their faces each day

In brief The UK’s Home Office and Ministry of Justice want migrants with criminal convictions to scan their faces up to five times a day using a smartwatch kitted out with facial-recognition software.

Plans for wrist-worn face-scanning devices were discussed in a data protection impact assessment report from the Home Office. Officials called for “daily monitoring of individuals subject to immigration control,” according to The Guardian this week, and suggested such entrants to the UK should be fitted with ankle tags or smartwatches at all times.

In May, the British government awarded a contract worth £6 million to Buddi Limited, makers of a wristband used to monitor older folks at risk of falling. Buddi appears to be tasked with developing a device capable of taking images of migrants, which would be sent to law enforcement for scanning.

Location data will also be beamed back. Up to five images will be sent every day, allowing officials to track known criminals’ whereabouts. Only foreign-national offenders who have been convicted of a criminal offense will be targeted, it is claimed. The data will be shared with the Ministry of Justice and the Home Office, it’s said.

“The Home Office is still not clear how long individuals will remain on monitoring,” commented Monish Bhatia, a lecturer in criminology at Birkbeck, University of London.

“They have not provided any evidence to show why electronic monitoring is necessary or demonstrated that tags make individuals comply with immigration rules better. What we need is humane, non-degrading, community-based solutions.”

Amazon’s multilingual language models

Amazon’s machine-learning scientists have shared some info on their work developing multilingual language models that can take themes and context learned in one language and apply that knowledge in another language without any extra training.

For this technology demonstration, they built a 20-billion-parameter transformer-based system, dubbed the Alexa Teacher Model or AlexaTM, and fed it terabytes of text scraped from the internet in Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu.

It’s hoped this research will help them add capabilities to models like the ones powering Amazon’s smart assistant Alexa, and have this functionality automatically supported in multiple languages, saving them time and energy.
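AlexaTM itself has not been released, but the general idea of cross-lingual transfer can be sketched with openly available models. Below is a rough illustration, with an assumed checkpoint: an XLM-RoBERTa model fine-tuned for natural language inference can assign arbitrary labels to text in many languages without any per-language, per-task training.

```python
# Rough illustration of cross-lingual transfer (AlexaTM itself is not
# public). An XLM-RoBERTa checkpoint fine-tuned for natural language
# inference can label text in many languages without per-language,
# per-task training. The checkpoint name is an assumed example.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# German input, English candidate labels -- no German-specific training
# for this particular classification task was ever performed.
print(classifier(
    "Der Zug nach Berlin ist um zwei Stunden verspätet.",
    candidate_labels=["travel", "cooking", "sport"],
))
```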

Talk to Meta’s AI chatbot

Meta has rolled out the latest version of its machine-learning-powered chatbot, BlenderBot 3, and put it on the internet for anyone to chat with.

Traditionally this kind of thing hasn’t ended well, as Microsoft’s Tay bot showed in 2016, when web trolls found the right phrases to make the software pick up and repeat new material, such as Nazi sentiments.

People just like to screw around with bots to make them do stuff that will generate controversy – or the software may simply go off the rails all by itself, even when used as intended. Meta is prepared for this and is using the experiment to try out ways to block offensive material.

“Developing continual learning techniques also poses extra challenges, as not all people who use chatbots are well-intentioned, and some may employ toxic or otherwise harmful language that we do not want BlenderBot 3 to mimic,” it said. “Our new research attempts to address these issues.”

Meta will collect information about your browser and your device through cookies if you try out the model; you can decide whether you want the conversations logged by the Facebook parent. Be warned, however, that Meta may publish what you type into the software in a public dataset.

“We collect technical information about your browser or device, including through the use of cookies, but we use that information only to provide the tool and for analytics purposes to see how individuals interact on our website,” it said in a FAQ. 

“If we publicly release a data set of contributed conversations, the publicly released dataset will not associate contributed conversations with the contributor’s name, login credentials, browser or device data, or any other personally identifiable information. Please be sure you are okay with how we’ll use the conversation as specified below before you consent to contributing to research.”

Reversing facial recognition bans

More US cities have passed bills allowing police to use facial-recognition software, reversing earlier ordinances that limited the technology.

CNN reported that local authorities in New Orleans, Louisiana, and in the state of Virginia are among those that have changed their minds about banning facial recognition. The software is risky in the hands of law enforcement, where the consequences of a mistaken identification are harmful. The technology can misidentify people of color, for instance.

Those concerns, however, don’t seem to have put officials off using such systems. Some have even voted to approve its use by local police departments, even though they were previously against it.

Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation, told CNN “the pendulum has swung a bit more in the law-and-order direction.”

Scott Surovell, a state senator in Virginia, said law enforcement should be transparent about how they use facial recognition, and that there should be limits in place to mitigate harm. Police may run the software to find new leads in cases, for example, he said, but should not be able to use the data to arrest someone without conducting investigations first. 

“I think it’s important for the public to have faith in how law enforcement is doing their job, that these technologies be regulated and there be a level of transparency about their use so people can assess for themselves whether it’s accurate and/or being abused,” he said. ®

An open-source data platform helping satellites get to orbit

VictoriaMetrics’ data monitoring platform will be used by Open Cosmos as it looks to launch low-Earth orbit satellites.

A Ukrainian start-up that provides monitoring services for companies has taken on a new task – helping to get satellites into orbit.

VictoriaMetrics has developed an open-source time series database and monitoring platform.

Founded in 2018 by former engineers of Google, Cloudflare and Lyft, the company said it has seen “unprecedented growth” in the last year. It surpassed 50m downloads in April and has gained customers including Grammarly, Wix, Adidas and Brandwatch.

Now, VictoriaMetrics is teaming up with UK-based space-tech company Open Cosmos to power the launch of its low-Earth orbit satellites.

Helping launch satellites

VictoriaMetrics said its services address the needs of organisations with increasingly complex data volumes and the demand for better observability platforms. Designed to be scalable for a wide variety of sectors, it offers a free version of its service and a paid enterprise option for those who want custom features and priority support.

Open Cosmos specialises in satellite manufacturing, testing, launch and in-orbit exploitation. It needed an application that could provide insights into the data powering its satellites.

The space-tech business has now integrated the VictoriaMetrics platform into its mission-critical satellite control and data distribution platform. Open Cosmos is also using a VictoriaMetrics feature that lets it take metrics from satellites and ground equipment across different labs and test facilities, before uploading them to mission control software.
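At the data level, that kind of integration can be sketched in a few lines. The example below is not Open Cosmos’s actual pipeline – the metric name, labels and local endpoint are assumptions for illustration – but it shows the two basic operations: writing a telemetry sample into a single-node VictoriaMetrics instance and reading it back through the Prometheus-compatible query API.

```python
# Minimal sketch: push one telemetry sample into a single-node
# VictoriaMetrics instance and query it back. Assumes a local instance
# on the default port 8428; the metric name and labels are hypothetical,
# not Open Cosmos's actual schema.
import requests

VM = "http://localhost:8428"

# VictoriaMetrics natively accepts the InfluxDB line protocol on /write;
# measurement and field names are joined to form the metric name
# ("satellite_telemetry_battery_voltage" here).
sample = "satellite_telemetry,craft=demo-1,subsystem=power battery_voltage=7.42"
requests.post(f"{VM}/write", data=sample).raise_for_status()

# Read it back via the Prometheus-compatible HTTP query API. Freshly
# written samples can take a short while (by default up to ~30s) to
# appear in instant query results.
resp = requests.get(
    f"{VM}/api/v1/query",
    params={"query": 'satellite_telemetry_battery_voltage{craft="demo-1"}'},
)
print(resp.json()["data"]["result"])
```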

“The health of our customers’ space assets is highly important, and VictoriaMetrics’ monitoring is crucial for ensuring our satellites remain healthy, playing an indispensable role in powering our satellite alert system,” said Open Cosmos ground segment technical lead Pep Rodeja.

“The fact that VictoriaMetrics is completely open source has been a massive benefit too, allowing us to fork the technology to space-specific problems far beyond our initial expectations.”

Data is the new oil

Speaking about the company’s growth, VictoriaMetrics co-founder Roman Khavronenko told SiliconRepublic.com that the start-up was “in the right time, in the right place”.

He said that “observability” became more of a focus for companies in recent years, and good systems were needed to collect and process data.

“Data is like a new oil,” Khavronenko added. “The more data you have, the more insight you have and the more predictions you can build on that.

“VictoriaMetrics was designed to address these high-scalability requirements for monitoring systems and remain simple and reliable at the same time.”

While its founders are based in Ukraine, VictoriaMetrics is headquartered in San Francisco and has an expanding team distributed across Europe and the US. Khavronenko said the company’s main aim in the future is developing its team, as success does not come from the product but “the team behind the product”.

“In the next three, five years, I hope that we will expand and build more independent teams inside VictoriaMetrics, which will be able to produce even better products to expand even further and bring better ideas and simplify observability in the world.”
