Technology

Are we witnessing the dawn of post-theory science?

Voice Of EU

Isaac Newton apocryphally began thinking about gravity after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated his second law of motion to describe that relationship – one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can’t lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that – no theory, in a word. They just work and do so well. We witness the social effects of Facebook’s predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were – oversimplifications of reality. Soon, the old scientific method – hypothesise, predict, test – would be relegated to the dustbin of history. We’d stop looking for the causes of things and be satisfied with correlations.

Newton and his apocryphal apple tree. Photograph: Granger Historical Picture Archive/Alamy

With the benefit of hindsight, we can say that what Anderson saw was real (and he wasn’t alone in seeing it). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. “We have leapfrogged over our ability to even write the theories that are going to be useful for description,” says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. “We don’t even know what they would look like.”

But Anderson’s prediction of the end of theory looks to have been premature – or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what’s the best way to acquire knowledge and where does science go from here?

The first reason is that we’ve realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google’s search engines and Amazon’s hiring tools.

The second is that humans turn out to be deeply uncomfortable with theory-free science. We don’t like dealing with a black box – we want to know why.

And third, there may still be plenty of theory of the traditional kind – that is, graspable by humans – that usefully explains much but has yet to be uncovered.

So theory isn’t dead, yet, but it is changing – perhaps beyond recognition. “The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts,” says Tom Griffiths, a psychologist at Princeton University.

Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by behavioural economists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.

Daniel Kahneman, one of the founders of the prospect theory of human behaviour. Photograph: Richard Saker/The Observer

In Science last June, Griffiths’s group described how they trained a neural net on a vast dataset of decisions people took in 10,000 risky choice scenarios, then compared its accuracy at predicting further decisions with that of prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth by highlighting where the theory broke down – that is, where its predictions failed.

These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy – being guided by a rule of thumb, say – and a stockbroker’s rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.

“We’re basically using the machine learning system to identify those cases where we’re seeing something that’s inconsistent with our theory,” Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of “if… then”-type rules, which is difficult to describe mathematically, let alone in words.
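Griffiths’s approach – using a flexible learned predictor to flag where a simpler theory’s predictions fail – can be sketched in miniature. The snippet below is a hypothetical toy, not the group’s actual pipeline: the “theory” is plain expected-value maximisation, the synthetic “human” overweights small probabilities (a prospect-theory-like kink), and a nearest-neighbour memoriser stands in for the neural net.

```python
import random

random.seed(0)

def simulate_choice(p, win, loss):
    """Synthetic 'human': accepts a gamble based on a distorted
    probability weight, a prospect-theory-like kink."""
    w = p ** 0.7 / (p ** 0.7 + (1 - p) ** 0.7)   # overweights small p
    return 1 if w * win + (1 - w) * loss > 0 else 0

def theory_predict(p, win, loss):
    """The simpler 'theory': accept iff expected value is positive."""
    return 1 if p * win + (1 - p) * loss > 0 else 0

# a dataset of risky choices: (probability of winning, gain, loss)
gambles = [(random.random(), random.uniform(0, 10), random.uniform(-10, 0))
           for _ in range(400)]
choices = [simulate_choice(*g) for g in gambles]

def learned_predict(g):
    """1-nearest-neighbour memoriser standing in for the neural net."""
    i = min(range(len(gambles)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(g, gambles[j])))
    return choices[i]

# flag cases where behaviour contradicts the theory but the learned
# predictor still gets it right - the informative exceptions
anomalies = [g for g, c in zip(gambles, choices)
             if theory_predict(*g) != c and learned_predict(g) == c]
print(f"{len(anomalies)} of {len(gambles)} choices violate the simple theory")
```

Each flagged gamble is a candidate counter-example of the kind Griffiths describes: a place where the simple theory fails but the data still contains a learnable pattern.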

What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so – the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.

Some scientists are comfortable with that, even eager for it. When speech recognition pioneer Frederick Jelinek said: “Every time I fire a linguist, the performance of the speech recogniser goes up,” he meant that theory was holding back progress – and that was in the 1980s.

Or take protein structures. A protein’s function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein’s action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn’t the lack of a theory that will stop drug designers using it. “What AlphaFold does is also discovery,” she says, “and it will only improve our understanding of life and therapeutics.”

The structure of a human protein modelled by the AlphaFold program. Photograph: EMBL-EBI/AFP/Getty Images

Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don’t collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google’s and Amazon’s AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: “The data landscape we’re using is incredibly skewed.”

But while these problems certainly exist, Dayan doesn’t think they’re insurmountable. He points out that humans are biased too and, unlike AIs, “in ways that are very hard to interrogate or correct”. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.

A tougher obstacle to the new science may be our human need to explain the world – to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from reaching novel insights about the brain, of the kind that only emerge from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, “it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry”.

“Explainable AI”, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?

Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data – and hence scanning time – to produce such an image, which isn’t necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra’s group has done so. But radiologists and patients remain wedded to the image. “We humans are more comfortable with a 2D image that our eyes can interpret,” he says.
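The underlying idea – let a model find the small subset of raw measurements that carries the diagnostic signal – can be illustrated with a toy. This is a hypothetical sketch, not Chopra’s actual method: here the “scanner” produces ten noisy channels, the diagnosis depends (by construction) on only two of them, and a simple correlation score recovers that informative subset.

```python
import random

random.seed(1)

def acquire():
    """Toy 'raw scan': ten channels, but the diagnosis (by construction)
    depends only on channels 2 and 7."""
    x = [random.gauss(0, 1) for _ in range(10)]
    label = 1 if x[2] + x[7] > 0 else 0
    return x, label

train = [acquire() for _ in range(1000)]

def score(ch):
    """How strongly a channel correlates with the diagnosis."""
    return abs(sum(x[ch] * (2 * y - 1) for x, y in train)) / len(train)

ranked = sorted(range(10), key=score, reverse=True)
keep = ranked[:2]          # the subset the 'model' decides is enough

def predict(x, channels):
    return 1 if sum(x[c] for c in channels) > 0 else 0

test = [acquire() for _ in range(1000)]
acc_subset = sum(predict(x, keep) == y for x, y in test) / len(test)
acc_all = sum(predict(x, range(10)) == y for x, y in test) / len(test)
print(sorted(keep), acc_subset, acc_all)
```

In this toy, the two informative channels alone classify perfectly, while naively pooling all ten drowns the signal in noise – a crude analogue of acquiring less raw data without losing diagnostic accuracy.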

A patient undergoing MRI scanning in Moscow. Photograph: Valery Sharifulin/Tass

The final objection to post-theory science is that there is likely to be useful old-style theory – that is, generalisations extracted from discrete examples – that remains to be discovered, and that only humans can discover it, because doing so requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step “the core of the creative process”. But the reason he was writing about it was to say that for the first time, an AI had pulled it off. DeepMind had built a machine-learning program that had prompted mathematicians towards new insights – new generalisations – in the mathematics of knots.

In 2022, therefore, there is almost no stage of the scientific process where AI hasn’t left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We’ll have to learn to live with that, but we can reassure ourselves about one thing: we’re still asking the questions. As Pablo Picasso put it in the 1960s, “computers are useless. They can only give you answers.”

Best podcasts of the week: what does the bloodsucking saga Twilight tell us about society?

Picks of the week

The Big Hit Show
“Twilight is stupid; if you like it, you’re also stupid.” Why is there so much vitriol towards female Twihards? (Spoiler: misogyny.) In the first run of a series unpicking pop culture’s biggest moments – from the Obamas’ media company – Alex Pappademas starts by dissecting the wildly popular tale of teenage vampire love – and what the reactions to it say about us. Even if you’re not a fan, he raises some great questions. Hollie Richardson

Fake Psychic
Journalist Vicky Baker captivated listeners with Fake Heiress and now she investigates the fascinating story of Lamar Keene, the go-to spiritualist of 1960s America. When he hung up his questionable crystal ball he decided to reveal the tricks of supposed psychics, and Baker asks if that too was a con while pondering the authenticity of the psychics who followed. Hannah Verdier

Deep Cover: Mob Land
Animal lover, lawyer and switcher of identities Bob Cooley is the subject of Jake Halpern’s new season of the reliably mysterious podcast. Cooley was a top Chicago mob lawyer in the 70s and 80s, but what was the price when he offered to switch to the FBI’s side? This dive into corruption quizzes the key figures around him. HV

Chutzpod
This lively, engaging podcast attempts to “apply a Jewish lens to life’s toughest questions”. Hosts Rabbi Shira Stutman and one-time West Wing actor Joshua Malina cover topics ranging from reality TV shows to the Jewish “New Year of the Trees”, via the recent hostage stand-off at a synagogue in the Dallas suburb of Colleyville. Alexi Duggins

Backstage Pass with Eric Vetro
Eric Vetro is a vocal coach who’s worked with the likes of John Legend, Shawn Mendes, Camila Cabello and Ariana Grande. Here, he entertainingly lifts the curtain on their craft, talking to them about their journey in a manner that feels genuinely intimate given their pre-existing relationships. Expect some enjoyably daft voice exercises too. AD

Royally Flush investigates the monarchy’s relationship with the British slave trade. Photograph: Chris Radburn/Reuters

Chosen by Danielle Stephens

It’s fair to say that in the last couple of years the British monarchy has been put under a microscope for the way it handles its own family members, whether that be an heir to the throne and his American wife, or a prince embroiled in a civil sex abuse case. In a two-parter titled Royally Flush, however, Broccoli Productions’ Human Resources podcast goes back in time to investigate the royal family’s role in the slave trade in Britain, questioning how influential they were in trying to prevent abolition.

This is clearly a pandemic production – audio quality can sometimes be shaky – but the content is an important listen. As the country gears up to celebrate the Queen’s platinum jubilee, writer and host Moya Lothian-McLean takes us on an unexplored trip down memory lane, presenting fascinating insights into why, despite ample evidence that the monarchy was historically instrumental in propping up the slave trade in Britain, we haven’t heard so much as a sorry from Buckingham Palace.

Talking points

  • Never underestimate the skill that goes into making a good podcast. More than a year since Meghan and Harry’s audio production company Archewell signed a podcast deal with Spotify, they’ve only managed to release a single episode. Hence, presumably, the job ads Spotify posted this week looking for full-time staff to help Archewell.

  • Why not try: Smartless | Screenshot

Get in touch

If you have any questions or comments about Hear Here or any of our newsletters please email newsletters@theguardian.com

California’s net neutrality law dodges Big Telecom bullet • The Register

The US Ninth Circuit Court of Appeals on Friday upheld a lower court’s refusal to block California’s net neutrality law (SB 822), affirming that state laws can regulate internet connectivity where federal law has gone silent.

The decision is a blow to the large internet service providers that challenged California’s regulations, which prohibit network practices that discriminate against lawful applications and online activities. SB 822, for example, forbids “zero-rating” programs that exempt favored services from customer data allotments, paid prioritization, and blocking or degrading service.

In 2017, under the leadership of then-chairman Ajit Pai, the US Federal Communications Commission tossed out America’s net neutrality rules, to the delight of the internet service providers that had to comply. Then in 2018, the FCC issued an order that redefined broadband internet services, treating them as “information services” under Title I of the Communications Act instead of more regulated “telecommunications services” under Title II of the Communications Act.

California lawmaker Scott Wiener (D) crafted SB 822 to implement the nixed 2015 Open Internet Order on a state level, in an effort to fill the vacuum left by the FCC’s abdication. SB 822, the “California Internet Consumer Protection and Net Neutrality Act of 2018,” was signed into law in September 2018 and promptly challenged.

In October 2018, a group of cable and telecom trade associations sued California to prevent SB 822 from being enforced. In February 2021, Judge John Mendez of the US District Court for the Eastern District of California declined to grant the plaintiffs’ request for an injunction blocking the law.

So the trade groups took their case to the Ninth Circuit Court of Appeals, which has now rejected their arguments. While federal laws can preempt state laws, the court found that the FCC’s decision to reclassify broadband services moved those services outside its authority, opening a gap that state regulators are free to fill.

“We conclude the district court correctly denied the preliminary injunction,” the appellate ruling [PDF] says. “This is because only the invocation of federal regulatory authority can preempt state regulatory authority.

“As the D.C. Circuit held in Mozilla, by classifying broadband internet services as information services, the FCC no longer has the authority to regulate in the same manner that it had when these services were classified as telecommunications services. The agency, therefore, cannot preempt state action, like SB 822, that protects net neutrality.”

The Electronic Frontier Foundation, which supported California in an amicus brief, celebrated the decision in a statement emailed to The Register.

“EFF is pleased that the Ninth Circuit has refused to bar enforcement of California’s pioneering net neutrality rules, recognizing a very simple principle: the federal government can’t simultaneously refuse to protect net neutrality and prevent anyone else from filling the gap,” a spokesperson said.

“Californians can breathe a sigh of relief that their state will be able to do its part to ensure fair access to the internet for all, at a time when we most need it.”

There’s still the possibility that the plaintiffs – ACA Connects, CTIA, NCTA and USTelecom – could appeal to the US Supreme Court.

In an emailed statement, the organizations told us, “We’re disappointed and will review our options. Once again, a piecemeal approach to this issue is untenable and Congress should codify national rules for an open Internet once and for all.” ®

RCSI scientists find potential treatment for secondary breast cancer

An existing class of drugs known as PARP inhibitors can exploit a vulnerability in the way breast cancer cells repair their DNA, potentially preventing spread to the brain.

For a long time, there have been limited treatment options for patients with breast cancer that has spread to the brain, sometimes leaving them with just months to live. But scientists at the Royal College of Surgeons Ireland (RCSI) have found a potential treatment using existing drugs.

By tracking the development of tumours from diagnosis to their spread to the brain, a team of researchers at RCSI University of Medicine and Health Sciences and the Beaumont RCSI Cancer Centre found a previously unknown vulnerability in the way the tumours repair their DNA.

An existing class of drugs known as PARP inhibitors, often used to treat heritable cancers, can exploit this vulnerability to prevent the cancer cells from repairing their DNA, culminating in the death of those cells.

Prof Leonie Young, principal investigator of the RCSI study, said that breast cancer research focused on expanding treatment options for patients whose disease has spread to the brain is urgently needed to save the lives of those living with the disease.

“Our study represents an important development in getting one step closer to a potential treatment for patients with this devastating complication of breast cancer,” she said of the study, which was published in the journal Nature Communications.

Deaths caused by breast cancer are often a result of treatment relapses which lead to tumours spreading to other parts of the body, a condition known as secondary or metastatic breast cancer. This kind of cancer is particularly aggressive and lethal when it spreads to the brain.

The study was funded by Breast Cancer Ireland with support from Breast Cancer Now and Science Foundation Ireland.

It was carried out as an international collaboration with the Mayo Clinic and the University of Pittsburgh in the US. Apart from Prof Young, the other RCSI researchers were Dr Nicola Cosgrove, Dr Damir Varešlija and Prof Arnold Hill.

“By uncovering these new vulnerabilities in DNA pathways in brain metastasis, our research opens up the possibility of novel treatment strategies for patients who previously had limited targeted therapy options,” said Dr Varešlija.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
