Technology

Are we witnessing the dawn of post-theory science?

Voice Of EU

Isaac Newton apocryphally began thinking about gravity after an apple fell on his head. Much experimentation and data analysis later, he realised there was a fundamental relationship between force, mass and acceleration. He formulated his second law of motion to describe that relationship – one that could be expressed as an equation, F=ma – and used it to predict the behaviour of objects other than apples. His predictions turned out to be right (if not always precise enough for those who came later).

Contrast how science is increasingly done today. Facebook’s machine learning tools predict your preferences better than any psychologist. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.

You can’t lift a curtain and peer into the mechanism. They offer up no explanation, no set of rules for converting this into that – no theory, in a word. They just work and do so well. We witness the social effects of Facebook’s predictions daily. AlphaFold has yet to make its impact felt, but many are convinced it will change medicine.

Somewhere between Newton and Mark Zuckerberg, theory took a back seat. In 2008, Chris Anderson, the then editor-in-chief of Wired magazine, predicted its demise. So much data had accumulated, he argued, and computers were already so much better than us at finding relationships within it, that our theories were being exposed for what they were – oversimplifications of reality. Soon, the old scientific method – hypothesise, predict, test – would be relegated to the dustbin of history. We’d stop looking for the causes of things and be satisfied with correlations.

Newton and his apocryphal apple tree.
Newton and his apocryphal apple tree. Photograph: Granger Historical Picture Archive/Alamy

With the benefit of hindsight, we can say that what Anderson saw is true (he wasn’t alone). The complexity that this wealth of data has revealed to us cannot be captured by theory as traditionally understood. “We have leapfrogged over our ability to even write the theories that are going to be useful for description,” says computational neuroscientist Peter Dayan, director of the Max Planck Institute for Biological Cybernetics in Tübingen, Germany. “We don’t even know what they would look like.”

But Anderson’s prediction of the end of theory looks to have been premature – or maybe his thesis was itself an oversimplification. There are several reasons why theory refuses to die, despite the successes of such theory-free prediction engines as Facebook and AlphaFold. All are illuminating, because they force us to ask: what’s the best way to acquire knowledge and where does science go from here?

The first reason is that we’ve realised that artificial intelligences (AIs), particularly a form of machine learning called neural networks, which learn from data without having to be fed explicit instructions, are themselves fallible. Think of the prejudice that has been documented in Google’s search engines and Amazon’s hiring tools.

The second is that humans turn out to be deeply uncomfortable with theory-free science. We don’t like dealing with a black box – we want to know why.

And third, there may still be plenty of theory of the traditional kind – that is, graspable by humans – that usefully explains much but has yet to be uncovered.

So theory isn’t dead, yet, but it is changing – perhaps beyond recognition. “The theories that make sense when you have huge amounts of data look quite different from those that make sense when you have small amounts,” says Tom Griffiths, a psychologist at Princeton University.

Griffiths has been using neural nets to help him improve on existing theories in his domain, which is human decision-making. A popular theory of how people make decisions when economic risk is involved is prospect theory, which was formulated by psychologists Daniel Kahneman and Amos Tversky in the 1970s (it later won Kahneman a Nobel prize). The idea at its core is that people are sometimes, but not always, rational.

Daniel Kahneman, one of the founders of the prospect theory of human behaviour.
Daniel Kahneman, one of the founders of the prospect theory of human behaviour. Photograph: Richard Saker/The Observer

In Science last June, Griffiths’s group described how they trained a neural net on a vast dataset of decisions people took in 10,000 risky choice scenarios, then compared its predictions of further decisions with those of prospect theory. They found that prospect theory did pretty well, but the neural net showed its worth by highlighting where the theory broke down – that is, where its predictions failed.

These counter-examples were highly informative, Griffiths says, because they revealed more of the complexity that exists in real life. For example, humans are constantly weighing up probabilities based on incoming information, as prospect theory describes. But when there are too many competing probabilities for the brain to compute, they might switch to a different strategy – being guided by a rule of thumb, say – and a stockbroker’s rule of thumb might not be the same as that of a teenage bitcoin trader, since it is drawn from different experiences.

“We’re basically using the machine learning system to identify those cases where we’re seeing something that’s inconsistent with our theory,” Griffiths says. The bigger the dataset, the more inconsistencies the AI learns. The end result is not a theory in the traditional sense of a precise claim about how people make decisions, but a set of claims that is subject to certain constraints. A way to picture it might be as a branching tree of “if… then”-type rules, which is difficult to describe mathematically, let alone in words.
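A toy sketch can make that picture concrete. Everything below is invented for illustration – the thresholds, the probability-weighting exponent and the rule-of-thumb fallback are hypothetical, not the Princeton group’s actual model – but it shows the shape of a branching “if… then” claim that is easy to run yet hard to state as a single equation:

```python
# A hypothetical branching "if... then" decision rule, loosely in the
# spirit of prospect theory. All numbers here are made up for illustration.

def choose(probs, payoffs, sure_amount):
    """Pick between a risky gamble and a sure amount."""
    # Rule-of-thumb branch: with too many outcomes to weigh up,
    # fall back on a simple heuristic and take the sure thing.
    if len(probs) > 4:
        return "sure"

    # Otherwise weigh the probabilities, overweighting small ones
    # (a standard prospect-theory-style weighting; exponent invented).
    def weight(p):
        return p ** 0.7 / (p ** 0.7 + (1 - p) ** 0.7) ** (1 / 0.7)

    subjective_value = sum(weight(p) * v for p, v in zip(probs, payoffs))
    return "gamble" if subjective_value > sure_amount else "sure"

print(choose([0.5, 0.5], [100, 0], 40))  # two outcomes: weighed normally
print(choose([0.2] * 5, [10] * 5, 5))    # five outcomes: heuristic branch
```

The point is the structure: separate claims holding under separate conditions, stitched together by branches, rather than one formula that covers everything.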

What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so – the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.

Some scientists are comfortable with that, even eager for it. When voice recognition software pioneer Frederick Jelinek said: “Every time I fire a linguist, the performance of the speech recogniser goes up,” he meant that theory was holding back progress – and that was in the 1980s.

Or take protein structures. A protein’s function is largely determined by its structure, so if you want to design a drug that blocks or enhances a given protein’s action, you need to know its structure. AlphaFold was trained on structures that were derived experimentally, using techniques such as X-ray crystallography, and at the moment its predictions are considered more reliable for proteins where there is some experimental data available than for those where there is none. But its reliability is improving all the time, says Janet Thornton, former director of the EMBL European Bioinformatics Institute (EMBL-EBI) near Cambridge, and it isn’t the lack of a theory that will stop drug designers using it. “What AlphaFold does is also discovery,” she says, “and it will only improve our understanding of life and therapeutics.”

The structure of a human protein modelled by the AlphaFold program.
The structure of a human protein modelled by the AlphaFold program. Photograph: EMBL-EBI/AFP/Getty Images

Others are distinctly less comfortable with where science is heading. Critics point out, for example, that neural nets can throw up spurious correlations, especially if the datasets they are trained on are small. And all datasets are biased, because scientists don’t collect data evenly or neutrally, but always with certain hypotheses or assumptions in mind, assumptions that worked their way damagingly into Google’s and Amazon’s AIs. As philosopher of science Sabina Leonelli of the University of Exeter explains: “The data landscape we’re using is incredibly skewed.”

But while these problems certainly exist, Dayan doesn’t think they’re insurmountable. He points out that humans are biased too and, unlike AIs, “in ways that are very hard to interrogate or correct”. Ultimately, if a theory produces less reliable predictions than an AI, it will be hard to argue that the machine is the more biased of the two.

A tougher obstacle to the new science may be our human need to explain the world – to talk in terms of cause and effect. In 2019, neuroscientists Bingni Brunton and Michael Beyeler of the University of Washington, Seattle, wrote that this need for interpretability may have prevented scientists from making novel insights about the brain, of the kind that only emerges from large datasets. But they also sympathised. If those insights are to be translated into useful things such as drugs and devices, they wrote, “it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry”.

“Explainable AI”, which addresses how to bridge the interpretability gap, has become a hot topic. But that gap is only set to widen and we might instead be faced with a trade-off: how much predictability are we willing to give up for interpretability?

Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data – and hence scanning time – to produce such an image, which isn’t necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra’s group has done so. But radiologists and patients remain wedded to the image. “We humans are more comfortable with a 2D image that our eyes can interpret,” he says.

A patient undergoing MRI scanning in Moscow.
A patient undergoing MRI scanning in Moscow. Photograph: Valery Sharifulin/Tass

The final objection to post-theory science is that there is likely to be useful old-style theory – that is, generalisations extracted from discrete examples – that remains to be discovered and only humans can do that because it requires intuition. In other words, it requires a kind of instinctive homing in on those properties of the examples that are relevant to the general rule. One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.

In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step “the core of the creative process”. But the reason he was writing about it was to say that for the first time, an AI had pulled it off. DeepMind had built a machine-learning program that had prompted mathematicians towards new insights – new generalisations – in the mathematics of knots.

In 2022, therefore, there is almost no stage of the scientific process where AI hasn’t left its footprint. And the more we draw it into our quest for knowledge, the more it changes that quest. We’ll have to learn to live with that, but we can reassure ourselves about one thing: we’re still asking the questions. As Pablo Picasso put it in the 1960s, “computers are useless. They can only give you answers.”



GeckoLinux Rolling incorporates kernel 5.16 • The Register


Most distros haven’t got to 5.15 yet, but openSUSE’s downstream project GeckoLinux boasts 5.16 of the Linux kernel and the latest Cinnamon desktop environment.

Some of the big-name distros have lots of downstream projects. Debian has been around for decades so has umpteen, including Ubuntu, which has dozens of its own, including Linux Mint, which is arguably more popular a desktop than its parent. Some have only a few, such as Fedora. As far as we know, openSUSE has just the one – GeckoLinux.

openSUSE, the SUSE-sponsored community distro, has two main editions: the stable Leap, which has a slow-moving release cycle synced with the commercial SUSE Linux Enterprise; and Tumbleweed, its rolling-release distro, which gets substantial updates pretty much every day. GeckoLinux does its own editions of both: its remix of Leap is called “GeckoLinux Static”, and its remix of Tumbleweed is called “GeckoLinux Rolling”.

In some ways, GeckoLinux is to openSUSE as Mint is to Ubuntu. They take the upstream distro and change a few things around to give what they feel is a better desktop experience. So, while openSUSE has a unified installation disk image, which lets you pick which desktop you want, GeckoLinux uses a more Ubuntu-like model. Each disk image is a Live image, so you boot right into the desktop, give it a try, and only then install if you like what you see. That means that GeckoLinux offers multiple different disk images, one per desktop. It uses the Calamares cross-distro installation program.

SUSE has long been fond of less common Linux filesystems. When your author first used it, around version 5 or 6, it had ReiserFS when everyone else was on ext2. Later it used SGI’s XFS, and later still, Btrfs for the root partition and XFS for home. These days, it’s Btrfs and nothing but.

Not everyone is such an admirer. Even after 12 years, if you want to know how much free space you have, Btrfs doesn’t give a straight answer to the df command. It does have a btrfsck tool to repair damaged filesystems, but the developers recommend you don’t use it.

With GeckoLinux, these worries disappear because it replaces Btrfs with plain old ext4. There are some nice cosmetic touches, such as reorganised panel layouts, some quite nicely clean and restrained desktop themes, and better font rendering. Unlike Mint, though, GeckoLinux doesn’t add its own software: the final installed OS contains only standard openSUSE components from the standard openSUSE software repositories, plus some from the third-party Packman repository – which is where most openSUSE users get their multimedia codecs and things from.

We tried the new Cinnamon Rolling edition on our trusty Thinkpad T420, and it worked well. Because openSUSE doesn’t include any proprietary drivers or firmware, the machine’s Wi-Fi controller didn’t work right. (Oddly, it was detected and could see networks, but not connect to them.) So we had to use an Ethernet cable – but after an update and installing the kernel firmware package, all was well.

GeckoLinux did have problems with the machine’s hybrid Intel/Nvidia graphics once the Nvidia proprietary driver was installed. That’s not uncommon – Deepin and Ubuntu DDE had similar issues.

This does reveal a small Gecko gotcha. Tumbleweed changes fast, and although it gets a lot of automated testing, sometimes stuff breaks. All rolling-release distros do. Component A depends on a specific version of Component B, but B just got updated and now A won’t work until it gets an update too, a day or two later.

This is where upstream Tumbleweed’s use of Btrfs can be handy. Btrfs supports copy-on-write snapshots, and openSUSE bundles a tool called Snapper which makes it easy to roll back breaking changes. This is a pivotal feature of SUSE’s MicroOS. In time, thanks to ZFS, this will come to Ubuntu too.

GeckoLinux doesn’t use Btrfs so doesn’t have snapshots, meaning when things break, you have to troubleshoot and fix it the old-fashioned way. If only for that reason, we’d recommend the GeckoLinux Static release channel.

That said, until we broke it by playing with GPU drivers, it worked well. Notably, it could mount the test box’s Windows partition using the new in-kernel ntfs3 driver just fine. Fedora 35 failed to boot when we tried that, so that’s a definite win for GeckoLinux.

For Ubuntu or Fedora users who want to give openSUSE a go, GeckoLinux gives a slightly more familiar and straightforward installation experience. The author is especially fond of the Xfce edition and ran it for several years. The system-wide all-in-one YaST config tool in particular is a big win. ®




Globalization Partners to create 160 new jobs at Galway EMEA office


Recruitment tech company Globalization Partners is doubling its staff headcount in Galway to 320 in 2022 to aid its continuing growth.

Recruitment technology company Globalization Partners has announced plans to create 160 new jobs at its Irish base in Galway. The jobs boost will see the company double its Galway staff headcount to 320 in 2022. Jobs will be available across the board at the company’s Galway office, which serves as its EMEA centre of excellence.

The announcement comes following a major funding injection for the international firm. Globalization Partners recently raised $200m in funding from Vista Credit Partners, an organisation focused on the enterprise software, data and technology markets. The investment now values Globalization Partners at $4.2bn.

While its Galway facility will benefit from a major jobs boost, the company plans to continue to expand its share in the global remote working market. As well as the Galway growth, the company will also be expanding its teams in other locations.


Globalization Partners provides tech to other remote-first teams all over the world. Its platform simplifies and automates entity access, payroll, time and expense management, benefits, data and reporting, performance management, employee status changes and locally compliant contract generation. Its customer base includes CoinDesk, TaylorMade and Chime. The company’s new customer acquisition increased two-and-a-half fold from 2020 to 2021.

“Globalization Partners is uniquely positioned to capitalise on the massive opportunity we see ahead of us,” said Nicole Sahin, the company’s CEO and founder.

Sahin said her company’s combination of tech with its global team of HR, legal and customer service experts “who understand the local customs, regulatory and legal requirements in each geography we serve” were key to its success.

David Flannery, president of Vista Credit Partners said that the company’s role “in transforming the remote work industry has been truly remarkable.”

Flannery said that as a customer of Globalization Partners, his organisation had “witnessed first-hand” the company’s “best-in-class legal compliance, the quality of the user experience, and the deep expertise and support they provide”.

He added that the two companies would work to “further capitalise” on the “untapped” global remote working market, expanding their platform to new customers in new markets.

“Over the past decade, we have invested hundreds of millions of dollars in our business, building our global presence and technology platform to support the evolving and complex talent needs of growing companies,” said Bob Cahill, president of Globalization Partners. “With Vista as our investment partner, we will be able to drive further growth and continue building innovative products to meet the increasing needs of our customers at scale.”





How to speed up your broadband internet


Do a speed check

Find out the speed you are getting using a computer connected to your router via an ethernet network cable. Many routers and other devices come with one, or they cost about £5 separately.

You may also need a USB ethernet adapter (about £10) if your computer does not have a port built-in.

If you can’t connect via ethernet, use a modern phone, laptop or tablet on wifi as close to your router as possible with a clear line of sight.

Ookla’s Speedtest.net and Netflix’s Fast.com are reliable speed-testing services.

Some more advanced routers have speed testing services built into them, too. They are typically accessible via a router’s settings pages in your browser or a companion app, if they have one.

Woman setting up home office connection
Connecting your device to the router with an ethernet cable can improve speeds. Photograph: Tetra Images/Getty Images/Tetra images RF

If your broadband is slow at the router, it might be time to switch providers. Some fixed-line ISPs offer speeds in excess of 200Mbps in certain areas, while 4G/5G home broadband is an alternative.

If you are not getting near the speed your ISP advertises, you may be able to get a discount, or switch to a plan with higher speeds.

Work out what you need

When it comes to broadband the faster the better, particularly with multiple people and devices using the internet at once. However, the minimum speed needed for most online activities is fairly slow.

Video calling services, such as Zoom, typically need up to 4Mbps upload and download.

Online gaming services, such as Xbox Live, need at least 3Mbps down and 0.5Mbps up, while game streaming services need a minimum of 10Mbps down.

Video streaming, such as Netflix, needs at least 5Mbps for HD or 25Mbps for 4K content.

The median broadband speed in the UK is 50.4Mbps down and 9.8Mbps up, according to data from Ofcom in March 2021. That means that the majority of connections should be able to handle most popular services.

But bear in mind that with more than one device, or person, using your connection simultaneously, including updates and downloads when idle, slower broadband packages can quickly get choked.
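A back-of-the-envelope budget makes the point. Using the per-service minimums quoted above (the household mix and the helper function are our own illustration, not Ofcom figures):

```python
# Minimum download speeds in Mbps, as quoted above.
NEEDS_MBPS = {
    "video_call": 4,      # Zoom and similar
    "online_gaming": 3,   # Xbox Live and similar
    "game_streaming": 10,
    "hd_video": 5,        # Netflix HD
    "uhd_video": 25,      # Netflix 4K
}

def headroom(connection_mbps, activities):
    """Bandwidth left over once the listed activities are running."""
    return connection_mbps - sum(NEEDS_MBPS[a] for a in activities)

# A hypothetical evening on the UK median 50.4Mbps connection:
# one 4K stream, one HD stream, a video call and some game streaming.
left = headroom(50.4, ["uhd_video", "hd_video", "video_call", "game_streaming"])
print(f"{left:.1f}Mbps to spare")
```

Four simultaneous activities already leave only about 6Mbps of headroom on the median connection – before idle updates and background downloads take their share.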

Reposition your router

If your broadband connection is fast enough but your wifi is weak, there are things you can do. If possible, move the router closer to the centre of the house, or towards the rooms in which you need the strongest signal. Keep it in the open, not in a cabinet, and away from solid and metallic objects.

And try to position it away from dense walls, particularly those made out of concrete blockwork or with pipes and wires running through them.

Check your settings

Most modern routers will automatically select the best settings for your home, but you can manually check using the web interface of your router accessed through a browser on a computer. Consult the help pages for your ISP’s router for how to do so.

Wifi operating at 2.4GHz uses a range of frequency “channels”, only some of which do not overlap with each other. To reduce interference from your neighbours’ wifi, switch to channel 1, 6 or 11, which do not overlap, and therefore are less likely to cause or suffer interference.
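The arithmetic behind that advice is easy to check. Channel n in the 2.4GHz band is centred at 2407 + 5n MHz and is roughly 22MHz wide, so two channels overlap whenever their centres are less than 22MHz apart (a simplified model – real transmit masks are messier):

```python
def centre_mhz(channel):
    """Centre frequency of a 2.4GHz wifi channel (channels 1-13)."""
    return 2407 + 5 * channel

def overlaps(a, b):
    """True if two ~22MHz-wide channels overlap (simplified model)."""
    return abs(centre_mhz(a) - centre_mhz(b)) < 22

print(overlaps(1, 6))   # centres 25MHz apart: no overlap
print(overlaps(1, 3))   # centres 10MHz apart: they collide
```

Channels 1, 6 and 11 are spaced 25MHz apart, which is why they are the standard non-overlapping trio.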

If you have a connection under 200Mbps, enabling prioritisation or “quality of service” for your key devices might help. This stops other things from sucking up all the available bandwidth – it will prevent a game download on an Xbox cutting off a video call on your laptop, for instance.

Set a strong wifi password using at least WPA2 security, not the lowest WEP option. This will make sure no wifi thieves can log on to your network and steal your bandwidth.

Check your devices

An internet slowdown may be down to your devices rather than your router. For older computers, upgrading the wifi adapter may help. USB wifi 5 adapters cost under £15, while the latest wifi 6 models cost about £50, but you will need a compatible router to take advantage of the extra speed.

For a non-portable device, such as a media streamer or a console, use an ethernet cable if it is close to the router, as this will be faster and more reliable than wifi.

Weaker routers struggle with lots of devices connected at once. If you have about 40 devices connected, consider disconnecting unnecessary ones to help provide more bandwidth for those you need most.

Extend the wifi reach

If your wifi can’t reach parts of your house, you can extend the signal of your current router with add-on gadgets.

Powerline networking devices use your home’s power cables to transmit data. They typically cost between £20 and £70. They plug into standard electrical sockets with one connected to the router via an ethernet cable, and others placed about the home providing ethernet ports and/or wifi for your devices. The speed you get through them is dependent on the condition of your electrical wiring.

Wifi extenders (£25-70) do a similar thing, but simply connect to your router via wifi, then rebroadcast it for other devices.

A network switch (under £20) can add more ethernet ports to your router if you need to connect more devices.

Upgrade to a better router

Mesh wifi systems
Mesh wifi systems come in various shapes and sizes, spreading your broadband all over your home using a series of wirelessly interconnected satellite units. Photograph: Samuel Gibbs/The Guardian

Replacing your existing router is often the most effective way to improve your wifi, but is also the most costly. Before committing to a third-party router, speak to your ISP as it may be able to provide you with a more modern one for free. Virgin and other ISPs are currently rolling out more powerful wifi 6-capable routers.

Otherwise, there are broadly two options: a beefy single router with much more powerful wifi broadcasting ability than the cheap one provided by your ISP, or a mesh system, which uses a series of satellites dotted about your home to blanket it in wifi.

Both typically use your existing router as a modem and then broadcast their own more robust wifi network.

Single unit wifi 6 routers start at about £60 but can reach the hundreds for powerful gaming-orientated devices. They connect to your old ISP box via ethernet cable, which means they are often easier to place in a more central area of your home. Running a long ethernet cable under floorboards, carpets, behind skirting boards or picture rails, or just under furniture can help keep things neat.

Good wifi 5 mesh systems start at under £100 for a triple pack of satellites, which should be enough for most homes with connections under 200Mbps. For those with faster broadband, good tri-band wifi 6 models cost about £300.


