
One Irish multinational is in the top five giants buying up AI companies


Irish consultancy Accenture stands among US tech giants such as Apple and Google in the race to acquire AI companies.

According to GlobalData’s deals database, five companies dominate when it comes to AI business acquisitions.

Four of these companies are US tech giants, with Apple leading the pack and Google, Microsoft and Facebook following behind.

The other company in the top five is Accenture, the Irish multinational providing consultancy and professional services to businesses worldwide.

Apple’s Siri shopping spree

Tracking mergers and acquisitions from 2016 to 2020, Apple comes out on top with 25 acquisitions of AI companies over that period.

“Apple has been ramping up its acquisition of AI companies, with several deals aimed at improving Siri or creating new features on its iPhones,” said GlobalData senior analyst Nicklas Nilsson.

Nilsson said that Apple’s AI shopping spree is an effort to catch up with Google’s voice assistant and Amazon’s Alexa technology. “Siri was first on the market, but it consistently ranks below the two in terms of ‘smartness’, which is partly why Apple is far behind in smart speaker sales,” he said.

“Machine learning start-up Inductiv was acquired to improve Siri’s data, Irish voice-tech start-up Voysis was bought to improve Siri’s understanding of natural language, and PullString should make Siri easier for iOS developers to use.”

Apple has also made strategic moves to maintain its dominant position as a smartwatch maker.

“The acquisition of Xnor.ai last year was made to improve its on-edge processing capabilities, which has become important as it eliminates the need for data to be sent to the cloud, thereby improving data privacy,” said Nilsson.

Accenture’s AI buys


Accenture slides in at second place with 17 AI-centric acquisitions over the same period, covering a broad range of applications for the technology.

The acquisition of Munich-based ESR Labs was announced in March 2020 to expand Accenture’s capabilities in automotive software. Also in 2020, the Irish firm completed the acquisition of Atlanta company N3 to combine its technology with the Accenture SynOps platform and enhance data-led insights for salespeople.

The acquisition of Chicago consultancy Clarity Insights announced in December 2019 added 350 employees to Accenture’s Applied Intelligence business in North America. This strategic acquisition was focused on enterprise-scale AI, analytics and automation solutions.

A chart showing the AI companies acquired by Apple, Accenture, Google, Facebook and Microsoft from 2016 to 2020. Image: GlobalData

2017 was a big year, with six AI acquisitions by Accenture, including UK company Genfour, which became part of Accenture’s centre of excellence for intelligent automation.

Accenture’s appetite for acquisitions shows no sign of abating in 2021, though AI hasn’t been a specific area of focus. Among many deals already announced in the first quarter of the year is the acquisition of leadership and talent consultancy company Cirrus, while Germany’s Fable+ and California’s Imaginea were acquired to boost Accenture’s cloud offerings.

AI talent also in demand

Following Accenture in the top five are Google, Microsoft and Facebook. “The US is the leader in AI, and the dominance of US tech giants in the list of top acquirers also indicates that these companies have some defined AI objectives,” said Nilsson.

Between them, Apple, Google, Microsoft and Facebook undertook 60 acquisitions in the AI tech space from 2016 to 2020.

“AI has remained a key focus area for tech giants and growing competition to dominate the space has resulted in an acquisition spree among these companies,” added GlobalData analyst Aurojyoti Bose.

Bose added that job analytics data from GlobalData reveals that these top five acquirers are also on a talent-hiring spree, collectively posting more than 14,000 AI jobs during 2020 alone.


David Eagleman: ‘The working of the brain resembles drug dealers in Albuquerque’


David Eagleman, 50, is an American neuroscientist, bestselling author and presenter of the BBC series The Brain, as well as co-founder and chief executive officer of Neosensory, which develops devices for sensory substitution. His area of speciality is brain plasticity, and that is the subject of his new book, Livewired, which examines how experience refashions the brain, and shows that it is a much more adaptable organ than previously thought.

For the past half-century or more the brain has been spoken of in terms of a computer. What are the biggest flaws with that particular model?
It’s a very seductive comparison. But in fact, what we’re looking at is three pounds of material in our skulls that is essentially a very alien kind of material to us. It doesn’t write down memories, the way we think of a computer doing it. And it is capable of figuring out its own culture and identity and making leaps into the unknown. I’m here in Silicon Valley. Everything we talk about is hardware and software. But what’s happening in the brain is what I call livewire, where you have 86bn neurons, each with 10,000 connections, and they are constantly reconfiguring every second of your life. Even by the time you get to the end of this paragraph, you’ll be a slightly different person than you were at the beginning.

In what way does the working of the brain resemble drug dealers in Albuquerque?
It’s that the brain can accomplish remarkable things without any top-down control. If a child has half their brain removed in surgery, the functions of the brain will rewire themselves on to the remaining real estate. And so I use this example of drug dealers to point out that if suddenly in Albuquerque, where I happened to grow up, there was a terrific earthquake, and half the territory was lost, the drug dealers would rearrange themselves to control the remaining territory. It’s because each one has competition with his neighbours and they fight over whatever territory exists, as opposed to a top-down council meeting where the territory is distributed. And that’s really the way to understand the brain. It’s made up of billions of neurons, each of which is competing for its own territory.

You use this colonial image a lot in the book, a sense of the processes and struggles of evolution being fought out within the brain itself.
That’s exactly right. And I think this is a point of view that’s not common in neuroscience. Usually, when we look in a neuroscience textbook, we say here are the areas of the brain and everything looks like it’s getting along just fine. It belongs exactly where it is. But the argument I make in the book is, the only reason it looks that way is because the springs are all wound tight. And the competition for each neuron – each cell in the brain to stay alive against its neighbours – is a constantly waged war. This is why when something changes in the brain, for example, if a person goes blind, or loses an arm or something, you see these massive rearrangements that happen very rapidly in the brain. It’s just as the French lost their territory in North America because the British were sending more people over.

Brain waves during REM sleep. Photograph: Deco/Alamy

One of the great mysteries of the brain is the purpose of dreams. And you propose a kind of defensive theory about how the brain responds to darkness.
One of the big surprises of neuroscience was to understand how rapidly these takeovers can happen. If you blindfold somebody for an hour, you can start to see changes where touch and hearing will start taking over the visual parts of the brain. So what I realised is, because the planet rotates into darkness, the visual system alone is at a disadvantage, which is to say, you can still smell and hear and touch and taste in the dark, but you can’t see any more. I realised this puts the visual system in danger of getting taken over every night. And dreams are the brain’s way of defending that territory. About every 90 minutes a great deal of random activity is smashed into the visual system. And because that’s our visual system, we experience it as a dream, we experience it visually. Evolutionarily, this is our way of defending ourselves against visual system takeover when the planet moves into darkness.

Another mystery is consciousness. Do you think we are close to understanding what consciousness is and how it’s created?
There’s a great deal of debate about how to define consciousness, but we are essentially talking about the thing that flickers to life when you wake up in the morning. But as far as understanding why it happens, I don’t know that we’re much closer than we’ve ever been. It’s different from other scientific conundrums in that what we’re asking is, how do you take physical pieces and parts and translate that into private, subjective experience, like the redness of red, or the pain of pain or the smell of cinnamon? And so not only do we not have a theory, but we don’t really know what such a theory would look like that would explain our experience in physical or mathematical terms.

You predict that in the future we’ll be able to glean the details of a person’s life from their brains. What would that mean in terms of personal privacy and liberty?
Oh, yeah, it’s going to be a brave new world. Maybe in 100 years, maybe 500, but it’ll certainly happen. Because what we’re looking at is a physical system that gets changed and adjusted based on your experiences. What’s going on with the brain is the most complex system we’ve ever come across in our universe but fundamentally it’s physical pieces and parts and, as our computational capacities are becoming so extraordinary now, it’s just a countdown until we get there. Do we get to keep our inner thoughts private? Almost certainly we will. You can’t stick somebody in a scanner and try to ask them particular kinds of questions. But again, this will happen after our lifetime, so it’s something for the next generations to struggle with.

Do you think in the future that we’ll be able to communicate just by thinking?
Communication is a multi-step process. And so in answering your questions, I have many, many thoughts. And I’m getting it down to something that I can say that will communicate clearly what I intend. But if you were to just read my thoughts and say, “OK, give me the answer,” it would be a jumble of half-sentences and words and some random thought, like, Oh, my coffee is spilling. It’s like you wouldn’t want to read somebody’s book that hasn’t been polished by them over many iterations, but instead is burped out of their brain.

Elon Musk with the surgical robot from his August 2020 Neuralink presentation. Photograph: Neuralink/AFP/Getty Images

What are your views on Elon Musk’s Neuralink enterprise, which is developing implantable brain-machine interfaces?
There’s nothing new about it insofar as neuroscientists have been putting electrodes in people’s brains for at least 60 years now. The advance is in his technology, which is making the electrodes denser and also wireless, although even that part’s not new. I think it will be very useful in certain disease states, for example, epilepsy and depression, to be able to put electrodes directly in there and monitor and put activity in. But the mythology of Neuralink is that this is something we can all use to interface faster with our cellphones. I’d certainly like to text 50% faster, but am I going to get an open-head surgery? No, because there’s an expression in neurosurgery: when the air hits your brain, it’s never the same.

You didn’t start out academically in neuroscience. What led you there?
I majored in British and American literature. And that was my first love. But I got hooked on neuroscience because I took a number of philosophy courses. I found that we’d constantly get stuck in some philosophical conundrum. We’d spin ourselves into a quagmire and not be able to get out. And I thought, Wow, if we could understand the perceptual machinery by which we view the world, maybe we’d have a shot at answering some of these questions and actually making progress. When I finally discovered neuroscience, I read every book in the college library on the brain – there weren’t that many at the time – and I just never looked back.

How can we maximise our brain power, and what do you do to switch off?
There’s this myth that we only use 10% of our brain that, of course, is not true. We’re using 100% of our brain all the time. But the way information can be digested and fed to the brain can be very different. I think the next generation is going to be much smarter than we are. I have two small kids, and any time they want to know something, they ask Alexa or Google Home, and they get the answer right in the context of their curiosity. This is a big deal, because the brain is most flexible when it is curious about something and gets the answer. Regarding switching off, I never take any downtime and I don’t want to. I have a very clear sense of time pressure to do the next things. I hope I don’t die young, but I certainly act as though that is a possibility. One always has to be prepared to say goodbye, so I’m just trying to get everything done before that time.

Livewired by David Eagleman is published by Canongate (£9.99).


Excuse me, what just happened? Resilience is tough when your failure is due to a ‘sequence of events that was almost impossible to foresee’


When designing systems that our businesses will rely on, we do so with resilience in mind.

Twenty-five years ago, technologies like RAID and server mirroring were novel and, in some ways, non-trivial to implement; today this is no longer the case and it is a reflex action to procure multiple servers, LAN switches, firewalls, and the like to build resilient systems.

This does not, of course, guarantee us 100 per cent uptime. The law of Mr Murphy applies from time to time: if your primary firewall suffers a hardware failure, there is a tiny, but non-zero, chance that the secondary will also collapse before you finish replacing the primary.

If you have a power failure, there is a similarly micro-tangible likelihood that the generator you have tested weekly for years will choose this moment to cough stubbornly rather than roaring into life. Unless you are (or, more accurately, the nature of your business is) so risk-averse that you can justify spending on more levels of resilience to reduce the chance of an outage even further (but never, of course, to nothing).
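
To get a feel for just how small – but real – the chance of that double firewall failure is, here is a back-of-the-envelope sketch in Python; the MTBF and replacement-time figures are purely illustrative assumptions, not vendor data.

# Rough odds of the secondary failing while the primary is out of service.
# All figures are assumptions for illustration only.
mtbf_hours = 100_000       # assumed mean time between failures for one firewall
replacement_hours = 24     # assumed time to source and fit a replacement primary

failure_rate = 1 / mtbf_hours                                   # failures per hour
p_secondary_fails = 1 - (1 - failure_rate) ** replacement_hours

print(f"Chance the secondary also fails in the {replacement_hours}h window: "
      f"{p_secondary_fails:.4%}")                               # about 0.024%

Tiny, in other words, but distinctly non-zero – which is exactly Mr Murphy’s point.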

There are occasions, though, where planning for failure becomes hard.

Let us look at a recent example. In July 2020, the main telco in Jersey had a major outage because of a problem with a device providing time service to the organisation’s network. The kicker in this event was that the failed device did not fail in the way we are all used to – by making a “bang” noise and emitting smoke; had it done so, in fact, all would have been well as the secondary unit would have taken over.

Impossible

No, this was a more devious kind of time server which only part-failed. It kept running but started serving times from about 20 years in the past (by no coincidence at all this was the factory default time setting), thus confusing network infrastructure devices and causing traffic to stop flowing.

Customer dissatisfaction was palpable, of course, but as an IT specialist one does have to feel something for the company’s technical team: how many of us would ever consider, as a possible failure case, something that the technical chief described quite correctly as a “sequence of events that was almost impossible to foresee”?

(Incidentally, in a somewhat more good-news story, stepping back a moment to our point about extra layers of resilience, the same company had previously survived three offshore cables being severed… by having a fourth).

Could monitoring tools have been put in place to see issues like this when they happen? Yes, absolutely, but to do so one would first need to identify such a scenario as something that could happen. In risk-management terms, this type of failure – very high impact but infinitesimally unlikely – is the worst possible kind for a risk manager. There are theories and books about how one can contemplate and deal with such risks, the best-known probably being Nassim Nicholas Taleb’s The Black Swan, which addresses exactly this kind of risk. But if you want to try to defend against the unexpected, then at the very least you need to sit down with a significant number of people in a highly focused way, preferably with an expert in the field to guide and moderate, and work on identifying such possible “black swan” events.
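
Spotting a part-failed time source is straightforward once you have thought to look for it. As a minimal sketch – not the telco’s actual tooling, and with hostnames and the 60-second threshold as assumed placeholders – a monitoring job could compare the in-house time server against a consensus of independent references and alert on any gross disagreement:

import socket
import struct

NTP_UNIX_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_time(server: str, timeout: float = 5.0) -> float:
    """Return a server's clock as a Unix timestamp via a minimal SNTP query."""
    request = b"\x1b" + 47 * b"\x00"  # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(512)
    transmit_seconds = struct.unpack("!I", reply[40:44])[0]  # transmit timestamp, seconds field
    return transmit_seconds - NTP_UNIX_DELTA

def check_time_server(server: str, references: list[str], max_skew: float = 60.0) -> float:
    """Raise an alert if `server` disagrees grossly with the median of independent references."""
    reported = sntp_time(server)
    consensus = sorted(sntp_time(r) for r in references)[len(references) // 2]
    skew = abs(reported - consensus)
    if skew > max_skew:
        raise RuntimeError(f"{server} is {skew:.0f}s adrift from consensus - possible partial failure")
    return skew

# Hypothetical usage:
# check_time_server("time1.internal.example", ["0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"])

A clock that has quietly reverted to its factory default would fail that comparison by roughly two decades, so a check like this catches exactly the kind of part-failure described above – but only because someone first imagined it as a scenario worth checking.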

While the black swan concept is most definitely a thing to bear in mind, there is in fact a far more common problem with systems that we consider resilient – a failure to understand how the resilience works.

One particular installation at a company with an office and two data centres had point-to-point links in a triangle between the three premises, and each data centre had an internet connection. The two firewalls, one in each data centre, were configured as a resilient pair, and worked as such for years. One day internet service went down, and investigation showed that the secondary unit had lost track of the primary and had switched itself to become the primary. Having two active primaries caused split traffic flows, and hence an outage.

Predictable

In hindsight, this was completely predictable. The way the primary/secondary relationship was maintained between the devices was for the primary to send a “heartbeat” signal to the secondary every few seconds; if the secondary failed to receive the heartbeat three times, it woke up and acted as a primary. Because the devices were in separate data centres, they were connected through various pieces of technology: a LAN patch cord into a switch, into a fibre transceiver, into a telco fibre, then the same in reverse at the other end.

A fault on any one of those elements could cause the network devices to reconfigure their topology to switch data over the other way around the fibre triangle – with the change causing a network blip sufficiently long to drop three heartbeats. In fact, the only approved configuration for the primary/secondary interconnection was a crossover Ethernet cable from one device to the other: the failover code was written with the assumption that, aside perhaps from a highly unlikely sudden patch cord fault, the primary becoming invisible to the secondary meant that the former had died.
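
The flaw is easier to see in a toy model. The sketch below is not the vendor’s failover code – the interval and threshold are assumed values – but it captures the promotion rule just described, and why a blip on the inter-site link looks identical to a dead primary:

import time

HEARTBEAT_INTERVAL = 2.0  # assumption: primary sends a heartbeat every 2 seconds
MISSED_LIMIT = 3          # secondary promotes itself after three missed heartbeats

class SecondaryFirewall:
    """Toy model of the secondary unit's promotion logic."""

    def __init__(self) -> None:
        self.role = "secondary"
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self) -> None:
        # Called whenever a heartbeat frame arrives from the primary.
        self.last_heartbeat = time.monotonic()

    def tick(self) -> None:
        # The flaw: silence is taken to mean "the primary is dead", but a transient
        # fault anywhere along patch cord -> switch -> transceiver -> telco fibre
        # produces exactly the same silence while the primary is still running.
        quiet_for = time.monotonic() - self.last_heartbeat
        if self.role == "secondary" and quiet_for > MISSED_LIMIT * HEARTBEAT_INTERVAL:
            self.role = "primary"  # two active primaries: split traffic flows, outage

Over a back-to-back crossover cable the assumption is sound, because there is almost nothing between the two units that can fail; stretch the heartbeat path across two data centres and every intermediate component becomes a way to trigger a false promotion.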

Many of us have come across similar instances, where something we expected to fail over has not done so. It’s equally common, too, to come across instances where the failover works OK but then there are issues with the failback, which can be just as problematic. I recall a global WAN I once worked on where, for whatever reason, failovers from primary to secondary were so quick that you didn’t notice any interruption (the only clue was the alert from the monitoring console) but there was a pause of several seconds when failing back.

In the firewall example, even when connectivity was restored the devices would not re-synch without a reboot: remember, the only supported failure scenario was the primary dying completely, which meant that it was only at boot time that it would check to see which role its partner was playing so it could act accordingly. Until someone turned it off and back on again, there was no chance that the problem would go away.

To make our resilient systems truly resilient, then, we need to do three things.

First, we should give some thought to those “black swan” events. It may be that we cannot afford masses of time and effort to consider such low-probability risks, but at the very least we should take a conscious decision on how much or how little we will do in that respect: risk management is all about reasoning and making conscious decisions like that.

Expertise

Second, if we don’t have the knowledge of the precise way our systems and their failover mechanisms work, we must engage people who do and get the benefit of their expertise and experience… and while we’re at it, we should read the manual: nine times out of ten it will tell us how to configure things, even if it doesn’t explain why.

Finally, though, we need to test things – thoroughly and regularly. In our firewall example all potential failure modes should have been considered: if a failure of one of a handful of components could cause an outage, why not test all of them? And when we test, we need to do it for real: we don’t just test failover in the lab and then install the kit in a production cabinet, we test it once it’s in too.

This may need us to persuade the business that we need downtime – or at least potential downtime to cater for the test being unsuccessful – but if management have any sense, they will be persuadable that an approved outage during a predictable time window with the technical team standing by and watching like hawks is far better than an unexpected but entirely foreseeable outage when something breaks for real and the resilience turns out not to work.

Testing

Oh, and when you test failover and failback, run for several days in a failed-over state if you can: many problems don’t manifest instantly, and you will always learn more in a multi-day failover than in one that lasts only a couple of minutes. Bear in mind also the word “regularly” that I used alongside “thoroughly”. Even if we know there has been no change to a particular component, there may well be some knock-on effect from a change to something else. Something that used to be resilient may have become less resilient or even non-resilient because something else changed and we didn’t realise the implication – so regular resilience testing is absolutely key.

Because if something isn’t resilient, this will generally not be because of some esoteric potential failure mode that is next to impossible to anticipate and/or difficult or impossible to test. Most of the time it will be because something went wrong – or something was configured wrongly – in a way you could have emulated in a test. ®


Student entrepreneurs score with AI and haptic device


A team from Trinity and Queen’s took the top prize at the annual competition for third-level students organised by Enterprise Ireland.

Students who developed a handheld haptic device to help people feel the energy of sports matches have received the top prize at this year’s Student Entrepreneur Awards.

Field of Vision was created by Trinity College Dublin students Tim Farrelly and David Deneher, along with Omar Salem from Queen’s University Belfast.

The device aims to enable people with blindness or visual impairment to better experience sports games. It uses artificial intelligence to analyse live video feeds of games, translating what’s happening on screen to tablet devices through haptic feedback.

Field of Vision was one of 10 finalists in the competition, which is organised annually by Enterprise Ireland. The student team has won a €10,000 prize and will receive mentoring from Enterprise Ireland to develop the commercial viability of the device.

But there were several other winners at the awards ceremony, which took place virtually today (11 June).

Marion Cantillon of University College Cork won a €5,000 high-achieving merit award for her biofilm that eliminates the need for farmers to use plastic or tyres to seal pits and reduces methane emissions.

Dublin City University’s Peter Timlin and University of Limerick’s Richard Grimes also won a high-achieving merit award for their socially responsible clothing brand, Pure Clothing.

Diglot, a language learning book company founded by Trinity College Dublin students Cian Mcnally and Evan Mcgloughlin, took home a €5,000 prize. This company, which has achieved sales in 19 countries to date, weaves foreign words into English sentences in classic novels, allowing the reader to absorb new vocabulary gradually.


Ivan McPhillips, a lecturer in entrepreneurship, innovation and rural development at GMIT, won the Enterprise Ireland Academic Award.

Along with the prize money, the winners will also share a €30,000 consultancy fund to help them to turn their ideas into a commercial reality. Merit awards were given to the remaining six finalists, along with €1,500 per team.

‘Springboard for tomorrow’s business leaders’

This is the 40th year of Enterprise Ireland’s Student Entrepreneur Awards, a competition that is open to students from all third-level institutions across the country.

The winner of last year’s competition was Mark O’Sullivan of University College Cork, who developed a device to help detect brain injuries in newborns.

Leo Clancy, CEO of Enterprise Ireland, said the competition provides a platform for students to showcase their business ideas and acts as a “springboard for tomorrow’s business leaders”.

“Previous winners and finalists have gone on to achieve success both nationally and internationally,” he added.

“We’ve had over 250 entries for this year’s awards, with applicants demonstrating ingenuity in their approach to solving real-world problems across a range of sectors.”

Tánaiste and Minister for Enterprise, Trade and Employment Leo Varadkar, TD, congratulated the winners. “I’m really impressed by the calibre and ingenuity of the ideas put forward, especially given the significant challenges that came with this unprecedented year,” he said.
