I’m playing online Pictionary while chatting with five people I’ve never met. This is not at all how I usually spend my Thursdays. We’ve all dropped into a virtual meeting space on a site called gather.town, which provides free customisable spaces for anyone who wants to organise a get-together without using Zoom. Gather is a virtual world and you choose an avatar before entering it: imagine a mid-80s Super Mario game in which, instead of jumping over his enemies, Mario has to go to the office. There are pixelated potted palms dotted about my screen, a couple of banks of desks and a sofa area, all rendered in that very specific 2D map style common to early computer games. I’m represented by a tiny, blocky avatar: a collection of dots arranged to look a bit like a person. As I move it around with keyboard keys, I can enter and leave conversations – when I do so, a small live video of whoever I’m talking to appears above the main screen.
It might all sound mad, but Gather is 18 months old, has 4 million users, and recently raised $26m in investment. Universities use it to create virtual campuses; individuals use it to host games nights; groups of friends throw parties on it – and workers are collaborating on it. It is trying, like hundreds of other new platforms, sites and apps, to provide us all with a solution to a very 2021 problem: despite being ubiquitous since early 2020, video calls aren’t necessarily helping us work or stay connected effectively.
Recent research from Stanford University provided evidence that the “Zoom fatigue” many of us feel is real. The study showed that the cognitive load of video conferencing is far higher than phone calls or in-person conversation. Where normally we pick up and give out valuable non-verbal cues from body language, they’re missing from video’s flat, sometimes delayed and often blurry images. We find the sustained, but often off-kilter, eye contact inherent in video calls hard to tolerate. When do you ever stare at multiple looming faces, all at once, for an hour, in real life? We find seeing ourselves on screen stressful, too, and being tied to a screen cuts down our mobility (unlike a phone call, during which we can move).
James Bore, a cybersecurity expert who runs Bores Consultancy, hosts this open office for a couple of hours every week, using Twitter and LinkedIn to invite anyone working in his field to drop in, discuss issues or make new contacts. He also runs a remote office for his own team, hosts a “pub” night in a separate room for more informal networking, and helps other businesses organise events online through his company ReuniVous. Inviting people to play games such as Pictionary lightens the tone.
Why do Bore and his guests prefer Gather, given that it does also have a video component? “Almost every other video platform is very one way,” he says. “You’ve essentially got someone delivering stuff to a group of people, so you can’t have natural interactivity.”
Have you noticed that if two people try to talk at the same time on most video calls, one voice cuts out? Not on Gather. “People can talk at the same time,” Bore says. “If you move your avatar farther away from someone, their voice will get quieter but you can still catch a bit of the conversation. You can walk up to people, go sit at a table with people, jump into a private chat, play games. You can also walk out of a conversation. It’s more natural.”
Despite attending real-life industry events for years, Bore reckons he’s gained far more useful connections in this open office with its random attendees. While some remote workers mourn spontaneous chats and water-cooler moments, “serendipity actually happens here”, he says. “Almost all of the video-conferencing software requires a reason for the conversation. You can’t just pop in and say, ‘Let’s have a chat’ like you can here.” Gather’s other neat trick is keeping the video component low-key – the videos are ranged across the top of the screen, rather than dominating, which forces you to look at just one person at a time as they speak, rather than everyone at once, just as in face-to-face conversations.
There are hundreds of other sites, platforms and apps vying to become the next Zoom or Microsoft Teams, offering remote workers more than just a gallery of faces on a screen. Some are small, such as the micro-social network phone app Totem, developed to deepen connections within a business and used by companies such as John Lewis as a sort of private Facebook; staff are encouraged to share team successes alongside photos of pets (it also churns out data on engagement and morale). Others are larger, such as Wonder, which provides a simple webpage full of bubbles, each containing a photo of a guest, moving between white circles meant to represent tables on which people can video chat with each other; Wonder raised £11m in seed funding late last year, and counts Deloitte and Harvard as users.
Ninety-seven per cent of training now takes place online and, although 70% of it is done via Microsoft Teams, according to research by HR analysts Fosway, companies including insurers Hiscox and the restaurant chain Leon are using gamified training apps. These can allow staff to be put into situations that would be hard to replicate in real life (or on a video call), while also handing out dopamine-inducing micro-rewards in the form of stars or points.
But is more screen time what any of us need? It has increased by a third, to an average of 40% of our waking hours, during the pandemic. Rahaf Harfoush, a digital anthropologist, is director of Red Thread Institute of Digital Culture and an adjunct professor at Sciences Po in Paris. “The digitalisation of in-real-life [IRL] experiences is what a lot of companies rushed to do when the pandemic struck,” she says. “Their thinking was: ‘If we did it in person, let’s do it on Zoom.’ Many of these applications don’t make sense and can add to technological fatigue.”
Professor Gary Burnett, from Nottingham University, was keenly aware of this risk when he moved one of his engineering degree modules online last autumn. Rather than defaulting to the better-known platforms, he spent much of last summer trialling different fully virtual worlds to host his classes, before settling on Mozilla Hubs, a 3D-rendered meeting space used by Nasa. As I click a link into “Nottopia”, Burnett – or rather his cartoon-like avatar, a floating, hoodie-wearing, grey-haired head and torso – meets me in the “lobby”, a semi-open air vaulted space, next to a large digitised lake.
He leads me, still floating, into the virtual pavilion where he’s about to hold a product design lesson in creating a driverless taxi. My avatar is a small, red cartoon fox, but I could have chosen from thousands of options, or built my own. I’m also floating; I move by using the arrow keys on my keyboard, changing my gaze with the cursor so that I can look around the large room, which has a mixture of bare brick and white walls, and a pale grey floor. Sunlight seems to pour in through the glass roof, casting natural-looking shadows, and most spaces have a view towards blue sky and realistic clouds. Steps and doorways lead into other spaces – a smaller area with armchairs for more private meetings, and other larger rooms for exhibitions; one huge wall is taken up by a virtual fish tank. There’s no video here – we speak via our avatars, who wobble or move in a human-like way to show who is talking.
This is a virtual world where practically anything is possible, so Burnett can conjure up a 3D taxi that hovers in the centre of the group as they discuss its features. At one point, several students enable flying mode and hover high above the car. To examine another bit of tech, they all pile inside the taxi, laughing. (It’s all the funnier as one student’s avatar is an astronaut, another’s is a parrot, and a third’s seems to be a rainbow-coloured ghost. Burnett says the students often choose avatars that reflect their personalities – the person with the parrot avatar likes ornithology.)
There’s no live video involved, and no PowerPoint or slides, just genuine and playful interaction. When a chart appears on the wall, the students whip out virtual pens and start annotating it, and Burnett has placed 3D objects around the room for them to use as they experiment and discuss. Three-quarters of his students report that Mozilla Hubs has helped them with social isolation, Burnett says. “You can see that in the way I teach – it’s not a one-way flow of information.”
His students like Nottopia so much that they come here, via a link, outside lessons and show their friends around (occasionally leaving behind vast joke 3D models, or virtual replicas of Nottingham’s famous Canada geese). “Joining in as an avatar gives you a veil of anonymity that has made everyone less awkward about speaking up in class,” says Rebekah Kay, who is doing a master’s in mechanical engineering. “In some ways, I feel more present than if I was physically there.”
Hubs and Gather are genuinely fun to use (and currently free). But there is a more corporate side to virtual life, too. The UK’s in-person events and conferencing industry was worth £42.3bn in 2018 (£800bn globally) and, one way or another, the industry wants to get back some of that revenue. “At the beginning of the pandemic, there were probably six platforms for virtual events, and now there are more than 100,” says Vanessa Lovatt, chief evangelist (her real job title) for Glisser, one such platform, which runs events for Facebook, Uber and the NHS. When we speak she is about to rehearse an online event for 47,000 people; they’ve tested the site with an audience of 170,000.
The question is, do virtual conferences work? And do we really need to replicate awkward IRL networking experiences while adding to our digital cognitive load? As with much of the so-called future of work, it’s still early days, both for the tech and, perhaps, for its users. This was painfully evident at the Tory party’s virtual conference last October, which was plagued by technical glitches, and criticised by everyone from attendees who couldn’t log on and speakers who had no audiences, to thinktanks and exhibitors who paid for virtual pitches, at least one of whom reportedly requested a refund. At the time, MP Tim Loughton told PoliticsHome: “My first fringe meeting, we had to wait over 10 minutes for the panel to be let in; then we were all cut off and had to be sent a new link, meaning we started again almost half an hour later… [Then] it turned out in the first part we had just been talking to ourselves and there was no audience.”
A slicker attempt at recreating in-person networking has been made by the Virtulab, a British digital technology company that has developed an immersive virtual venue rather like a digitised version of the Edinburgh International Conference Centre. It can be hired in exactly the same way, and already has been by TEDx events and the Institute of People Management. But as an avatar version of me strolls through the cavernous digital hall on my laptop screen, my non-gamer head is spinning. There are realistic-looking bot people on hand to help me if I get stuck, booths to walk into – just as at a real trade show – staffed by other avatar people who I can speak to in real time (with or without video). There are speed-networking zones and branded video screens on the walls. I can chat with the avatar people I pass and walk around the venue, or teleport between different areas. There are auditoriums where speakers can present to an avatar audience either as their avatar selves or via live video links.
The experience is pretty smooth, if disconcerting – it’s strange not knowing who any of the avatars around me might be (or if they have people attached to them at all – the auditorium auto-populates to fill all the seats, so no one has to give a talk to an empty room). But isn’t one of the great things about being forced to work from home that we no longer have to go to corporate spaces like this? Perhaps I’m a misanthrope, but I like no longer having to visit exhibition centres several times a year. (I write a lot about hospitality and, pre-Covid, often travelled to the ExCeL centre in London’s Docklands to attend expos about things like packaging, food technology or free-from foods.) I can see how this would be great for brands and event organisers, but I’m not totally sold that it’s good for the rest of us.
Dave Cummins, executive director at the Virtulab, disagrees. For him, this isn’t a temporary fix while we wait for the pandemic to blow over. “We see this from an eco perspective, via the reduction in travel – there is a cost in server burn, but it’s nowhere near what you get from an event.”
If a virtual reality conference sounds a bit out there, imagine logging into a virtual reality office every day, from home – another Virtulab offering. If you’re yearning to get back to the office – with its random conversations and predictable routines – this could be your answer. Although subscribers can build any office they want, the immersive version I visited, via my laptop screen, created for two clients, an events company and a petrochemical company, looked exactly like a normal, grey office building. It’s as if they got their best designers to perfectly recreate a business park in Reading.
Unlike conventional remote-work platforms, this one also uses lifelike avatars: mine arrives at the building and walks along a corridor, before opening a door, entering an office and choosing a desk. If I was working here for real, I’d be able to access things like my company’s storage drives, too. “The idea is that you come into the platform, open up your browser and start using it just like this is your office,” Cummins says. “If you’re not in a meeting, you can open the door so avatars can just walk in. We’re trying to empower that water-cooler moment. If you would come and see me at 10 o’clock in the morning in real life, then you would come and see me here.”
Businesses such as Green Building Council SA, an association for green companies, and Al Laith Dubai, an events company, are early adopters of the Virtulab. (Other organisations are working on VR offices: Facebook is developing a remote office requiring a VR headset, slated to launch later this year.) For me, the best part is that it recreates access to colleagues: as long as they’re logged on and available, you can talk – as the avatar, and with your voice rather than video – whenever you fancy, with no need to create a link or calendar invite. There could be downsides, though. A virtual office can create the expectation that you will be digitally present for a traditional eight-hour day, robbing homeworkers of the flexibility they have enjoyed in recent months. Remote-work tools and platforms could easily shade into digitally surveilling employees, even if only in terms of tracking how long you are at your computer. (As well as raising multiple privacy issues, this can be detrimental to engagement and retention: a 2017 study showed that monitoring makes employees feel their organisation is unethical.) “One of the best ways for a business to create an insider threat – people who will attack your company from within, whether maliciously or through negligence – is failing to trust your staff,” Bore tells me. “When people feel constrained, they will find ways around it. When they feel trusted and accountable for what they’re doing, you prevent insider threats – not by saying you must be at your online desk from nine to five.”
As many as one in five businesses already use surveillance software to monitor staff as they work from home, including French company Teleperformance, which employs 380,000 people in 34 countries. In March, it launched TP Observer, a webcam security system that uses AI to watch home-working call centre staff, track unauthorised phone usage or “unknown persons” appearing at the desk, and send screenshots to supervisors. The company insists that webcams for UK staff would be voluntary, used only for meetings, training and pre-scheduled desk checks, and never for random surveillance, but adds that levels of scrutiny will vary in other countries.
Of course, you don’t necessarily need new tech to watch your staff – Microsoft Teams, for instance, logs screen minutes, number of calls, chats or meetings, collating them into a handy graph for managers.
The Virtulab doesn’t expect its remote-office platform to be used to track staff attendance (although that’s up to the end user). But it does want to keep you in its virtual world. “We’re looking at gamification,” Cummins says. “During your lunch break, you grab a sandwich and come back to your desk, and race cars, or play golf, or do an escape room. It’s a chance to team-build, and get away from the monotony.” He says there are also art galleries and gardens to amble around, though I think I’d rather spend my lunch break in an actual park. Could this increase employees’ screen time? “We do our own health and safety assessments as a company – seating positions, desks, chairs and so on. For remote work, it’s an employer who would be taking this package, so it’s their responsibility to ensure screen time is being monitored and assessed.”
These platforms are meant to improve remote work, but is a virtual experience that fills the entire day better or worse than spending a couple of hours on video calls but being otherwise generally invisible? “Employers probably want to help people gel, but they risk trying to do too much,” says Dr Linda Kaye, who studies the psychology of gaming and online behaviour. “I’m not saying it’s not useful in a work context, but when you force it on people it becomes inauthentic.” Her research reflects the fact that valuable social connections can be forged online. But just because we can create virtual worlds to work in, should we?
Ellie Gibson, a games journalist and host of the Extra Life gaming podcast, is enthusiastic about avatar games where she gets to create an alter ego. “I play as a 7ft tall Viking called Avril who is nothing like me. I wouldn’t want to be myself, a 43-year-old woman from Catford.” She worries about the implications of coming up with an avatar version of yourself in a working environment – where presumably the expectation is that you try to represent yourself realistically. “For people who have issues with body image, I can imagine this being anxiety-inducing. If you’re a larger person thinking: ‘I’m going to a meeting, and I’m supposed to create this avatar of myself. How am I supposed to do that?’ Would there be a temptation to make yourself look fatter, to be the first to make a joke of it?”
“That’s why the avatar isn’t a 360 capture of your body,” Cummins says. “It can look like you or someone else. If you’ve got a harsh workplace, it could be an issue.”
Much depends on the type of workplace you’re in – its culture and the sector in which it operates. While Hubs, the platform used by the engineers at the University of Nottingham, could work brilliantly for design, technology or architectural businesses, I’m not sure I can see social workers holding a case conference in a virtual world. Would it feel appropriate for a legal firm dealing with serious crimes to hold their meetings as avatar versions of themselves on Gather? Similarly, it’s hard to imagine holding a disciplinary session as a cartoon version of yourself. For some teams and clients, working in a virtual office could feel even more torturous than video calling already is.
Aspects of the online work boom will inevitably disappear as pandemic restrictions ease and we are able to pick and choose rather than being forced online. It could be that platforms with fewer frills prove more enduring. One online space that has exploded since launching in spring 2020 is the invite-only social audio app Clubhouse, which already claims to have 10 million users. Social audio is exactly the same as social media – you follow individuals and join groups – but with live speech rather than text or images. Clubhouse is a simple platform where users create “rooms”, to have real-time, audio-only conversations about anything they want; Twitter is close behind with its new creation, Spaces.
Michael Liskin is an LA-based virtual facilitation expert who has worked as a beta tester for social audio apps. “The next big thing isn’t as fancy as we might think,” he tells me. “There is potential for social audio to provide a kind of middle ground, one between fatiguing video conferencing and text-based interaction like Slack, which can be labour intensive and not as intimate.” Rather than using virtual-world platforms, he is helping teams connect using Clubhouse. “There will soon be a bunch of social audio apps optimised for happy hours, workshops, team-building, book clubs, mentorship and much more. Social audio fosters intimacy.” And, as Liskin points out, because it’s audio rather than video, “it can be in your pocket while you’re out on a bike”. There are even rooms on Clubhouse where people meet to work, mainly in silence, collectively but remotely.
Back on gather.town, we’ve moved on to the pub, much as you might at the end of a traditional working day, and a tiny snowman avatar is playing – really – the pub’s piano. Bore has no intention of calling time on his pub once the real ones reopen. “It has put me in contact with people in my field who I would never have been able to reach otherwise, people from all over the world,” he says. It’s still going to sound mad to most people though, isn’t it? “It’s almost impossible to explain unless you’re doing it,” he laughs. “The moment you’re in here, it immediately makes sense.”
David Eagleman, 50, is an American neuroscientist, bestselling author and presenter of the BBC series The Brain, as well as co-founder and chief executive officer of Neosensory, which develops devices for sensory substitution. His area of speciality is brain plasticity, and that is the subject of his new book, Livewired, which examines how experience refashions the brain, and shows that it is a much more adaptable organ than previously thought.
For the past half-century or more the brain has been spoken of in terms of a computer. What are the biggest flaws with that particular model? It’s a very seductive comparison. But in fact, what we’re looking at is three pounds of material in our skulls that is essentially a very alien kind of material to us. It doesn’t write down memories, the way we think of a computer doing it. And it is capable of figuring out its own culture and identity and making leaps into the unknown. I’m here in Silicon Valley. Everything we talk about is hardware and software. But what’s happening in the brain is what I call livewire, where you have 86bn neurons, each with 10,000 connections, and they are constantly reconfiguring every second of your life. Even by the time you get to the end of this paragraph, you’ll be a slightly different person than you were at the beginning.
In what way does the working of the brain resemble drug dealers in Albuquerque? It’s that the brain can accomplish remarkable things without any top-down control. If a child has half their brain removed in surgery, the functions of the brain will rewire themselves on to the remaining real estate. And so I use this example of drug dealers to point out that if suddenly in Albuquerque, where I happened to grow up, there was a terrific earthquake, and half the territory was lost, the drug dealers would rearrange themselves to control the remaining territory. It’s because each one has competition with his neighbours and they fight over whatever territory exists, as opposed to a top-down council meeting where the territory is distributed. And that’s really the way to understand the brain. It’s made up of billions of neurons, each of which is competing for its own territory.
You use this colonial image a lot in the book, a sense of the processes and struggles of evolution being fought out within the brain itself. That’s exactly right. And I think this is a point of view that’s not common in neuroscience. Usually, when we look in a neuroscience textbook, we say here are the areas of the brain and everything looks like it’s getting along just fine. It belongs exactly where it is. But the argument I make in the book is, the only reason it looks that way is because the springs are all wound tight. And the competition for each neuron – each cell in the brain to stay alive against its neighbours – is a constantly waged war. This is why when something changes in the brain, for example, if a person goes blind, or loses an arm or something, you see these massive rearrangements that happen very rapidly in the brain. It’s just as the French lost their territory in North America because the British were sending more people over.
One of the great mysteries of the brain is the purpose of dreams. And you propose a kind of defensive theory about how the brain responds to darkness. One of the big surprises of neuroscience was to understand how rapidly these takeovers can happen. If you blindfold somebody for an hour, you can start to see changes where touch and hearing will start taking over the visual parts of the brain. So what I realised is, because the planet rotates into darkness, the visual system alone is at a disadvantage, which is to say, you can still smell and hear and touch and taste in the dark, but you can’t see any more. I realised this puts the visual system in danger of getting taken over every night. And dreams are the brain’s way of defending that territory. About every 90 minutes a great deal of random activity is smashed into the visual system. And because that’s our visual system, we experience it as a dream, we experience it visually. Evolutionarily, this is our way of defending ourselves against visual system takeover when the planet moves into darkness.
Another mystery is consciousness. Do you think we are close to understanding what consciousness is and how it’s created? There’s a great deal of debate about how to define consciousness, but we are essentially talking about the thing that flickers to life when you wake up in the morning. But as far as understanding why it happens, I don’t know that we’re much closer than we’ve ever been. It’s different from other scientific conundrums in that what we’re asking is, how do you take physical pieces and parts and translate that into private, subjective experience, like the redness of red, or the pain of pain or the smell of cinnamon? And so not only do we not have a theory, but we don’t really know what such a theory would look like that would explain our experience in physical or mathematical terms.
You predict that in the future we’ll be able to glean the details of a person’s life from their brains. What would that mean in terms of personal privacy and liberty? Oh, yeah, it’s going to be a brave new world. Maybe in 100 years, maybe 500, but it’ll certainly happen. Because what we’re looking at is a physical system that gets changed and adjusted based on your experiences. What’s going on with the brain is the most complex system we’ve ever come across in our universe but fundamentally it’s physical pieces and parts and, as our computational capacities are becoming so extraordinary now, it’s just a countdown until we get there. Do we get to keep our inner thoughts private? Almost certainly we will. You can’t stick somebody in a scanner and try to ask them particular kinds of questions. But again, this will happen after our lifetime, so it’s something for the next generations to struggle with.
Do you think in the future that we’ll be able to communicate just by thinking? Communication is a multi-step process. And so in answering your questions, I have many, many thoughts. And I’m getting it down to something that I can say that will communicate clearly what I intend. But if you were to just read my thoughts and say, “OK, give me the answer,” it would be a jumble of half-sentences and words and some random thought, like, Oh, my coffee is spilling. It’s like you wouldn’t want to read somebody’s book that hasn’t been polished by them over many iterations, but instead is burped out of their brain.
What are your views on Elon Musk’s Neuralink enterprise, which is developing implantable brain-machine interfaces? There’s nothing new about it insofar as neuroscientists have been putting electrodes in people’s brains for at least 60 years now. The advance is in his technology, which is making the electrodes denser and also wireless, although even that part’s not new. I think it will be very useful in certain disease states, for example, epilepsy and depression, to be able to put electrodes directly in there and monitor and put activity in. But the mythology of Neuralink is that this is something we can all use to interface faster with our cellphones. I’d certainly like to text 50% faster, but am I going to get an open-head surgery? No, because there’s an expression in neurosurgery: when the air hits your brain, it’s never the same.
You didn’t start out academically in neuroscience. What led you there? I majored in British and American literature. And that was my first love. But I got hooked on neuroscience because I took a number of philosophy courses. I found that we’d constantly get stuck in some philosophical conundrum. We’d spin ourselves into a quagmire and not be able to get out. And I thought, Wow, if we could understand the perceptual machinery by which we view the world, maybe we’d have a shot at answering some of these questions and actually making progress. When I finally discovered neuroscience, I read every book in the college library on the brain – there weren’t that many at the time – and I just never looked back.
How can we maximise our brain power, and what do you do to switch off? There’s this myth that we only use 10% of our brain that, of course, is not true. We’re using 100% of our brain all the time. But the way information can be digested and fed to the brain can be very different. I think the next generation is going to be much smarter than we are. I have two small kids, and any time they want to know something, they ask Alexa or Google Home, and they get the answer right in the context of their curiosity. This is a big deal, because the brain is most flexible when it is curious about something and gets the answer. Regarding switching off, I never take any downtime and I don’t want to. I have a very clear sense of time pressure to do the next things. I hope I don’t die young, but I certainly act as though that is a possibility. One always has to be prepared to say goodbye, so I’m just trying to get everything done before that time.
When designing systems that our businesses will rely on, we do so with resilience in mind.
Twenty-five years ago, technologies like RAID and server mirroring were novel and, in some ways, non-trivial to implement; today this is no longer the case and it is a reflex action to procure multiple servers, LAN switches, firewalls, and the like to build resilient systems.
This does not, of course, guarantee us 100 per cent uptime. The law of Mr Murphy applies from time to time: if your primary firewall suffers a hardware failure, there is a tiny, but non-zero, chance that the secondary will also collapse before you finish replacing the primary.
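That “tiny, but non-zero” chance is easy to put a rough number on. Here is a back-of-envelope sketch in Python, using entirely hypothetical MTBF and repair-window figures (the real numbers for your kit will differ):

```python
# Rough odds that the secondary firewall also dies while the primary
# is being replaced. Assuming independent failures at a constant rate,
# the probability of a failure inside a short window is approximately
# window / MTBF (valid when window << MTBF).
MTBF_HOURS = 50_000        # hypothetical mean time between failures
REPAIR_WINDOW_HOURS = 24   # hypothetical time to swap out the primary

p_secondary_dies = REPAIR_WINDOW_HOURS / MTBF_HOURS
print(f"{p_secondary_dies:.5f}")  # prints 0.00048 – tiny, but non-zero
```

Roughly one chance in two thousand per incident, on these made-up figures: small enough to accept in most businesses, large enough that it will eventually happen to someone.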
If you have a power failure, there is a similarly micro-tangible likelihood that the generator you have tested weekly for years will choose this moment to cough stubbornly rather than roaring into life. That is, unless you are (or, more accurately, the nature of your business is) so risk-averse that you can justify spending on further levels of resilience to reduce the chance of an outage even more (though never, of course, to nothing).
There are occasions, though, where planning for failure becomes hard.
Let us look at a recent example. In July 2020, the main telco in Jersey had a major outage because of a problem with a device providing time service to the organisation’s network. The kicker in this event was that the failed device did not fail in the way we are all used to – by making a “bang” noise and emitting smoke; had it done so, in fact, all would have been well as the secondary unit would have taken over.
No, this was a more devious kind of time server which only part-failed. It kept running but started serving times from about 20 years in the past (by no coincidence at all this was the factory default time setting), thus confusing network infrastructure devices and causing traffic to stop flowing.
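A part-failure like this can be guarded against with a plausibility check on each time sample before it is trusted. The sketch below is illustrative only – the function name and the five-minute threshold are assumptions, not the telco's actual tooling – but it shows the principle: a source reporting a time decades adrift should trigger an alarm, not a clock correction.

```python
import datetime

# Maximum offset we will accept from any time source before raising an
# alarm instead of applying the correction. A genuinely dead server is
# easy to spot; one quietly serving its factory-default epoch is not,
# so we bound how far any single sample may move the clock.
MAX_SANE_OFFSET = datetime.timedelta(minutes=5)

def offset_is_sane(reported: datetime.datetime,
                   local: datetime.datetime) -> bool:
    """Return True if the reported time is plausibly correct.

    A source that part-fails and reverts to its factory default will
    report a time years adrift; reject it rather than follow it.
    """
    return abs(reported - local) <= MAX_SANE_OFFSET

local = datetime.datetime(2020, 7, 1, 12, 0, 0)
healthy = datetime.datetime(2020, 7, 1, 12, 0, 2)   # 2 s of skew: fine
reverted = datetime.datetime(2000, 1, 1, 0, 0, 0)   # factory-default epoch

print(offset_is_sane(healthy, local))   # True
print(offset_is_sane(reverted, local))  # False
```

Real NTP implementations apply a similar idea (a "panic threshold" beyond which they refuse to step the clock); the point here is simply that the check must exist before the failure mode has been imagined.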
Customer dissatisfaction was palpable, of course, but as an IT specialist one does have to feel something for the company’s technical team: how many of us would ever consider, as a possible failure case, something that the technical chief described quite correctly as a “sequence of events that was almost impossible to foresee”?
(Incidentally, in a somewhat more good-news story, stepping back a moment to our point about extra layers of resilience, the same company had previously survived three offshore cables being severed… by having a fourth).
Could monitoring tools have been put in place to spot issues like this as they happen? Yes, absolutely – but to do so one would first need to identify such scenarios as things that could happen. From a risk management perspective, this type of failure – very high impact but infinitesimally unlikely – is the worst possible kind. There are theories and books about how one can contemplate and deal with such risks, the best known probably being Nassim Nicholas Taleb's The Black Swan, which addresses exactly this kind of risk. But if you want to defend against the unexpected then, at the very least, you need to sit down with a significant number of people in a highly focused way, preferably with an expert in the field to guide and moderate, and work on identifying such possible "black swan" events.
While the black swan concept is most definitely a thing to bear in mind, there is in fact a far more common problem with systems that we consider resilient – a failure to understand how the resilience works.
One particular installation at a company with an office and two data centres had point-to-point links in a triangle between each premises, and each data centre had an internet connection. The two firewalls, one in each data centre, were configured as a resilient pair, and worked as such for years. One day internet service went down, and investigation showed that the secondary unit had lost track of the primary and had switched itself to become the primary. Having two active primaries caused split traffic flows, and hence an outage.
In hindsight, this was completely predictable. The way the primary/secondary relationship was maintained between the devices was for the primary to send a “heartbeat” signal to the secondary every few seconds; if the secondary failed to receive the heartbeat three times, it woke up and acted as a primary. Because the devices were in separate data centres, they were connected through various pieces of technology: a LAN patch cord into a switch, into a fibre transceiver, into a telco fibre, then the same in reverse at the other end.
A fault on any one of those elements could cause the network devices to reconfigure their topology to switch data over the other way around the fibre triangle – with the change causing a network blip sufficiently long to drop three heartbeats. In fact, the only approved configuration for the primary/secondary interconnection was a crossover Ethernet cable from one device to the other: the failover code was written with the assumption that, aside perhaps from a highly unlikely sudden patch cord fault, the primary becoming invisible to the secondary meant that the former had died.
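The promotion logic described above fits in a few lines, which is exactly why its hidden assumption is easy to miss. This sketch is hypothetical (the class and names are mine, not the vendor's), but the counting behaviour is what makes a multi-hop heartbeat path dangerous: the secondary cannot distinguish a dead primary from a network blip.

```python
MISSED_HEARTBEAT_LIMIT = 3

class Secondary:
    """Standby firewall that promotes itself after missed heartbeats.

    The failover code assumes a missing heartbeat means the primary is
    dead -- true over a crossover cable, false over a multi-hop link
    that can blip while the primary is perfectly healthy.
    """
    def __init__(self):
        self.missed = 0
        self.role = "secondary"

    def on_heartbeat_interval(self, heartbeat_received: bool):
        if heartbeat_received:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= MISSED_HEARTBEAT_LIMIT:
                # Split brain: if the real primary is still alive on the
                # far side of the blip, there are now two primaries.
                self.role = "primary"

standby = Secondary()
# A brief topology reconvergence drops three heartbeats in a row...
for _ in range(3):
    standby.on_heartbeat_interval(False)
print(standby.role)  # prints "primary"
```

Over the vendor-approved crossover cable, the "three missed heartbeats means death" inference is sound; stretched across two data centres, every component in between becomes a way to trigger it falsely.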
Many of us have come across similar instances, where something we expected to fail over has not done so. It’s equally common, too, to come across instances where the failover works OK but then there are issues with the failback, which can be just as problematic. I recall a global WAN I once worked on where, for whatever reason, failovers from primary to secondary were so quick that you didn’t notice any interruption (the only clue was the alert from the monitoring console) but there was a pause of several seconds when failing back.
In the firewall example, even when connectivity was restored the devices would not re-synch without a reboot: remember, the only supported failure scenario was the primary dying completely, which meant that it was only at boot time that it would check to see which role its partner was playing so it could act accordingly. Until someone turned it off and back on again, there was no chance that the problem would go away.
To make our resilient systems truly resilient, then, we need to do three things.
First, we should give some thought to those “black swan” events. It may be that we cannot afford masses of time and effort to consider such low-probability risks, but at the very least we should take a conscious decision on how much or how little we will do in that respect: risk management is all about reasoning and making conscious decisions like that.
Second, if we don’t have the knowledge of the precise way our systems and their failover mechanisms work, we must engage people who do and get the benefit of their expertise and experience… and while we’re at it, we should read the manual: nine times out of ten it will tell us how to configure things, even if it doesn’t explain why.
Finally, though, we need to test things – thoroughly and regularly. In our firewall example all potential failure modes should have been considered: if a failure of one of a handful of components could cause an outage, why not test all of them? And when we test, we need to do it for real: we don’t just test failover in the lab and then install the kit in a production cabinet, we test it once it’s in too.
This may need us to persuade the business that we need downtime – or at least potential downtime to cater for the test being unsuccessful – but if management have any sense, they will be persuadable that an approved outage during a predictable time window with the technical team standing by and watching like hawks is far better than an unexpected but entirely foreseeable outage when something breaks for real and the resilience turns out not to work.
Oh, and when you test failover and failback, run for several days in a failed-over state if you can: many problems don’t manifest instantly, and you will always learn more in a multi-day failover than in one that lasts only a couple of minutes. Bear in mind also the word “regularly” that I used alongside “thoroughly”. Even if we know there has been no change to a particular component, there may well be some knock-on effect from a change to something else. Something that used to be resilient may have become less resilient or even non-resilient because something else changed and we didn’t realise the implication – so regular resilience testing is absolutely key.
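The "test all of them" approach can be sketched as a small harness that fails each element of a path in turn and records whether service survived. Everything here is an illustrative assumption – `fail_component`, `service_is_up` and the stubbed lab environment stand in for whatever a real change window and test rig allow – but the shape is the point: enumerate the components, break each one, observe.

```python
# The chain of components carrying the heartbeat between data centres
# (from the firewall example); each is a distinct way for failover to
# be triggered or to misbehave, so each deserves its own test.
HEARTBEAT_PATH = [
    "primary patch cord",
    "data-centre A switch",
    "fibre transceiver A",
    "telco fibre",
    "fibre transceiver B",
    "data-centre B switch",
    "secondary patch cord",
]

def run_failover_tests(fail_component, service_is_up, restore_component):
    """Fail each element in turn; record whether service survived."""
    results = {}
    for component in HEARTBEAT_PATH:
        fail_component(component)
        results[component] = service_is_up()
        restore_component(component)
    return results

# Stub environment for illustration: pretend the telco fibre is the
# one weak link whose failure the resilience design does not cover.
broken = set()
def fail(c): broken.add(c)
def restore(c): broken.discard(c)
def up(): return "telco fibre" not in broken

results = run_failover_tests(fail, up, restore)
print(results["telco fibre"])  # False -- this failure mode needs fixing
```

In production the "fail" step is pulling a real cable or powering off a real box during an approved window, and the multi-day soak argued for above replaces the instant `service_is_up()` check.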
Because if something isn’t resilient, this will generally not be because of some esoteric potential failure mode that is next to impossible to anticipate and/or difficult or impossible to test. Most of the time it will because something went wrong – or something was configured wrongly – in a way you could have emulated in a test. ®
A team from Trinity and Queen’s took the top prize at the annual competition for third-level students organised by Enterprise Ireland.
Students who developed a handheld haptic device to help people feel the energy of sports matches have received the top prize at this year’s Student Entrepreneur Awards.
Field of Vision was created by Trinity College Dublin students Tim Farrelly and David Deneher, along with Omar Salem from Queen’s University Belfast.
The device aims to enable people with blindness or visual impairment to better experience sports games. It uses artificial intelligence to analyse live video feeds of games, translating what’s happening on screen to tablet devices through haptic feedback.
Field of Vision was one of 10 finalists in the competition, which is organised annually by Enterprise Ireland. The student team has won a €10,000 prize and will receive mentoring from Enterprise Ireland to develop the commercial viability of the device.
But there were several other winners at the awards ceremony, which took place virtually today (11 June).
Marion Cantillon of University College Cork won a €5,000 high-achieving merit award for her biofilm that eliminates the need for farmers to use plastic or tyres to seal pits and reduces methane emissions.
Dublin City University’s Peter Timlin and University of Limerick’s Richard Grimes also won a high-achieving merit award for their socially responsible clothing brand, Pure Clothing.
Diglot, a language-learning book company founded by Trinity College Dublin students Cian McNally and Evan McGloughlin, took home a €5,000 prize. The company, which has achieved sales in 19 countries to date, weaves foreign words into English sentences in classic novels, allowing the reader to absorb new vocabulary gradually.
Ivan McPhillips, a lecturer in entrepreneurship, innovation and rural development at GMIT, won the Enterprise Ireland Academic Award.
Along with the prize money, the winners will also share a €30,000 consultancy fund to help them to turn their ideas into a commercial reality. Merit awards were given to the remaining six finalists, along with €1,500 per team.
‘Springboard for tomorrow’s business leaders’
This is the 40th year of Enterprise Ireland’s Student Entrepreneur Awards, a competition that is open to students from all third-level institutions across the country.
The winner of last year’s competition was Mark O’Sullivan of University College Cork, who developed a device to help detect brain injuries in newborns.
Leo Clancy, CEO of Enterprise Ireland, said the competition provides a platform for students to showcase their business ideas and acts as a “springboard for tomorrow’s business leaders”.
“Previous winners and finalists have gone on to achieve success both nationally and internationally,” he added.
“We’ve had over 250 entries for this year’s awards, with applicants demonstrating ingenuity in their approach to solving real-world problems across a range of sectors.”
Tánaiste and Minister for Enterprise, Trade and Employment Leo Varadkar, TD, congratulated the winners. “I’m really impressed by the calibre and ingenuity of the ideas put forward, especially given the significant challenges that came with this unprecedented year,” he said.