Always take the weather with you: 100 years of forecasting broadcasts

Exactly 100 years ago today, at 10.05am on 26 April 1921, an unassuming cleric and academic, Rev William F Robison, the president of St Louis University, made history as the first person in the world to broadcast a weather report. He was launching the university’s own radio station, WEW, and followed some opening remarks with a 500-word meteorological bulletin.

Weather forecasting in Britain actually began 60 years before, when the Meteorological Office, a department within the Board of Trade founded to predict storms and limit loss of life at sea, began to supply the Times with weather reports in 1861. The shipping forecast was launched in 1867, when information about marine conditions was telegraphed to ports and harbours all round the UK coast.

When most of us think of the weather forecast, though, we tend to think of television and a presenter standing in front of a weather map. The first TV forecasts, on the BBC in 1936, featured rudimentary hand-drawn maps, with an off-screen narration by someone almost certainly wearing black tie.

It wasn’t until 1954 that the weather was given a face – that of George Cowling, who stood in front of the (still hand-drawn) BBC weather map and gave his predictions. Cowling, a man unaccustomed to the limelight, was more interested in the weather than being on TV, and joined the RAF as a military meteorologist in 1957.

Carol Kirkwood, joint favourite weather presenter with Tomasz Schafernaker in a Radio Times poll. Photograph: Jeff Spicer/Getty Images

Over the years, very few weather presenters have been employed by the BBC. Most have been with the Met Office. This has not always been the case with other broadcasters. The little-lamented tabloid channel L!VE TV, for example, was less interested in meteorological credibility, hence its decision to broadcast the weather in Norwegian. This may have been a tribute to Vilhelm Bjerknes (1862-1951), a physicist and one of the founding fathers of meteorology, but it’s just possible that it had more to do with the young, blond, female presenters. Even today, it doesn’t take long online to find endless pages devoted to ranking the world’s hottest weather presenters. Will the drizzle be any less drizzly if we’re told about it by someone in tight clothes?

It all seems a far cry from the homely charms of Michael Fish, a man so prominent in the national consciousness that one year he won the titles both of Britain’s best dressed man and Britain’s worst dressed man. Today, Fish is rather cruelly remembered as the man who failed to predict the great storm of 1987, a claim he strenuously denies.

Old school, Michael Fish in his heyday. Photograph: REX/Shutterstock

Each generation has its memorable weather presenters. For me, the forecast will always be Fish, or the breathlessly enthusiastic Ian McCaskill, armed with their magnetic symbols that, with luck, would stick to the spot on the map where they put them. Today’s favourites, according to a recent Radio Times poll, are Carol Kirkwood and the famous finger of forecasting himself, Tomasz Schafernaker. Although, judging by recent events, perhaps Alex Beresford might be in with a shout now.

Just as the forecasters change, so too does the weather backdrop. Gone are the magnetic clouds that replaced the old hand-drawn weather maps. Now digital technology has given us satellite images and CGI.

In a 2017 podcast, weather forecaster Peter Gibb recalled how predicting conditions made a big leap forward in the 1980s thanks to a new supercomputer. This processing behemoth had roughly a third of the power of a modern smartphone. Today, the Met Office uses the Cray XC40, one of the most powerful computers in meteorology, capable of performing 14,000 trillion arithmetic operations every second. Even that, though, is shortly to be rendered obsolete. Last week, the Met Office announced that it is to partner with Microsoft to build the most powerful weather computer in the world, twice as powerful as any other computer in the UK.

New wave… Tomasz Schafernaker, favourite of Radio Times readers, along with Carol Kirkwood. Photograph: BBC Weather

Even without this new processing titan, modern forecasts are so precise that they even factor in variables such as soil type and whether the leaves are on the trees. The result of all this gadgetry is more accurate forecasts than ever before. Now, four-day forecasts are as accurate as one-day forecasts were 30 years ago. That said, long-term forecasting is still a fool’s game. Just look at the plethora of “Three months of blizzards” headlines certain newspapers churn out on quiet days, based on the sensationalist hypothesis of a fantasist with a ZX Spectrum.

Paradoxically, though, greater accuracy might spell the end for weather broadcasts. The ability to get a prediction not just for your region, but specifically for your city, town or even your village, is an extraordinary leap forward. But it’s not one that you’re likely to benefit from on a national broadcast. If I lived in Mayfair (a guy can dream, right?), Schafernaker might be able to tell me what the weather will be like in London and the south-east, but any number of apps will tell me what will happen in Mayfair every hour for the next few days. That rather renders the weather broadcast defunct.

In short, then, 100 years after the world’s first broadcast weather report, the outlook for weather forecasts on TV and radio could best be described as distinctly unsettled.

The telehealth revolution is here to stay – and here’s what’s coming next

Webdoctor CEO David Crimmins offers up his insights into the growth of telehealth in Ireland and worldwide.

The pandemic has resulted in an unprecedented shift to healthcare being delivered outside of traditional clinical settings. While businesses and industries across the world were forced to pivot their services or close their doors over the last two years, the pandemic created an opportunity for the telehealth sector as patient demand for virtual healthcare soared.

Digital health offerings are not new services per se. In fact, Webdoctor was established in 2013. And whilst telemedicine was already on the rise before Covid-19, the pandemic shone a spotlight on the sector.

Recent reports show the global market is projected to grow to $185.6bn by 2026, with 83pc of patients saying they expect to use telemedicine post pandemic. We’ve already seen an indication of this in the Irish marketplace with the demand for Webdoctor consultations up 226pc in 2021 compared to 2019 – the last full year before the pandemic.

This trend is backed up by another recent report, which surveyed hundreds of clinicians around the world. More than half (56pc) of doctors surveyed predicted that they will make most of their clinical decisions using artificial intelligence tools within the next 10 years.

Global trends

With the telehealth space evolving at a rapid pace both domestically and internationally, digital healthcare platforms and technologies are fast becoming much more than just a convenient alternative.

Mirroring global trends in the telehealth sector, results from the latest National Health Watch report conducted by Webdoctor illustrate that while the demand for online GP services may have increased out of necessity due to Covid-19, it is now the preferred service option for the majority.

For example, given the choice, 60pc of people would prefer to use an online GP or prescription service instead of going to an in-person consultation for general health concerns. This figure rises when it comes to specific concerns such as erectile dysfunction (85pc), hair loss (70pc) or sexual health checks (77pc).

This demand, combined with lengthy waiting times for in-person GP appointments, is driving mass growth for online GP and prescription services like Webdoctor and other health-tech platforms.

Telemedicine also offers employers a real opportunity to implement digital healthcare offerings as part of their employee benefits strategies. A recent study from Mercer revealed that 68pc of employers globally expect to increase their investment in digital health and wellbeing, while 40pc of employees say they would be more likely to stay with a company that offers digital health services. By looking after the wellbeing of your workforce through these benefits, you are contributing to the overall long-term success of your business.

In addition, employers in traditional healthcare businesses such as GP practices and pharmacies should seize the opportunity to expand and implement new telemedicine technology where possible. The sector is constantly evolving, and using digital tools to complement traditional care offers them the opportunity to broaden their current offering, improve patient care and potentially increase profits.

Remote monitoring with wearables

So, given the swift pace of progress within the sector, what innovations are coming down the track?

Wearable technology has become a regular part of our everyday lives and is significantly changing how we collect and analyse health-related data. These devices range from smartwatches to virtual at-home health monitors such as Pulsewave, a modern alternative to the traditional arm cuff to measure blood pressure.

A key benefit of wearable sensors is that, by providing real-time data and enabling people to track their progress, they encourage patients to take a more active role in their health. This is something everyone could gain from.

As more digital healthcare platforms incorporate remote patient monitoring using wearable technology, the richer data could help create more accurate diagnoses and, ultimately, better patient treatment and outcomes.

Increased patient autonomy

Digital healthcare platforms can give patients direct, instant access to their medical records or provide them with self-tracking devices. This gives people the opportunity to take control of their health.

As the sector continues to evolve, patient autonomy is likely to continue to increase. While this is a positive outcome for patients, it will be important not to lose the personal interaction and relationship side of traditional medicine as it progresses.

Effective, integrated telehealth services are more than just GPs behind a computer screen. They essentially act as a virtual gateway to the healthcare system, providing easily accessible, affordable medical advice and a positive patient experience, which ultimately improves the patient and GP relationship.

At Webdoctor, our mantra is to “allow clinicians to operate at the top of their licence” by reducing unnecessary administrative processes and freeing up their time to focus on patient outcomes. The future of this sector will see hybrid models emerge, and the key to success for health-tech platforms and medical practices alike will be to recognise this and integrate telemedicine into their patients’ care and journey.

What’s also evident is that there is much more growth and development still to come for the telehealth sector. We will see the continued integration of telemedicine and online GP services into everyday life.

Health professionals are excited to explore what the post-pandemic future of telehealth looks like and patients will ultimately benefit. Telehealth, with its flexibility, innovation and convenience, is most definitely here to stay.

By David Crimmins

David Crimmins is the CEO of Webdoctor, a telehealth service that has carried out over 100,000 patient consultations in Ireland.

‘A catastrophic failure’: computer scientist Hany Farid on why violent videos circulate on the internet

In the aftermath of yet another racially motivated shooting that was live-streamed on social media, tech companies are facing fresh questions about their ability to effectively moderate their platforms.

Payton Gendron, the 18-year-old gunman who killed 10 people in a largely Black neighborhood in Buffalo, New York, on Saturday, broadcast his violent rampage on the video-game streaming service Twitch. Twitch says it took down the video stream in mere minutes, but that was still enough time for people to create edited copies of the video and share them on other platforms including Streamable, Facebook and Twitter.

So how do tech companies work to flag and take down videos of violence that have been altered and spread on other platforms in different forms – forms that may be unrecognizable from the original video in the eyes of automated systems?

On its face, the problem appears complicated. But according to Hany Farid, a professor of computer science at UC Berkeley, there is a tech solution to this uniquely tech problem. Tech companies just aren’t financially motivated to invest resources into developing it.

Farid’s work includes research into robust hashing, a tool that creates a fingerprint for videos that allows platforms to find them and their copies as soon as they are uploaded. The Guardian spoke with Farid about the wider problem of barring unwanted content from online platforms, and whether tech companies are doing enough to fix the problem.

This interview has been edited for length and clarity. Twitch, Facebook and YouTube did not immediately respond to a request for comment.

Twitch says that it took the Buffalo shooter’s video down within minutes, but edited versions of the video still proliferated, not just on Twitch but on many other platforms. How do you stop the spread of an edited video on multiple platforms? Is there a solution?

It’s not as hard a problem as the technology sector would have you believe. There are two things at play here. One is the live video: how quickly it could and should have been found, and how we limit distribution of that material.

The core technology to stop redistribution is called “hashing” or “robust hashing” or “perceptual hashing”. The basic idea is quite simple: you have a piece of content that is not allowed on your service, because it violates your terms of service, it’s illegal, or for whatever other reason. You reach into that content and extract a digital signature, or a hash as it’s called.

This hash has some important properties. The first one is that it’s distinct. If I give you two different images or two different videos, they should have different signatures, a lot like human DNA. That’s actually pretty easy to do. We’ve been able to do this for a long time. The second part is that the signature should be stable even if the content is being modified, when somebody changes say the size or the color or adds text. The last thing is you should be able to extract and compare signatures very quickly.

So if we had a technology that satisfied all of those criteria, Twitch would say, we’ve identified a terror attack that’s being live-streamed. We’re going to grab that video. We’re going to extract the hash and we are going to share it with the industry. And then every time a video is uploaded, its signature is compared against this database, which is being updated almost instantaneously. And then you stop the redistribution.
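To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline Farid describes. It computes a simple “difference hash” of a single video frame with the Pillow imaging library and compares it against a shared list of signatures by Hamming distance; real systems use far more robust perceptual hashes and operate on whole videos, so the hash function, the threshold and the placeholder database below are illustrative assumptions only, not any platform’s actual method.

from PIL import Image

def dhash(frame: Image.Image, hash_size: int = 8) -> int:
    # Shrink to (hash_size + 1) x hash_size greyscale pixels, then record
    # whether each pixel is brighter than its right-hand neighbour.
    small = frame.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits  # a 64-bit signature

def hamming(a: int, b: int) -> int:
    # Number of bits on which two signatures disagree.
    return bin(a ^ b).count("1")

# Stand-in for the shared, industry-wide database of banned signatures.
banned_signatures = {0x3C3E1E1E0E0E0606}  # placeholder value, not real data

def should_block(frame: Image.Image, max_distance: int = 10) -> bool:
    # Flag an upload whose signature is close to any known banned signature.
    sig = dhash(frame)
    return any(hamming(sig, known) <= max_distance for known in banned_signatures)

Because the signature is derived from coarse brightness patterns rather than raw bytes, resizing, re-encoding or lightly editing a frame typically flips only a few bits, so the distance check still matches; that is the “stable” property Farid mentions, and comparing 64-bit integers keeps the lookup fast.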

How do tech companies respond right now and why isn’t it sufficient?

It’s a problem of collaboration across the industry and it’s a problem of the underlying technology. And if this was the first time it happened, I’d understand. But this is not, this is not the 10th time. It’s not the 20th time. I want to emphasize: no technology’s going to be perfect. It’s battling an inherently adversarial system. But this is not a few things slipping through the cracks. Your main artery is bursting. Blood is gushing out a few liters a second. This is not a small problem. This is a complete catastrophic failure to contain this material. And in my opinion, as it was with New Zealand and as it was the one before then, it is inexcusable from a technological standpoint.

But the companies are not motivated to fix the problem. And we should stop pretending that these are companies that give a shit about anything other than making money.

Talk me through the existing issues with the tech that they are using. Why isn’t it sufficient?

I don’t know all the tech that’s being used. But the problem is the resilience to modification. We know that our adversary – the people who want this stuff online – are making modifications to the video. They’ve been doing this with copyright infringement for decades now. People modify the video to try to bypass these hashing algorithms. So [the companies’] hashing is just not resilient enough. They haven’t learned what the adversary is doing and adapted to that. And that is something they could do, by the way. It’s what virus filters do. It’s what malware filters do. [The] technology has to constantly be updated to new threat vectors. And the tech companies are simply not doing that.
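The gap Farid is pointing at can be seen in a toy comparison, again in Python and using made-up values: an exact cryptographic hash of a re-encoded copy shares nothing with the original, whereas a perceptual signature drifts by only a few bits and is still caught by a distance threshold.

import hashlib

original = b"original video bytes"
modified = b"original video bytes, re-encoded with a small overlay"

# Exact matching: any change at all yields a completely different digest.
print(hashlib.sha256(original).hexdigest() == hashlib.sha256(modified).hexdigest())  # False

# Perceptual matching: these two (hypothetical) signatures differ in a single bit,
# so a Hamming-distance threshold still treats them as the same content.
sig_original = 0x3C3E1E1E0E0E0606
sig_modified = 0x3C3E1E1E0E0E0607
print(bin(sig_original ^ sig_modified).count("1") <= 10)  # True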

Why haven’t companies implemented better tech?

Because they’re not investing in technology that is sufficiently resilient. This is that second criterion that I described. It’s easy to have a crappy hashing algorithm that sort of works. But if somebody is clever enough, they’ll be able to work around it.

When you go on to YouTube and you click on a video and it says, sorry, this has been taken down because of copyright infringement, that’s a hashing technology. It’s called Content ID. And YouTube has had this technology forever because in the US, we passed the DMCA, the Digital Millennium Copyright Act that says you can’t host copyright material. And so the company has gotten really good at taking it down. For you to still see copyright material, it has to be really radically edited.

So the fact that not a small number of modifications passed through is simply because the technology’s not good enough. And here’s the thing: these are now trillion-dollar companies we are talking about collectively. How is it that their hashing technology is so bad?

These are the same companies, by the way, that know just about everything about everybody. They’re trying to have it both ways. They turn to advertisers and tell them how sophisticated their data analytics are so that they’ll pay them to deliver ads. But then when it comes to us asking them, why is this stuff on your platform still? They’re like, well, this is a really hard problem.

The Facebook files showed us that companies like Facebook profit from getting people to go down rabbit holes. But a violent video spreading on your platform is not good for business. Why isn’t that enough of a financial motivation for these companies to do better?

I would argue that it comes down to a simple financial calculation that developing technology that is this effective takes money and it takes effort. And the motivation is not going to come from a principled position. This is the one thing we should understand about Silicon Valley. They’re like every other industry. They are doing a calculation. What’s the cost of fixing it? What’s the cost of not fixing it? And it turns out that the cost of not fixing is less. And so they don’t fix it.

Why is it that you think the pressure on companies to respond to and fix this issue doesn’t last?

We move on. They get bad press for a couple of days, they get slapped around in the press and people are angry and then we move on. If there was a hundred-billion-dollar lawsuit, I think that would get their attention. But the companies have phenomenal protection from the misuse and the harm from their platforms. They have that protection here. In other parts of the world, authorities are slowly chipping away at it. The EU announced the Digital Services Act that will put a duty of care [standard on tech companies]. That will start saying, if you do not start reining in the most horrific abuses on your platform, we are going to fine you billions and billions of dollars.

[The DSA] would impose pretty severe penalties on companies, up to 6% of global profits, for failure to abide by the legislation, and there’s a long list of things that they have to abide by, from child safety issues to illegal material. The UK is working on its own digital safety bill that would put in place a duty of care standard that says tech companies can’t hide behind the fact that it’s a big internet, it’s really complicated and they can’t do anything about it.

And look, we know this will work. Prior to the DMCA it was a free-for-all out there with copyright material. And the companies were like, look, this is not our problem. And when they passed the DMCA, everybody developed technology to find and remove copyright material.

It sounds like the auto industry as well. We didn’t have seat belts until we created regulation that required seat belts.

That’s right. I’ll also remind you that in the 1970s there was a car called the Ford Pinto where they put the gas tank in the wrong place. If somebody bumped into you, your car would explode and everybody would die. And what did Ford do? They said, OK, look, we can recall all the cars, fix the gas tank. It’s gonna cost this amount of dollars. Or we just leave it alone, let a bunch of people die, settle the lawsuits. It’ll cost less. That’s the calculation, it’s cheaper. The reason that calculation worked is that tort reform had not actually gone through. There were caps on these lawsuits that said, even when you knowingly allow people to die because of an unsafe product, we can only sue you for so much. And we changed that and it worked: products are much, much safer. So why do we treat the offline world in a way that we don’t treat the online world?

For the first 20 years of the internet, people thought that the internet was like Las Vegas. What happens on the internet stays on the internet. It doesn’t matter. But it does. There is no online and offline world. What happens on the online world very, very much has an impact on our safety as individuals, as societies and as democracies.

There’s some conversation about duty of care in the context of section 230 here in the US – is that what you envision as one of the solutions to this?

I like the way the EU and the UK are thinking about this. We have a huge problem on Capitol Hill, which is, although everybody hates the tech sector, it’s for very different reasons. When we talk about tech reform, conservative voices say we should have less moderation because moderation is bad for conservatives. The left is saying the technology sector is an existential threat to society and democracy, which is closer to the truth.

So what that means is the regulation looks really different when you think the problem is something other than what it is. And that’s why I don’t think we’re going to get a lot of movement at the federal level. The hope is that between [regulatory moves in] Australia, the EU, UK and Canada, maybe there could be some movement that would put pressure on the tech companies to adopt some broader policies that satisfy the duty here.

Fastly buys Glitch web IDE

Content delivery network Fastly is purchasing Glitch, the company behind the web-based IDE of the same name.

Glitch is a full-stack platform that officially supports JavaScript, but allows coding in CSS, HTML, and other languages as well. It’s designed to operate much like other cloud platforms and can run full-stack apps on demand, with Glitch handling all of the hardware and leaving devs free to focus on coding.

Despite being absorbed into Fastly, Glitch vowed that the service will remain unchanged for users. “You’re good, we got you. Nothing changes about your apps or your Glitch account,” the company said in its announcement. It also said no employees would be lost in the merger.

Fastly focuses on edge-based delivery, which it says greatly speeds page load times. It was responsible for knocking a good portion of the internet offline last June thanks to a bug introduced into its own system during a software deployment, which caused 85 percent of its network traffic to return errors.

For its part, Fastly said that it wanted to buy Glitch after a partnership earlier this year brought Glitch to Compute@Edge, one of Fastly’s core products. Compute@Edge is a distributed application platform for running apps in edge environments on Fastly hardware.

As part of the deal, Fastly will integrate Glitch with its network, which will give Glitch users access to Fastly’s web application firewall, image optimization, and fast start times. Fastly also hopes to bring Glitch’s community into its own development process by gathering feedback shared by users.

Glitch started life in 2017 as a product under Fog Creek Software, founded in 2000 by Joel Spolsky (chairman and co-founder of Trello and Stack Overflow) and Michael Pryor (CEO at Trello, Stack Exchange board member, and head of Trello at Atlassian). Anil Dash, CEO of Glitch, joined the company in 2016; he was also previously on the Stack Overflow board. Dash will join Fastly as VP of developer experience after the merger, which has no announced close date.

Dash describes Glitch as a “yes-code” product, by which he means one that is the opposite, philosophically speaking, from no-code platforms. “As great as these no-code tools are, there are lots of meaningful problems, and joyful creations, that can only be addressed by writing code,” Dash said in a blog post.

What that means in practice is that Glitch is just an IDE that happens to live on the web. The company claims its “remixable” code allows a Glitch user to alter another’s publicly published project for their own purposes. Glitch’s website indicates that all public projects can be remixed, and private repositories are only available at the $8/month pro tier.

Glitch made news in early 2020 when its employees voted to form a union, which the company voluntarily recognized. This made Glitch the first tech company to sign a collective bargaining agreement with white-collar workers in the US.

We’ve asked Communication Workers of America (through which Glitch employees bargained) whether the union will survive the acquisition and will update the piece if we hear back. ®
