First-ever James Webb Space Telescope image revealed • The Register

Pic On Monday, NASA released its first image from the James Webb Space Telescope, providing the sharpest and deepest glimpse yet of distant, early galaxies.

The telescope blasted off from Earth at the end of December, and about a month later, the probe arrived at its new home about a million miles from our planet to begin work. It picked up its first starlight in early February. Now it’s taken a full picture.

Revealing the infrared snap at a press conference that concluded in the past hour, US President Joe Biden said: “Six and a half months ago a rocket launcher carried the world’s newest, most powerful deep space telescope on a journey one million miles into the cosmos, unfolding itself deploying a mirror 21-feet-wide, a sunshield the size of a tennis court, and 250,000 tiny shutters each one smaller than a grain of sand.

“Put together, it’s a new window into the history of our universe. And today we’re gonna get a glimpse of the first light to shine through that window.”

The picture, dubbed Webb’s First Deep Field, shows SMACS 0723, a cluster of galaxies in the foreground magnifying the light from more distant galaxies in the background. Each blob is a source of light; there are thousands of galaxies in the image, some of which have never been observed in infrared light until now. The light from SMACS 0723 has taken 4.6 billion years to reach the James Webb Space Telescope, or JWST.

Distant galaxies captured by the James Webb Space Telescope in its first-released snap. Image source: NASA, ESA, CSA, STScI

NASA’s Administrator Bill Nelson said the snapshot covered a region of space equivalent to “a grain of sand on the tip of your finger at arm’s length.” You can find higher-resolution versions here.

“You’re seeing just a small little portion of the universe,” he said. “You’re seeing galaxies that are shining around other galaxies. You know, a hundred years ago, we thought there was only one galaxy. Now, the number is unlimited.”

Webb’s First Deep Field is a composite image made up of snapshots taken at different wavelengths over 12.5 hours. It is the deepest and sharpest infrared image of the distant universe to date. 

Launched on Christmas Day 2021, the JWST orbits the second Sun-Earth Lagrange point, a region known as L2, keeping it out of our planet’s shadow. The $10 billion telescope is a marvel of engineering, complete with the aforementioned sunshield, made of thin layers of Kapton, forming its base. 

A panel of gold-plated hexagonal mirrors sits atop. Together the 18 segments stretch 6.5 metres across; each is supported by struts and motorized actuators, allowing it to move with six degrees of freedom. The segments have to be aligned with one another to within 1/10,000th the thickness of a human hair to act as one giant primary mirror capable of focusing light from distant objects more than 13 billion light-years away.

Light reflected from this mirror is refocused by a tiny secondary mirror, just 0.74 metres in diameter, mounted at the end of three long arms. Photons are directed to a series of instruments, including a near-infrared camera and spectrograph, a combined mid-infrared camera and spectrograph system, and a fine guidance sensor that helps point the telescope at its targets. These instruments sit behind the primary mirror.

The photo is just the first of a handful of targets astronomers selected to study in the opening run of the JWST’s science operations. Other images will be revealed by space agencies on Tuesday, detailing: the Carina Nebula, a bright stellar nursery; the Southern Ring Nebula, a thick envelope of dust and gas around a dying star; and Stephan’s Quintet, a compact galaxy group. The JWST doesn’t just provide astronomers with images. The light spectrum of WASP-96 b, also expected to be released tomorrow, will allow researchers to probe the chemical composition of the gas giant exoplanet. 

“When NASA launched the Hubble Space Telescope in 1990, we were able to see the stars unobstructed by the Earth’s atmosphere and understand the universe in ways we could have never imagined even a few decades earlier,” US Vice President Kamala Harris said during the briefing. 

“And now we enter a new phase of scientific discovery. Building on the legacy of Hubble, the James Webb Space Telescope allows us to see deeper into space than ever before, and in stunning clarity. It will enhance what we know about the origins of our universe, our solar system, and possibly life itself.” ®

.NET 6 comes to Ubuntu 22.04 • The Register

Canonical and Microsoft have brought .NET 6 to the Ubuntu repositories, meaning that you can install it without adding any extra sources to the OS.

The announcement means that Ubuntu 22.04 is catching up with the Red Hat Linux family. As per Microsoft’s online docs, you could already do this on Fedora 36 as well as the more business-like variants: RHEL 8, CentOS Stream 8 and 9, and via scl-utils on RHEL 7.

Microsoft’s blog post about the news also mentions the ability to install the runtime, or the full SDK, into Ubuntu containers. Canonical has new versions of these too: it describes its Ubuntu ROCKs as “new, ultra-small OCI-compliant appliance images, without a shell or package manager,” smaller than existing Ubuntu container images thanks to a new tool called chisel.

.NET 6 is Microsoft’s cross-platform toolchain for building apps to run on multiple platforms, including Windows, Linux, macOS, and mobile OSes. Essentially, it’s Microsoft’s answer to Oracle’s JVM – the increasingly inaccurately named Java Virtual Machine, which now supports multiple languages, including Clojure, Kotlin, Scala, and Groovy.

Microsoft’s own list of .NET languages is relatively short – C#, F#, and Visual Basic – although there are many others from outside the company. The list arguably should include PowerShell, but that already has its own Linux version.

Since 2014 or so, .NET primarily means what was formerly called .NET Core. According to Microsoft’s own diagram, that means the .NET Common Language Runtime, the bit which allows “managed code” to execute, and Microsoft’s web app framework ASP.NET.

There are three separate packages: dotnet-sdk-6.0, the SDK; dotnet-runtime-6.0, the CLR runtime; and aspnetcore-runtime-6.0, the runtime for ASP.NET. All three can be installed at once via the dotnet6 metapackage.
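
In practice, on Ubuntu 22.04 that boils down to a couple of commands. A rough sketch, assuming the updated packages have already reached the archive mirror your machine points at:

  sudo apt update
  sudo apt install dotnet6     # metapackage: SDK, CLR runtime, and ASP.NET runtime in one go
  dotnet --list-sdks           # should report a 6.0.x SDK once the install finishes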

The notable bits of .NET that aren’t included in Core are the venerable Windows Forms framework and the slightly more modern Windows Presentation Foundation, WPF.

Compare and contrast: Microsoft’s diagram of .NET Framework versus .NET Core

So don’t get excited and think that the inclusion of .NET in Ubuntu means that graphical .NET apps, such as Windows Store apps, can now be built and run natively on Linux. Limit your expectations to server-side stuff. This is mainly a way to deploy console-based C# and ASP.NET apps onto Ubuntu servers and into Ubuntu containers.

When we asked Canonical about this, a spokesperson responded: “WPF is not currently supported in .NET 6 on Ubuntu. So, you’re correct that .NET 6 on Ubuntu is aimed at developers building text/server apps rather than graphical/GUI apps.”

We’ve also asked Microsoft if they have any additional information or details, and will update when they respond.

There are cross-platform graphical frameworks for .NET, including the open-source Avalonia and Uno, which got on board with .NET 5. There is also Microsoft’s own Multi-platform App UI, or MAUI, which evolved out of Xamarin.Forms.

The origins of .NET lie in Microsoft’s 1996 acquisition of Colusa Software for its OmniWare tool, which Colusa billed as “a universal substrate for web programming.” As Microsoft faced off against the US Department of Justice and European Commission, and the possibility of being broken into separate apps and OS divisions, it came up with Next Generation Windows Services, which then turned into .NET: a way to use Microsoft tools to build apps for any OS.

There is still controversy over exactly how open .NET really is, as exemplified by the aptly named isdotnetopen site. ®

How cognitive science can be used to bring AI forward

Dr Marie Postma spoke to SiliconRepublic.com about misconceptions around AI as well as its relationship with human consciousness.

AI and robots are getting ‘smarter’ all the time. From Irish-made care robot Stevie to Spot the robot dog from Boston Dynamics, these tech helpers are popping up everywhere with a wide range of uses.

The tech beneath the hardware is getting smarter too. Earlier this year, researchers at MIT developed a simpler way to teach robots new skills after only a few physical demonstrations. And just this week, Google revealed how it’s combining large language models with its parent company’s Everyday Robots to help them better understand humans.

However, the advances in these areas have led to recent discussions around the idea of sentient AI. While this idea has been largely rebuffed by the AI community, understanding the relationship between cognitive science and AI remains important.

Dr Marie Postma is head of the department of cognitive science and artificial intelligence at Tilburg School of Humanities and Digital Sciences in the Netherlands.

The department is mainly financed by three education programmes and has around 100 staff and between 900 and 1,000 students.

‘Technology is not the problem; people are the problem’
– MARIE POSTMA

The team focuses on different research themes that combine cognitive science and AI, such as computational linguistics with a big focus on deep learning solutions, autonomous agents and robotics, and human-AI interaction, which is mainly focused on VR and its use in education.

Postma was a speaker at the latest edition of the Schools for Female Leadership in the Digital Age in Prague, run by Huawei’s European Leadership Academy.

Postma spoke to the 29 students about cognitive science and machine learning, starting with the history of AI and bringing it up to the modern-day challenges, such as how we can model trust in robots and the role empathy could play in AI.

“We have research where we are designing first-person games where people can experience the world from the perspective of an animal – not a very cuddly animal, it’s actually a beaver. That’s intentional,” she told me later that day.

Sentient AI

Her talk prompted plenty of discussion around AI and consciousness, timely given the news that Blake Lemoine, a Google engineer, had published an interview with the company’s LaMDA chatbot and claimed that it had become sentient.

Postma said much of the media coverage around this story had muddied the waters. “The way it was described in the media was more focused on the Turing test – interacting with an AI system that comes across as being human-like,” she said.

“But then at some point they mention consciousness, and consciousness is really a different story.”

Postma said that most people who research consciousness would agree that it’s based on a number of factors. Firstly, it’s about having a perceptual basis: the ability to perceive both the world around us and what’s happening inside us, and to be self-aware.

Secondly, the purpose of consciousness is being able to interpret yourself as someone who has feelings, needs, actionability in the world and a need to stay alive. “AI systems are not worried about staying alive, at least the way we construct them now, they don’t reflect on their battery life and think ‘oh no, I should go plug myself in’.”

Possibilities and limitations

While AI and robots don’t have consciousness, their ability to be programmed to a point where they can understand humans can be highly beneficial.

For example, Postma’s department has been conducting research that concerns brain-computer interaction, with a focus on motor imagery. “[This is] trying to create systems where the user, by focusing on their brain signal, can move objects in virtual reality or on computer screens using [electroencephalography].”

This has a lot of potential applications in the medical world for people who suffer from paralysis, or in the advancement of prosthetic limbs.

Last year, researchers at Stanford University successfully implanted a brain-computer interface (BCI) capable of interpreting thoughts of handwriting in a 65-year-old man paralysed below the neck due to a spinal cord injury.

However, Postma said there is still a long way to go with this technology and it’s not just about the AI itself. “The issue with that is there are users who are able to do that and others who are not, and we don’t really know what the reasons are,” she said.

“There is some research that suggests that being able to do spatial rotation might be one of the factors, but what we’re trying to discover is how we can actually train users so that they can use BCI.”

And in the interest of quelling any lingering fears around sentient AI, she also said people should not worry about this kind of technology being able to read their thoughts because the BCI is very rudimentary. “For the motor imagery BCI, it’s typically about directions, you know, right, left, etc.”

Other misconceptions about AI

Aside from exactly how smart the robots around us really are, one of the biggest misconceptions Postma wants to correct is the idea that the technology itself is what causes the problems that surround it.

“What I repeat everywhere I go, is that the technology is not the problem, people are the problem. They’re the ones who create the technology solutions and use them in a certain way and who regulate them or don’t regulate them in a certain way,” she said.

“The bias in some AI solutions is not there because some AI solutions are biased, they’re biased because the data that’s used to create the solutions is biased so there is human bias going in.”

However, while bias in AI has been a major discussion topic for several years, Postma has an optimistic view on this, saying that these biased systems are actually helping to uncover biased data that would have previously been hidden behind human walls.

“It becomes explicit because all the rules are there, all the predictive features are there, even for deep learning architecture, we have techniques to simplify them and to uncover where the decision is made.”

While Postma is a major advocate for all the good AI can do, she is also concerned about how certain AI systems and data are used, particularly in how they can influence human decisions in politics.

“What Cambridge Analytica did – just because you can, doesn’t mean you should. And I don’t think they’re the only company that are doing that,” she said.

“I’m [also] concerned about algorithms that make things addictive, whether it’s social media or gaming, that really try to satisfy the user. I’m concerned about what it’s doing to kids.”

‘I’m buying Manchester United’: Elon Musk ‘joke’ tweet charges debate over struggling club’s future

Tesla billionaire Elon Musk briefly electrified the debate about the future of Manchester United by claiming on Twitter that he is buying the struggling Premier League club – before saying that the post was part of a “long-running joke”.

He did not make clear his views on new coach Erik ten Hag’s controversial insistence on passing out from the back, or whether unhappy star striker Cristiano Ronaldo should be allowed to leave, but he did say that if he were to buy a sports team “it would be Man U. They were my fav team as a kid”.

With the team rooted to the bottom of the league after a humiliating 4-0 away defeat to Brentford, the outspoken entrepreneur’s tweet offered hope, however briefly, to fans who want to see the back of the current owners, the Florida-based Glazer family.

Also, I’m buying Manchester United ur welcome

— Elon Musk (@elonmusk) August 17, 2022

Musk has a history of making irreverent tweets, and he later clarified the post by saying he was not buying sports teams.

No, this is a long-running joke on Twitter. I’m not buying any sports teams.

— Elon Musk (@elonmusk) August 17, 2022

Buying United, one of the biggest football clubs in the world, would have cost Musk at least £2bn, according to its current stock market valuation.

Manchester United’s recent on-pitch woes have led to increased fan protests against the Glazers, who bought the club in a heavily leveraged deal in 2005 for £790m ($955.51m).

The anti-Glazer movement gained momentum last year after United were involved in a failed attempt to form a breakaway European Super League.

But a takeover by Musk would have been a case of out of the frying pan and into the fire for the club, given the billionaire’s tendency for off-the-cuff remarks and falling foul of market regulators.

Many were quick to point out that Musk had also promised to buy Twitter for $44bn before the deal collapsed in July, and has also boasted about colonising Mars and boosting birthrates on Earth.

That’s what you said about Twitter.

— Sema (@_SemaHernandez_) August 17, 2022

Fans responded with a mixture of bafflement and optimism given the lowly status of a club used to occupying the top places in the league rather than the bottom.

Manchester United did not immediately respond to a request for comment.


