
Apple’s M2 chip isn’t a slam dunk but shows its future • The Register


Analysis For all the pomp and circumstance surrounding Apple’s move to homegrown silicon for Macs, the tech giant has admitted that the new M2 chip isn’t quite the slam dunk that its predecessor was when compared to the latest from Apple’s former CPU supplier, Intel.

During its WWDC 2022 keynote Monday, Apple focused its high-level sales pitch for the M2 on claims that the chip is much more power efficient than Intel’s latest laptop CPUs. But while doing so, the iPhone maker admitted that Intel has it beat, at least for now, when it comes to CPU performance.

Apple laid this out clearly during the presentation when Johny Srouji, Apple’s senior vice president of hardware technologies, said the M2’s eight-core CPU will provide 87 percent of the peak performance of Intel’s 12-core Core i7-1260P while using just a quarter of the rival chip’s power.

Apple’s chart shows the M2 providing 87 percent of the peak performance of Intel’s 12-core Core i7-1260P while using a quarter of the power: a concession on CPU speed, but a clear win on efficiency.

In other words, Intel’s Core i7-1260P is nearly 15 percent faster than Apple’s M2, and that’s not even considering the fact that Intel has two more powerful i7s in its so-called P-series lineup: the higher-frequency i7-1270P, which has the same number of cores, and the 14-core i7-1280P.
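(For the arithmetic behind that figure: if the M2 hits 87 percent of the Intel part’s peak, the Intel part delivers 1 / 0.87 ≈ 1.15 times the M2’s peak, or roughly 15 percent more.)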

The company did claim that the M2’s CPU is 1.9x faster than Intel’s 10-core Core i7-1255U while using the same amount of power, and while that may be a more appropriate comparison, the fact remains that Apple doesn’t have a CPU for ultra-thin laptops that is as powerful as Intel’s best.

Regardless, Apple is claiming that performance-per-watt, where the M2 really shines, is the more important metric, building upon the original argument it made when the M1 debuted in 2020.

“Unlike others in the industry who significantly increase power to gain performance, our approach is different. We continue to have a relentless focus on power efficient performance. In other words, maximizing performance while minimizing power consumption,” Srouji said.

But performance-per-watt isn’t the only way Apple hopes the M2 will stand out when it lands in the MacBook Air and 13″ MacBook Pro next month.

The tech giant is also making a bigger bet on the chip’s GPU and neural engine because it believes an increasing share of applications will rely on graphics and AI in the future, according to veteran semiconductor analyst Kevin Krewell of Tirias Research.

Apple’s rundown of the M2’s specs; it’s still an impressive chip overall, especially its GPU and neural engine.

This is reflected in Apple’s decision to dedicate more transistors to the M2’s 10-core GPU and 16-core neural engine compared to the M1, Krewell said. These design decisions allowed the Mac maker to claim a 35 percent boost for the GPU and a 40 percent boost for the neural engine over the M1. The M2’s CPU, on the other hand, only improved by 18 percent in multi-threaded performance, according to Apple.

But even then, Krewell said, applications that lean heavily on the CPU, like web browsers, don’t have much need for faster chips, which is why he believes the GPU and neural engine deserve more weight: they could make a bigger difference.

“Web browsers don’t need a whole lot more performance, so the comparison on industry performance is probably less relevant in my mind, though overall power efficiency is good. Apple wants to show they are competitive with Intel and that in some ways they may be ahead with neural processing and better graphics,” Krewell told The Register.

A chart from Apple showing the M2’s GPU at 2.3x the speed of the integrated graphics in Intel’s Core i7-1255U; the GPU is looking pretty good, if you believe Apple’s claims.

While Apple didn’t provide a competitive comparison for the M2’s neural engine, it did claim that the 10-core GPU is 2.3x faster than the integrated graphics within Intel’s Core i7-1255U while using the same power. Conversely, Apple said the M2’s GPU can provide the peak performance of the i7-1255U while using only one-fifth of the power. The caveat is that Apple didn’t provide a comparison to the i7-1260P, which does have a faster built-in GPU than the i7-1255U.

By its own admission, Apple may not have the fastest CPU in the industry for an ultra-light laptop. But its heavier emphasis on the GPU and neural engine fits the growing view in the compute world that a faster central brain may matter less than dedicated accelerators for increasingly important workloads like AI and graphics. ®


.NET 6 comes to Ubuntu 22.04 • The Register


Ubuntu and Microsoft have brought .NET 6 to the Ubuntu repositories, meaning that you can install it without adding any extra sources to the OS.

The announcement means that Ubuntu 22.04 is catching up with the Red Hat Linux family. As per Microsoft’s online docs, you could already do this on Fedora 36 as well as the more business-like variants: RHEL 8, CentOS Stream 8 and 9, and via scl-utils on RHEL 7.

Microsoft’s blog post about the news also mentions the ability to install the runtime, or the full SDK, into Ubuntu containers, and Canonical has new container images of its own. It describes its Ubuntu ROCKs as “new, ultra-small OCI-compliant appliance images, without a shell or package manager,” smaller than existing Ubuntu container images thanks to a new tool called chisel.

.NET 6 is Microsoft’s cross-platform toolchain for building apps to run on multiple platforms, including Windows, Linux, macOS, and mobile OSes. Essentially, it’s Microsoft’s answer to Oracle’s JVM – the increasingly inaccurately named Java Virtual Machine, which now supports multiple languages, including Clojure, Kotlin, Scala, and Groovy.

Microsoft’s own list of .NET languages is relatively short – C#, F#, and Visual Basic – although there are many others from outside the company. The list arguably should include PowerShell, but that already has its own Linux version.

Since 2014 or so, .NET primarily means what was formerly called .NET Core. According to Microsoft’s own diagram, that means the .NET Common Language Runtime, the bit which allows “managed code” to execute, and Microsoft’s web app framework ASP.NET.

There are three separate packages: dotnet-sdk-6.0, the SDK; dotnet-runtime-6.0, the CLR runtime; and aspnetcore-runtime-6.0, the runtime for ASP.NET. All three can be installed at once via the dotnet6 metapackage.
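As a rough sketch of what that looks like on a stock Ubuntu 22.04 box, assuming your archive mirror has already picked up the new packages, installation is the usual apt routine:

  sudo apt update
  sudo apt install dotnet6      # metapackage that pulls in dotnet-sdk-6.0, dotnet-runtime-6.0 and aspnetcore-runtime-6.0
  dotnet --version              # should report a 6.0.x SDK once everything is installed

Those who only want to run existing apps rather than build them can install dotnet-runtime-6.0 or aspnetcore-runtime-6.0 on their own instead.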

The notable bits of .NET that aren’t included in Core are the venerable Windows Forms framework and the slightly more modern Windows Presentation Foundation, WPF.

Microsoft’s diagram comparing and contrasting the .NET Framework and .NET Core designs.

So don’t get excited and think that the inclusion of .NET in Ubuntu means that graphical .NET apps, such as Windows Store apps, can now be built and run natively on Linux. Limit your expectations to server-side stuff. This is mainly a way to deploy console-based C# and ASP.NET apps onto Ubuntu servers and into Ubuntu containers.

When we asked Canonical about this, a spokesperson responded: “WPF is not currently supported in .NET 6 on Ubuntu. So, you’re correct that .NET 6 on Ubuntu is aimed at developers building text/server apps rather than graphical/GUI apps.”

We’ve also asked Microsoft if they have any additional information or details, and will update when they respond.

There are cross-platform graphical frameworks for .NET, including the open-source Avalonia as well as Uno, which got on board with .NET 5. There is also Microsoft’s own Multi-platform App UI, or MAUI, which evolved out of Xamarin Forms.

The origins of .NET lie in Microsoft’s 1996 acquisition of Colusa Software for its OmniWare tool, which Colusa billed as “a universal substrate for web programming.” As Microsoft faced off against the US Department of Justice and European Commission, and the possibility of being broken into separate apps and OS divisions, it came up with Next Generation Windows Services, which then turned into .NET: a way to use Microsoft tools to build apps for any OS.

There is still controversy over exactly how open .NET really is, as exemplified by the aptly named isdotnetopen site. ®


How cognitive science can be used to bring AI forward


Dr Marie Postma spoke to SiliconRepublic.com about misconceptions around AI as well as its relationship with human consciousness.

AI and robots are getting ‘smarter’ all the time. From Irish-made care robot Stevie to Spot the robot dog from Boston Dynamics, these tech helpers are popping up everywhere with a wide range of uses.

The tech beneath the hardware is getting smarter too. Earlier this year, researchers at MIT developed a simpler way to teach robots new skills after only a few physical demonstrations. And just this week, Google revealed how it’s combining large language models with its parent company’s Everyday Robots to help them better understand humans.

However, the advances in these areas have led to recent discussions around the idea of sentient AI. While that idea has been largely rebuffed by the AI community, understanding the relationship between cognitive science and AI remains important.

Dr Marie Postma is head of the department of cognitive science and artificial intelligence at Tilburg School of Humanities and Digital Sciences in the Netherlands.

The department is mainly financed by three education programmes and has around 100 staff and between 900 and 1,000 students.

‘Technology is not the problem; people are the problem’
– MARIE POSTMA

The team focuses on different research themes that combine cognitive science and AI, such as computational linguistics with a big focus on deep learning solutions, autonomous agents and robotics, and human-AI interaction, which is mainly focused on VR and its use in education.

Postma was a speaker at the latest edition of the Schools for Female Leadership in the Digital Age in Prague, run by Huawei’s European Leadership Academy.

Postma spoke to the 29 students about cognitive science and machine learning, starting with the history of AI and bringing it up to the modern-day challenges, such as how we can model trust in robots and the role empathy could play in AI.

“We have research where we are designing first-person games where people can experience the world from the perspective of an animal – not a very cuddly animal, it’s actually a beaver. That’s intentional,” she told me later that day.

Sentient AI

Her talk brought about a lot of discussion around AI and consciousness, a timely topic following the news that Blake Lemoine, a Google engineer, had published an interview with Google’s LaMDA chatbot and claimed that it had become sentient.

Postma said much of the media coverage around this story had muddied the waters. “The way it was described in the media was more focused on the Turing test – interacting with an AI system that comes across as being human-like,” she said.

“But then at some point they mention consciousness, and consciousness is really a different story.”

Postma said that most people who research consciousness would agree that it’s based on a number of factors. Firstly, it’s about having a perceptual basis: the ability to perceive both the world around us and what’s happening inside us, and to be self-aware.

Secondly, the purpose of consciousness is being able to interpret yourself as someone who has feelings, needs, actionability in the world and a need to stay alive. “AI systems are not worried about staying alive, at least the way we construct them now, they don’t reflect on their battery life and think ‘oh no, I should go plug myself in’.”

Possibilities and limitations

While AI and robots don’t have consciousness, programming them to the point where they can understand humans can be highly beneficial.

For example, Postma’s department has been conducting research that concerns brain-computer interaction, with a focus on motor imagery. “[This is] trying to create systems where the user, by focusing on their brain signal, can move objects in virtual reality or on computer screens using [electroencephalography].”

This has a lot of potential applications in the medical world for people who suffer from paralysis or in the advancements of prosthetic limbs.

Last year, researchers at Stanford University successfully implanted a brain-computer interface (BCI) capable of interpreting thoughts of handwriting in a 65-year-old man paralysed below the neck due to a spinal cord injury.

However, Postma said there is still a long way to go with this technology and it’s not just about the AI itself. “The issue with that is there are users who are able to do that and others who are not, and we don’t really know what the reasons are,” she said.

“There is some research that suggests that being able to do spatial rotation might be one of the factors, but what we’re trying to discover is how we can actually train users so that they can use BCI.”

And in the interest of quelling any lingering fears around sentient AI, she also said people should not worry about this kind of technology being able to read their thoughts because the BCI is very rudimentary. “For the motor imagery BCI, it’s typically about directions, you know, right, left, etc.”

Other misconceptions about AI

Aside from exactly how smart the robots around us really are, one of the biggest misconceptions Postma wants to correct is the idea that the technology itself is what causes the problems that surround it.

“What I repeat everywhere I go, is that the technology is not the problem, people are the problem. They’re the ones who create the technology solutions and use them in a certain way and who regulate them or don’t regulate them in a certain way,” she said.

“The bias in some AI solutions is not there because some AI solutions are biased, they’re biased because the data that’s used to create the solutions is biased so there is human bias going in.”

However, while bias in AI has been a major discussion topic for several years, Postma has an optimistic view on this, saying that these biased systems are actually helping to uncover biased data that would have previously been hidden behind human walls.

“It becomes explicit because all the rules are there, all the predictive features are there, even for deep learning architecture, we have techniques to simplify them and to uncover where the decision is made.”

While Postma is a major advocate for all the good AI can do, she is also concerned about how certain AI and data is used, particularly in how it can influence human decisions in politics.

“What Cambridge Analytica did – just because you can, doesn’t mean you should. And I don’t think they’re the only company that are doing that,” she said.

“I’m [also] concerned about algorithms that make things addictive, whether it’s social media or gaming, that really try to satisfy the user. I’m concerned about what it’s doing to kids.”



‘I’m buying Manchester United’: Elon Musk ‘joke’ tweet charges debate over struggling club’s future | Elon Musk


Tesla billionaire Elon Musk briefly electrified the debate about the future of Manchester United by claiming on Twitter that he is buying the struggling Premier League club – before saying that the post was part of a “long-running joke”.

He did not make clear his views on new coach Erik ten Hag’s controversial insistence on passing out from the back, or whether unhappy star striker Cristiano Ronaldo should be allowed to leave, but he did say that if he were to buy a sports team “it would be Man U. They were my fav team as a kid”.

With the team rooted to the bottom of the league after a humiliating 4-0 away defeat to Brentford, the outspoken entrepreneur’s tweet offered hope, however briefly, to fans who want to see the back of the current owners, the Florida-based Glazer family.

Also, I’m buying Manchester United ur welcome

— Elon Musk (@elonmusk) August 17, 2022

Musk has a history of making irreverent tweets, and he later clarified the post by saying he was not buying sports teams.

No, this is a long-running joke on Twitter. I’m not buying any sports teams.

— Elon Musk (@elonmusk) August 17, 2022

Buying United, one of the biggest football clubs in the world, would have cost Musk at least £2bn, according to its current stock market valuation.

Manchester United’s recent on-pitch woes have led to increased fan protests against the Glazers, who bought the club in a heavily leveraged deal in 2005 for £790m ($955.51m).

The anti-Glazer movement gained momentum last year after United were involved in a failed attempt to form a breakaway European Super League.

But a takeover by Musk would have been a case of out of the frying pan and into the fire for the club, given the billionaire’s tendency for off-the-cuff remarks and falling foul of market regulators.

Many were quick to point out that Musk had also promised to buy Twitter for $44bn before the deal collapsed in July, and has also boasted about colonising Mars and boosting birthrates on Earth.

That’s what you said about Twitter.

— Sema (@_SemaHernandez_) August 17, 2022

Fans responded with a mixture of bafflement and optimism given the lowly status of a club used to occupying the top places in the league rather than the bottom.

Manchester United did not immediately respond to a request for comment.



