
Ukraine gets closer to NATO with cybersecurity pact • The Register


Ukraine has taken another step toward deepening its ties to NATO by signing an agreement to formalize its participation in the security alliance’s Cooperative Cyber Defence Centre of Excellence (CCDCOE).

The CCDCOE functions as a cyber-defense knowledge hub, research institution, and training and exercise facility that assists members with technology, threat-sharing and policy expertise. CCDCOE membership is not limited to NATO nations.

Ukraine submitted its application to join the Estonia-based center in August 2021. Last April, the 27 sponsoring nations in the steering committee unanimously endorsed Ukraine as a contributing participant in the CCDCOE, giving the other member states access to Ukraine’s “valuable first-hand knowledge of several adversaries”.

That language was a nod to both the cyberwarfare tactics Russia employed ahead of and during its illegal invasion of Ukraine, and Moscow’s earlier attacks against Ukraine’s power grids and other digital targets.

The newer technical agreement, which must be signed by all of the center’s member countries, would formalize Ukraine’s participation in the cyber-defense group.

“Over the past year, we have already cooperated actively with the NATO Cooperative Cyber Defence Centre of Excellence,” Yuriy Shchygol, head of Ukraine’s State Service of Special Communications and Information Protection, said in a statement.

Indeed, Shchygol’s country has been ground zero for countering Russian cyberattacks. The Computer Emergency Response Team of Ukraine (CERT-UA) tracked 2,100 incidents and cyberattacks last year alone, and more than 1,500 of those occurred after Russia’s full-scale military invasion in February.

The CCDCOE director and its international relations chief visited Ukraine in November 2022 to discuss its experience countering Russian cyberattacks. “I hope that our cooperation will only strengthen this year,” Shchygol added.

The Register asked the center’s Baltic member states for comment and did not immediately receive any response.

Tom Kellermann, senior VP of cyber strategy at software vendor Contrast Security, who has also held cyber posts in the US government, called the move “momentous.” For one thing, Ukraine can use the center to share what it’s learned from weathering the Kremlin’s cyberattacks.

“Ukraine has been under siege by coordinated destructive Russian cyberattacks since January 13, 2022. This will greatly enhance NATO’s situational awareness per the campaigns of the elite Russian APT groups, thus allowing NATO to harden critical infrastructures from burgeoning Russian cyber campaigns,” he told The Register.

It also signifies a “dramatic shift” in both US and NATO doctrine on offensive cyber-campaigns intended to disrupt Russian attacks, Kellermann added.

“Since 2013 when General Gerasimov gave his famous speech on hybrid warfare and the utility of cyber-attacks, Russia has been attacking Ukraine and NATO members with relative impunity from a collective cyber-response,” he said. “Now Russia will have to play defense.” ®

 


Microsoft’s Activision Blizzard acquisition will harm UK gamers, says watchdog | Microsoft


The UK’s competition regulator has ruled that Microsoft’s $68.7bn (£59.6bn) deal to buy Activision Blizzard, the video game publisher behind hits including Call of Duty, will result in higher prices and less competition for UK gamers.

The Competition and Markets Authority (CMA), which launched an in-depth investigation in September after raising a host of concerns about the biggest takeover in tech history, said the deal would weaken the global rivalry between Microsoft’s Xbox and Sony’s PlayStation consoles.

“Our job is to make sure that UK gamers are not caught in the crossfire of global deals that, over time, could damage competition and result in higher prices, fewer choices, or less innovation,” said Martin Coleman, the chair of the independent panel of experts conducting the investigation. “We have provisionally found that this may be the case here.”

The CMA said possible remedies to address competition issues included selling or spinning off the business that makes Call of Duty, or the entire Activision arm of the combined Activision Blizzard.

However, the watchdog acknowledged that a spin-off into a standalone operation would mean the new business “may not have sufficient assets and resources to operate as an independent entity”.

While the CMA did not completely rule out measures short of a divestiture – for example a “behavioural remedy” such as an iron-clad licence to guarantee distribution of Call of Duty to Sony – it said a structural solution such as a partial sale, spin-off or completely blocking the deal was its preferred option.

“We are of the initial view that any behavioural remedy in this case is likely to present material effectiveness risks,” it said. “At this stage, the CMA considers that certain divestitures and/or prohibition are, in principle, feasible remedies in this case.”

The CMA said there was a risk under the deal that Microsoft could try to make Call of Duty, Activision’s flagship game and one of the most popular and profitable global franchises of all time, exclusively available to Xbox console owners.

Last year, Microsoft attempted to allay competition concerns by saying it would offer its rival Sony a 10-year licence to ensure the title stayed on PlayStation consoles.

However, after buying ZeniMax, the parent of the studios behind games including The Elder Scrolls, Fallout and Doom, for $7.5bn in 2020, Microsoft moved to make some titles exclusive to its own devices.

The company had previously assured European regulators that it had no incentive to make such a move.

“Microsoft would find it commercially beneficial to make Activision’s games exclusive to its own consoles, or only available on PlayStation under materially worse conditions,” the CMA said. “This strategy, of buying gaming studios and making their content exclusive to Microsoft’s platforms, has been used by Microsoft following several previous acquisitions of games studios.”

The CMA said the end result could be that gamers would see “higher prices, reduced range, lower quality, and worse service in gaming consoles over time”.


Microsoft said that it believed its 10-year guarantee to continue to offer Call of Duty to rivals on equal terms would be enough to allay competition concerns.

“We are committed to offering effective and easily enforceable solutions that address the CMA’s concerns,” said Rima Alaily, the corporate vice-president and deputy general counsel at Microsoft. “Our commitment to grant long-term 100% equal access to Call of Duty to Sony, Nintendo, Steam and others preserves the deal’s benefits to gamers and developers and increases competition in the market.”

The CMA’s ruling is of critical importance as it comes before the publication of official findings of investigations conducted by the European Commission and the US Federal Trade Commission, which in December launched legal action to block the deal.

“We hope between now and April we will be able to help the CMA better understand our industry,” said a spokesperson for Activision Blizzard. “To ensure they can achieve their stated mandate to promote an environment where people can be confident they are getting great choices and fair deals, where competitive, fair-dealing business can innovate and thrive, and where the whole UK economy can grow productively and sustainably.”

Microsoft’s all-cash offer for Activision Blizzard, which also publishes global hits such as World of Warcraft and Candy Crush, dwarfs its previous biggest deal, the $26bn takeover of LinkedIn in 2016.

The purchase would result in the Xbox maker becoming the world’s third-biggest gaming company by revenue behind China’s Tencent and Japan’s Sony, the maker of PlayStation games consoles. It is also the biggest deal in tech history, eclipsing the $67bn paid by Dell to buy the digital storage company EMC in 2015.


Could RISC-V become a force in HPC? We talk to the experts • The Register


Analysis: The RISC-V architecture looks set to become more prevalent in the high performance computing (HPC) sector, and could even become the dominant architecture, at least according to some technical experts in the field.

Meanwhile, the European High Performance Computing Joint Undertaking (EuroHPC JU) has just announced a project aimed at the development of HPC hardware and software based on RISC-V, with plans to deploy future exascale and post-exascale supercomputers based on this technology.

RISC-V has been around for at least a decade as an open source instruction set architecture (ISA), while actual silicon implementations of the ISA have been coming to market over the past several years.

Among the attractions of this approach are that the architecture is not only free to use, but can also be extended, meaning that application-specific functions can be added to a RISC-V CPU design, and accessed by adding custom instructions to the standard RISC-V set.
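
As a rough sketch of what that extensibility can look like at the source level, the snippet below wraps a hypothetical instruction placed in RISC-V’s reserved custom-0 opcode space behind an ordinary C helper, using the GNU assembler’s .insn directive to emit an encoding the toolchain has no mnemonic for. The instruction, its encoding and the USE_CUSTOM_EXT switch are illustrative assumptions rather than part of any shipping core; a real design would define the operation in the CPU’s RTL and teach the toolchain about it.

    /* Sketch: exposing a hypothetical custom RISC-V instruction to C code.
     * Assumes a core implementing an R-type "saturating add" in the custom-0
     * opcode space (0x0B) and a GNU toolchain; .insn is the standard binutils
     * syntax for emitting instructions the assembler has no mnemonic for. */

    #include <stdint.h>
    #include <stdio.h>

    static inline uint64_t custom_satadd(uint64_t a, uint64_t b)
    {
    #if defined(__riscv) && defined(USE_CUSTOM_EXT)
        uint64_t result;
        /* custom-0 opcode, funct3 = 0, funct7 = 0: encoding chosen purely for illustration */
        __asm__ volatile(".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
                         : "=r"(result)
                         : "r"(a), "r"(b));
        return result;
    #else
        /* Portable fallback, so the same source runs on cores without the extension */
        uint64_t sum = a + b;
        return sum < a ? UINT64_MAX : sum;
    #endif
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)custom_satadd(7, 5));
        return 0;
    }

Keeping a plain-C fallback matters in practice, because a binary built around custom instructions will only run on cores that actually implement them.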

This extensibility could prove to be a driving factor for broader adoption of RISC-V in the HPC sector, according to Aaron Potler, a Distinguished Engineer at Dell Technologies.

“There’s synergy and growing strength in the RISC-V community in HPC,” Potler said, “and so RISC-V really does have a very, very good chance to become more prevalent on HPC.”

Potler was speaking at a Dell HPC Community online event, outlining perspectives from Dell’s Office of the Chief Technology and Innovation Officer.

However, he conceded that to date, RISC-V has not really made much of a mark in the HPC sector, largely because it wasn’t initially designed with that purpose in mind, but that there is “some targeting now to HPC” because of the business model it represents.

He made a comparison of sorts with Linux, which, like RISC-V, started off as a small project but grew and grew in popularity because of its open nature (it was also free to download and run, as Potler acknowledged).

“Nobody would have thought then that Linux would run on some high end computer. When in 1993, the TOP500 list came out, there was only one Linux system on it. Nowadays, all the systems on the TOP500 list run Linux. Every single one of them. It’s been that way for a few years now,” he said.

If Linux wasn’t initially targeting the HPC market, but was adopted for it because of its inherent advantages, perhaps the same could happen with RISC-V, if there are enough advantages, such as it being an open standard.

“If that’s what the industry wants, then the community is going to make it work, it’s gonna make it happen,” Potler said.

He also made a comparison with the Arm architecture, which eventually propelled Fujitsu’s Fugaku supercomputer to the number one slot in the TOP500 rankings, and which notably accomplished this by extending the instruction set to support the 512-bit Scalable Vector Extension (SVE) units in the A64FX processor.

“So why wouldn’t a RISC-V-based system be number one on the TOP500 someday?” he asked.

There has already been work done on RISC-V instructions and architecture extensions relating to HPC, Potler claimed, especially those for vector processing and floating point operations.

All of this means that RISC-V has potential, but could it really make headway in the HPC sector, which once boasted systems with a variety of processor architectures but is now dominated almost entirely by x86 and Arm?

“RISC-V does have the potential to become the architecture of choice for the HPC market,” said Omdia chief analyst Roy Illsley. “I think Intel is losing its control of the overall market and the HPC segment is becoming more specialized.”

Illsley pointed out that RISC-V’s open-source nature means that any chipmaker can produce RISC-V-based designs without paying royalties or licensing fees, and that it is supported by many silicon makers as well as by open-source operating systems.

Manoj Sukumaran, Principal Analyst for Datacenter Compute & Networking at Omdia, agreed, saying that the biggest advantage for RISC-V is that its non-proprietary architecture lines up well with the technology sovereignty goals of various countries. “HPC capacity is a strategic advantage to any country and it is an inevitable part of a country’s scientific and economic progress. No country wants to be in a situation like China or Russia and this is fueling RISC-V adoption,” he claimed.

RISC-V is also a “very efficient and compelling instruction set architecture” and the provision to customize it for specific computing needs with additional instructions makes it agile as well, according to Sukumaran.

The drive for sovereignty, or at least greater self-reliance, could be one motive behind the call from the EuroHPC JU for a partnership framework to develop HPC hardware and software based on RISC-V as part of an EU-wide ecosystem.

This is expected to be followed up by an ambitious plan of action for building and deploying exascale and post-exascale supercomputers based on this technology, according to the EuroHPC JU.

It stated in its announcement that the European Chips Act identified RISC-V as one of the next-generation technologies where investment should be directed in order to preserve and strengthen EU leadership in research and innovation. This will also reinforce the EU’s capacity for the design, manufacturing and packaging of advanced chips, and the ability to turn them into manufactured products.

High-performance RISC-V designs already exist from chip companies such as SiFive and Ventana, but these are typically either designs that a customer can take and have manufactured by a foundry company such as TSMC, or available as a chiplet that can be combined with others to build a custom system-on-chip (SoC) package, which is Ventana’s approach.

Creating a CPU design with custom instructions to accelerate specific functions would likely be beyond the resources of most HPC sites, though perhaps not those of a large user group or forum. However, a chiplet approach could de-risk the project somewhat, according to IDC Senior Research Director for Europe Andrew Buss.

“Rather than trying to do a single massive CPU, you can assemble a SoC from chiplets, getting your CPU cores from somewhere and an I/O hub and other functions from elsewhere,” he said, although he added that this requires standardized interfaces to link the chiplets together.

But while RISC-V has potential, the software ecosystem is more important, according to Buss. “It doesn’t matter what the underlying microarchitecture is, so long as there is a sufficient software ecosystem of applications and tools to support it,” he said.

Potler agreed with this point, saying that “One of the most critical parts for HPC success is the software ecosystem. Because we’ve all worked on architectures where the software came in second, and it was a very frustrating time, right?”

Developer tools, especially compilers, need to be “solid, they need to scale, and they need to understand the ISA very well to generate good code,” he said.

This also plays a part in defining custom instructions, as it calls for a profiler or other performance-analysis tools to identify time-consuming sequences of code in the applications in use and to gauge whether specialized instructions could accelerate them.

“So if I take these instructions out, I need a simulator that can simulate this [new] instruction. If I put it in here and take the other instructions out, the first question is: are the answers correct? Then the other thing would be: does it run enough to make it worthwhile?” he explained.

Another important factor is whether the compiler could recognize such sequences of code in the application and replace them with the custom instruction to boost performance, Potler said.

“You also see that extensions to the instruction set architecture will provide performance benefits to current and future HPC applications, whatever they may be,” he added.
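
As a minimal illustration of that evaluation loop, the sketch below keeps the original scalar sequence as a reference and treats a second plain-C kernel as a stand-in for the code a compiler would emit once it recognizes the pattern and substitutes a custom operation (a fused multiply-add here, chosen only for illustration); the kernels, array size and tolerance are all assumptions. It answers Potler’s two questions in order: are the results the same, and does the hot sequence run long enough for the change to be worthwhile?

    /* Sketch: judging whether a proposed custom instruction is worthwhile.
     * reference_kernel() is the original scalar sequence; candidate_kernel()
     * stands in for the version that would use the proposed instruction
     * (both are plain C here, purely for illustration). Build with -lm. */

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 1000000

    static void reference_kernel(const double *x, double *y)
    {
        for (int i = 0; i < N; i++)
            y[i] = x[i] * x[i] + 1.0;        /* the time-consuming sequence */
    }

    static void candidate_kernel(const double *x, double *y)
    {
        for (int i = 0; i < N; i++)
            y[i] = fma(x[i], x[i], 1.0);     /* stand-in for the custom op */
    }

    int main(void)
    {
        double *x = malloc(N * sizeof *x);
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        for (int i = 0; i < N; i++) x[i] = (double)i / N;

        clock_t t0 = clock(); reference_kernel(x, a);
        clock_t t1 = clock(); candidate_kernel(x, b);
        clock_t t2 = clock();

        /* First question: are the answers correct? */
        for (int i = 0; i < N; i++)
            if (fabs(a[i] - b[i]) > 1e-12) { puts("mismatch"); return 1; }

        /* Second question: does it run enough to make the change worthwhile? */
        printf("reference: %ld ticks, candidate: %ld ticks\n",
               (long)(t1 - t0), (long)(t2 - t1));

        free(x); free(a); free(b);
        return 0;
    }

In a real flow the candidate would come from the simulator Potler describes and the timing from hardware counters rather than clock(), but the shape of the check is the same.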

However, Buss warned that even if there is a great deal of interest in RISC-V, it will take time to get there for users at HPC sites.

“There’s nothing stopping RISC-V, but it takes time to develop the performance and power to the required level,” he said, pointing out that it took the Arm architecture over a decade to get to the point where it could be competitive in this space.

There was also the setback of Intel pulling its support for the RISC-V architecture last month, after earlier becoming a premier member of RISC-V International, the governing body for the standard, and pledging to offer validation services for RISC-V IP cores optimized for manufacturing in Intel fabs. ®


How to improve the consumer offboarding experience


We often think about the start and middle points of the consumer experience, but how often do we think about the end? In this article, adapted from his book Endineering, Joe Macleod, a veteran product developer, explains how businesses can productively and meaningfully disengage with consumers.

Businesses often fail to engage in purposeful and proactive methods to end consumer product or service lifecycles. The consequence is a failed approach to endings that is damaging customer relationships, businesses and the environment.

What if the end isn’t all bad? What if there is actually much to be gained at the end? I’ve been working on endings in the consumer lifecycle for over a decade – researching, publishing books, speaking around the world at conferences, and working with some of the world’s biggest companies.

Here are some suggestions on how to achieve positive offboarding experiences for customers.

Consciously connected

The consumer experience should feel similar at both the beginning and the end of the consumer lifecycle.

Currently, many offboarding experiences are delivered without care or interest from the provider. Further still, offboarding is sometimes delivered by entirely different groups, for example municipal organisations such as waste management, or health and safety representatives.

The same narrative voice should offboard the consumer from the experience, with principles and a tone of voice similar to those used when they were onboarded.

Emotional triggers

The emotional richness delivered at onboarding helps consumers to engage. These feelings should be matched at offboarding, inspiring engagement and interest from all parties.

Being present as a brand both emotionally and actively is important at the end. Currently many brands seem to struggle with appearing authentic.

Emotional triggers should offer an opportunity for the consumer to reflect personally on the experience gained with the brand.


Endineering by Joe Macleod. Image: Joe Macleod

Measurable and actionable

Consumers should have a clear, measurable understanding of the impact of their consumption at offboarding. This information should be delivered in a way that enables the consumer to reflect upon their involvement in consumerism and be empowered to do something about it.


Businesses and governments around the world need to build and agree upon common measuring systems that are easily understood by the consumer.

This would establish a shared language for the consumer and the provider to communicate about the status of lingering assets, whether these are digital, service or physical product endings.

Identify and bond consumer and provider

Society needs to attach personal identity to consumerism. Consumers should be recognised as perpetrators of their past consumer activity.

Currently, the physical fallout of consumption is too easily relinquished, shipped overseas or left in the atmosphere for the most vulnerable in the world and future generations to grapple with.

However, the consumer shouldn’t be abandoned to deal with this responsibility alone. It should be shared with the provider, tied to the neutralising of assets.

Businesses need to move beyond relationships limited to a ‘good usage experience’ and start to be proud partners with consumers working towards a healthier conclusion.

Neutralising the negative consequences of consumption

Following on from the previous point, neutralising the assets of consumption should be the joint responsibility of both consumer and provider. People understand how some products, vegetable matter for example, are neutralised through organic decay. Other assets, like recycled plastics, appear to have smooth, accessible routes to offboarding courtesy of municipal recycling bins and collections.

But it’s what happens afterwards that is less visible. Plastic often gets shipped to vulnerable countries where people who are unprotected by safety laws process the material. Although the plastic material might eventually be neutralised, the consequences have knock-on effects.

Businesses, consumers and wider society need to see the issue of neutralising assets as an integral consumer experience.

For example, one simple improvement would be changing what is communicated at the end of product life. Rather than saying a product is ‘recyclable’, provide details such as: ‘This product is dismantled by x method, then gets recycled by x process, at this place in x country. This process is completed within this amount of time and costs this amount of carbon, which is then offset’.

Timely and attentive

Businesses need to intervene at the end of the lifecycle with an active and attentive attitude. If the consumer experience is left to linger on beyond a planned ending, the assets become outdated, obsolete and risk falling out of control into the wider environment. This has become normal in recent decades, thus promoting indifference about unused products, accounts and subscriptions.

Businesses should redefine timeframes and styles of engagement with the consumer. In the short term, they will need to engage actively with the consumer to put an end to unused assets that linger in the physical, digital and service landscapes. This will seem counterintuitive to a business culture that has, in the past, benefitted from overlooking endings. But, in the long term, businesses that get this right will benefit from deeper, more loyal partnerships based on trusted re-engagement over years.

Strategic approaches will become more sophisticated, not only with regard to the consumer experience and long-term impact, but also as a means of collaboration to improve consumerism.

By Joe Macleod

Joe Macleod has experience in product development across various industries including leading e-communications and digital companies. Now he trains business influencers, policy makers, designers, product developers and individuals across diverse industries about the need for ‘good endings’ and how to achieve them. His book, Endineering: Designing consumption lifecycles that end as well as they begin, is available from online booksellers and www.andend.co.

