
TechScape: Enter the multiverse – the chat-room game made of AI art


The Bureau of Multiversal Arbitration is an unusual workplace. Maude Fletcher’s alright, though she needs to learn how to turn off caps lock in the company chat. But trying to deal with Byron G Snodgrass is like handling an energetic poodle, and Phil is a bit stiff.

Sorry, that was unclear. Byron G Snodgrass is an energetic poodle. Phil is a plant. A peace lily, I think.

The three work as arbiters, managing a few hundred caseworkers as they carry out the work of the Bureau: scanning through the multiverse for inspiration, information and innovation. Although, if you ask me, the Bureau’s gone a little off-course recently. Is it really a good use of all that technology to set me to work finding the best meal in all of existence?

‘A creature with a thousand eyes and a million limbs, cooked in the style of duck à l’orange’. Photograph: Caseworker dmm/Bureau of Multiversal Arbitration/CC0 1.0

Let’s part the veil. The BMA is the setting, and title, of a … thing, created by game company Aconite, helmed by Nadya Lev and Star St.Germain. I say “thing” because it’s not clear how best to describe what the pair have made. Calling it a video game summons up all the wrong impressions, but it’s hardly an experience or a toy, either. A larp (live-action roleplay) might be closer if it were live action, but it’s not: BMA is played in a Discord channel, the gamer-focused chat app standing in for the Bureau’s internal Slack. St.Germain calls it a “Discord game”, which works well enough.

The Multiversal Search Engine at the core of the game is actually a carefully managed version of the Stable Diffusion AI image generator. Players are given assignments – like finding that meal – which they use as prompts for the image generator, competing with one another to generate the best responses. The winning creation, voted on by all players, is stuck on the virtual fridge for everyone to see – and, if you’re lucky, praised by Maude.
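For the curious, stripping away the fiction leaves a fairly standard text-to-image call. The sketch below shows how an assignment prompt might be run through Stable Diffusion using Hugging Face’s diffusers library; the checkpoint, prompt and generation settings are illustrative only, and Aconite’s actual hosted pipeline and curation layer are not public.

```python
# Minimal sketch: run a "Bureau assignment" prompt through Stable Diffusion
# via Hugging Face's diffusers library. The checkpoint, prompt and settings
# are illustrative; Aconite's real pipeline is not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "cpu" (and drop float16) if no GPU is available

prompt = ("a creature with a thousand eyes and a million limbs, "
          "cooked in the style of duck à l'orange")

image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("multiversal_search_result.png")  # ready for the virtual fridge
```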

It’s one of the most exciting and innovative uses of AI image generation that I’ve seen, and that’s no accident. “A lot of people are villainising this tech,” said St.Germain when I called her this week. “And it is scary, it does incredible things: you type in something and all of a sudden you’ve got this image from another world.” But she was fascinated by the possibilities. “The way I think about it is that this world already exists – you just need to find the things within it.”

That’s the genesis of the game, reframing the hallucinatory aspects of AI creation as a feature, not a bug. Unless you want bugs, of course. Or something more outré still, maybe? Like one of the near-winners for the meal prompt: “A creature with a thousand eyes and a million limbs, cooked in the style of duck à l’orange”.

‘Rococo-style fridge, detailed, in the kitchen’. Photograph: Caseworker shevtsov/Bureau of Multiversal Arbitration/CC0 1.0

The game’s narrative also allows St.Germain and colleagues to gently push players away from some of the less savoury aspects of the technology. Trying to generate “real” objects from alternate realities means there is little motivation to strip-mine the creative works of other artists, while prompts are selected to avoid the possibility of generating the gore or explicit content that Stable Diffusion can also pump out (a further filter blocks objectionable words, just in case).
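That last filter is not documented, but the general idea is simple enough to sketch: reject a prompt before it ever reaches the image generator if it contains a blocked term. The blocklist and matching rule below are purely hypothetical.

```python
# Crude illustration of a prompt blocklist applied before generation.
# The Bureau's actual terms and matching rules are not public; this
# blocklist is purely hypothetical.
import re

BLOCKED_TERMS = {"gore", "blood", "nsfw"}  # hypothetical examples

def prompt_allowed(prompt: str) -> bool:
    """Return False if any blocked term appears as a whole word."""
    words = set(re.findall(r"[a-z']+", prompt.lower()))
    return words.isdisjoint(BLOCKED_TERMS)

assert prompt_allowed("rococo-style fridge, detailed, in the kitchen")
assert not prompt_allowed("a hallway dripping with gore")
```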

“We’ve done a lot of work in the fiction and curation sides of things to prevent some of those things from happening,” St.Germain says, “but also finding ways to lean into it occasionally – to release the pressure but with something that is maybe a little bit tamer than what some people can do. We have a scenario coming up that’s meant to be an insect confectionary thing. You’re making bug candies. Because we wanted to pick something that some players are gonna want to lean into the gruesomeness of. Giving players the opportunity to say, ‘I’m gonna make a gross thing.’”

‘Woman with antlers in noir hallway’. Photograph: Caseworker daela/Bureau of Multiversal Arbitration/CC0 1.0

Surprisingly, running the Bureau is a full-time job for St.Germain. The Multiversal Search Engine itself is automated, but the non-player characters who turn a simple chatroom into a richly interactive experience – and ensure the players stay on-task and the community stays pleasant – are puppeted by her and her colleagues. “Everybody wants to focus in on, ‘What’s the tech going to do next?’ But the part of this that is the most important, that people are going to really lose sight of for a minute, is that what makes these tools work is the marriage with a human brain. The curation and narrative aspects of creating things, you need a vision to bring it all together. The place that this tech is going to go is when the tech can enable that human vision in a meaningful way.”

As a result, the Bureau is only operating for a month. The game will end this week: as a free experience that takes real labour to continue operating, it can’t run indefinitely. (There’s also the cost of the AI generation itself, although at around $1,000 for the month-long operation, it’s a comparatively small part of the pie.) It may come back in the future, but if you want to experience it before then, the next few days are your last chance.

Maliciously harmful

A child using a laptop. Photograph: Peter Byrne/PA

The UK’s online safety bill is returning to parliament, under its fourth prime minister and seventh DCMS secretary since it was first proposed, back when it was the Online Harms White Paper. That many fingerprints on the bill have left it a monster piece of legislation, bundling in the obsessions of every wing of the Tory party at once.

That sort of triangulation, I’ve written before, has left the bill in a sort of shit Goldilocks zone: one where neither child protection groups nor free speech advocates think it’s a good bill. That either proves that it’s perfectly balanced, or that it’s bad.

It wouldn’t do to simply reintroduce Boris Johnson’s legislation, though, and so a new prime minister means a new version of the bill. On Friday news came that two new offences would be introduced to UK law. One, tackling “downblousing”, cleans up an accidental loophole in an earlier effort to ban “upskirting”. That law mentioned surreptitious photography of “genitals or buttocks”, and so accidentally left some kinds of voyeurism in the clear.

Another, taking aim at explicit “deepfakes”, is interesting on a deeper level. The plan is to outlaw the nonconsensual sharing of “manufactured intimate images”, targeted at images that have been generated using AI to show real people in explicit situations. But distinguishing between a deepfake and an illustration is surprisingly hard: is there a point at which a pencil drawing becomes realistic enough that someone could be sent to jail for it? Or is the act of using a computer to generate the image specifically part of the offence? We’ll find out when the text of the bill is released at some point in the next week.

On Monday evening there was another, more farcical, change. Bowing to pressure from the libertarian wing of the Conservative party, the offence of “harmful communications” has been dropped from the bill (although two similar offences, covering “false” and “threatening” communications, have been retained). The clause had become a lightning-rod for criticism, with opponents arguing that it was “legislating for hurt feelings” and an attempt to ban “offensive speech”.

Why farcical? Because to remove the harmful communications offence, the government has also cancelled plans to strike off the two offences it was due to replace – parts of the Malicious Communications Act and Section 127 of the Communications Act, which are far broader than the ban on harmful communications. The harmful communications offence required a message to cause “serious distress”; the Malicious Communications Act requires only “distress”, while the Communications Act is even softer, banning messages sent “for the purpose of causing annoyance, inconvenience or needless anxiety”.

The problem is that these offences, while horrendously broad, are also the only way to tackle very real abuse – and so if they aren’t being replaced with a similar, narrower offence, it could hinder attempts to seek justice for harrowing online harassment.

At the time of publication, it’s not yet clear whether the MPs who pushed for the abolition of the harmful communications offence have realised that their wish has been granted in the most censorious manner possible.

If this email caused you annoyance, inconvenience or needless anxiety, please be assured it wasn’t my intent.



Microsoft’s Activision Blizzard acquisition will harm UK gamers, says watchdog


The UK’s competition regulator has ruled that Microsoft’s $68.7bn (£59.6bn) deal to buy Activision Blizzard, the video game publisher behind hits including Call of Duty, will result in higher prices and less competition for UK gamers.

The Competition and Markets Authority (CMA), which launched an in-depth investigation in September after raising a host of concerns about the biggest takeover in tech history, said the deal would weaken the global rivalry between Microsoft’s Xbox and Sony’s PlayStation consoles.

“Our job is to make sure that UK gamers are not caught in the crossfire of global deals that, over time, could damage competition and result in higher prices, fewer choices, or less innovation,” said Martin Coleman, the chair of the independent panel of experts conducting the investigation. “We have provisionally found that this may be the case here.”

The CMA said possible remedies to address competition issues included selling or spinning off the business that makes Call of Duty, or the entire Activision arm of the combined Activision Blizzard.

However, the watchdog acknowledged that a spin-off into a standalone operation would mean the new business “may not have sufficient assets and resources to operate as an independent entity”.

While the CMA did not completely rule out measures short of a divestiture – for example a “behavioural remedy” such as an iron-clad licence to guarantee distribution of Call of Duty to Sony – it said a structural solution such as a partial sale, spin-off or completely blocking the deal was its preferred option.

“We are of the initial view that any behavioural remedy in this case is likely to present material effectiveness risks,” it said. “At this stage, the CMA considers that certain divestitures and/or prohibition are, in principle, feasible remedies in this case.”

The CMA said there was a risk under the deal that Microsoft could try to make Call of Duty, Activision’s flagship game and one of the most popular and profitable global franchises of all time, exclusively available to Xbox console owners.

Last year, Microsoft attempted to allay competition concerns by saying it would offer its rival Sony a 10-year licence to ensure the title stayed on PlayStation consoles.

However, following its $7.5bn acquisition in 2020 of ZeniMax, the parent of studios behind games including The Elder Scrolls, Fallout and Doom, Microsoft moved to make some titles exclusive to its own devices.

The company had previously assured European regulators that it had no incentive to make such a move.

“Microsoft would find it commercially beneficial to make Activision’s games exclusive to its own consoles, or only available on PlayStation under materially worse conditions,” the CMA said. “This strategy, of buying gaming studios and making their content exclusive to Microsoft’s platforms, has been used by Microsoft following several previous acquisitions of games studios.”

The CMA said the end result could be that gamers would see “higher prices, reduced range, lower quality, and worse service in gaming consoles over time”.


Microsoft said that it believed its 10-year guarantee to continue to offer Call of Duty to rivals on equal terms would be enough to allay competition concerns.

“We are committed to offering effective and easily enforceable solutions that address the CMA’s concerns,” said Rima Alaily, the corporate vice-president and deputy general counsel at Microsoft. “Our commitment to grant long-term 100% equal access to Call of Duty to Sony, Nintendo, Steam and others preserves the deal’s benefits to gamers and developers and increases competition in the market.”

The CMA’s ruling is of critical importance as it comes before the publication of official findings of investigations conducted by the European Commission and the US Federal Trade Commission, which in December launched legal action to block the deal.

“We hope between now and April we will be able to help the CMA better understand our industry,” said a spokesperson for Activision Blizzard. “To ensure they can achieve their stated mandate to promote an environment where people can be confident they are getting great choices and fair deals, where competitive, fair-dealing business can innovate and thrive, and where the whole UK economy can grow productively and sustainably.”

Microsoft’s all-cash offer for Activision Blizzard, which also publishes global hits such as World of Warcraft and Candy Crush, dwarfs its previous biggest deal, the $26bn takeover of LinkedIn in 2016.

The purchase would result in the Xbox maker becoming the world’s third-biggest gaming company by revenue behind China’s Tencent and Japan’s Sony, the maker of PlayStation games consoles. It is also the biggest deal in tech history, eclipsing the $67bn paid by Dell to buy the digital storage company EMC in 2015.


Could RISC-V become a force in HPC? We talk to the experts


Analysis: The RISC-V architecture looks set to become more prevalent in the high performance computing (HPC) sector, and could even become the dominant architecture, at least according to some technical experts in the field.

Meanwhile, the European High Performance Computing Joint Undertaking (EuroHPC JU) has just announced a project aimed at the development of HPC hardware and software based on RISC-V, with plans to deploy future exascale and post-exascale supercomputers based on this technology.

RISC-V has been around for at least a decade as an open source instruction set architecture (ISA), while actual silicon implementations of the ISA have been coming to market over the past several years.

Among the attractions of this approach are that the architecture is not only free to use, but can also be extended, meaning that application-specific functions can be added to a RISC-V CPU design, and accessed by adding custom instructions to the standard RISC-V set.

The latter could prove to be a driving factor for broader adoption of RISC-V in the HPC sector, according to Aaron Potler, Distinguished Engineer at Dell Technologies.

“There’s synergy and growing strength in the RISC-V community in HPC,” Potler said, “and so RISC-V really does have a very, very good chance to become more prevalent on HPC.”

Potler was speaking at a Dell HPC Community online event, outlining perspectives from Dell’s Office of the Chief Technology and Innovation Officer.

However, he conceded that to date, RISC-V has not really made much of a mark in the HPC sector, largely because it wasn’t initially designed with that purpose in mind, but that there is “some targeting now to HPC” because of the business model it represents.

He made a comparison of sorts with Linux, which, like RISC-V, started off as a small project, but which grew and grew in popularity because of its open nature (it was also free to download and run, as Potler acknowledged).

“Nobody would have thought then that Linux would run on some high-end computer. When the TOP500 list came out in 1993, there was only one Linux system on it. Nowadays, all the systems on the TOP500 list run Linux. Every single one of them. It’s been that way for a few years now,” he said.

If Linux wasn’t initially targeting the HPC market, but was adopted for it because of its inherent advantages, perhaps the same could happen with RISC-V, if there are enough advantages, such as it being an open standard.

“If that’s what the industry wants, then the community is going to make it work, it’s gonna make it happen,” Potler said.

He also made a comparison with the Arm architecture, which eventually propelled Fujitsu’s Fugaku supercomputer to the number one slot in the TOP500 rankings, and which notably accomplished this by extending the instruction set to support the 512-bit Scalable Vector Extension (SVE) units in the A64FX processor.

“So why wouldn’t a RISC-V-based system be number one on the TOP500 someday?” he asked.

There has already been work done on RISC-V instructions and architecture extensions relating to HPC, Potler claimed, especially those for vector processing and floating point operations.

All of this means that RISC-V has potential, but could it really make headway in the HPC sector, which once boasted systems with a variety of processor architectures but is now dominated almost entirely by x86 and Arm?

“RISC-V does have the potential to become the architecture of choice for the HPC market,” said Omdia chief analyst Roy Illsley. “I think Intel is losing its control of the overall market and the HPC segment is becoming more specialized.”

Illsley pointed out that RISC-V’s open-source nature means that any chipmaker can produce RISC-V-based designs without paying royalties or licensing fees, and that it is supported by many silicon makers as well as by open-source operating systems.

Manoj Sukumaran, Principal Analyst for Datacenter Compute & Networking at Omdia, agreed, saying that the biggest advantage for RISC-V is that its non-proprietary architecture lines up well with the technology sovereignty goals of various countries. “HPC capacity is a strategic advantage to any country, and it is an inevitable part of a country’s scientific and economic progress. No country wants to be in a situation like China or Russia, and this is fueling RISC-V adoption,” he claimed.

RISC-V is also a “very efficient and compelling instruction set architecture” and the provision to customize it for specific computing needs with additional instructions makes it agile as well, according to Sukumaran.

The drive for sovereignty, or at least greater self-reliance, could be one motive behind the call from the EuroHPC JU for a partnership framework to develop HPC hardware and software based on RISC-V as part of an EU-wide ecosystem.

This is expected to be followed up by an ambitious plan of action for building and deploying exascale and post-exascale supercomputers based on this technology, according to the EuroHPC JU.

It stated in its announcement that the European Chips Act identified RISC-V as one of the next-generation technologies where investment should be directed in order to preserve and strengthen EU leadership in research and innovation. This will also reinforce the EU’s capacity for the design, manufacturing and packaging of advanced chips, and the ability to turn them into manufactured products.

High-performance RISC-V designs already exist from chip companies such as SiFive and Ventana, but these are typically either designs that a customer can take and have manufactured by a foundry company such as TSMC, or available as a chiplet that can be combined with others to build a custom system-on-chip (SoC) package, which is Ventana’s approach.

Creating a CPU design with custom instructions to accelerate specific functions would likely be beyond the resources of most HPC sites, but perhaps not a large user group or forum. However, a chiplet approach could de-risk the project somewhat, according to IDC Senior Research Director for Europe, Andrew Buss.

“Rather than trying to do a single massive CPU, you can assemble a SoC from chiplets, getting your CPU cores from somewhere and an I/O hub and other functions from elsewhere,” he said, although he added that this requires standardized interfaces to link the chiplets together.

But while RISC-V has potential, the software ecosystem is more important, according to Buss. “It doesn’t matter what the underlying microarchitecture is, so long as there is a sufficient software ecosystem of applications and tools to support it,” he said.

Potler agreed with this point, saying that “One of the most critical parts for HPC success is the software ecosystem. Because we’ve all worked on architectures where the software came in second, and it was a very frustrating time, right?”

Developer tools, especially compilers, need to be “solid, they need to scale, and they need to understand the ISA very well to generate good code,” he said.

This also plays a part in defining custom instructions, as this calls for a profiler or other performance analysis tools to identify time-consuming sequences of code in the applications in use, and to gauge whether specialized instructions could accelerate them.

“So if I take these instructions out, I need a simulator that can simulate this [new] instruction. If I put it in here and take the other instructions out, the first question is, are the answers correct? Then the other thing would be: does it run enough to make it worthwhile?”

Another important factor is whether the compiler could recognize the sequences of code in the application and replace it with the custom instruction to boost performance, Potler said.
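As a toy illustration of the “does it run enough to make it worthwhile?” question, the sketch below counts how often a candidate instruction sequence shows up in a dynamic trace and estimates the best-case cycle saving if that sequence were fused into a single custom instruction. The trace, the candidate sequence and the one-cycle-per-instruction cost model are all invented for the example.

```python
# Toy model of evaluating a candidate custom instruction: count how often
# an instruction sequence occurs in a dynamic trace and estimate the cycles
# saved if each occurrence collapsed into one fused instruction.
# Trace, sequence and cycle costs are invented for illustration.

def count_occurrences(trace: list[str], seq: list[str]) -> int:
    """Count non-overlapping occurrences of seq within trace."""
    count, i, n = 0, 0, len(seq)
    while i + n <= len(trace):
        if trace[i:i + n] == seq:
            count += 1
            i += n
        else:
            i += 1
    return count

trace = ["lw", "mul", "add", "sw", "lw", "mul", "add", "sw", "addi", "bne"]
candidate = ["mul", "add"]  # e.g. a sequence a fused multiply-add could replace

hits = count_occurrences(trace, candidate)
baseline = len(trace)                           # assume one cycle per instruction
fused = baseline - hits * (len(candidate) - 1)  # each hit collapses to one instr
saving = 100 * (baseline - fused) / baseline
print(f"{hits} hits; {baseline} -> {fused} cycles ({saving:.0f}% saved at best)")
```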

“You also see that extensions to the instruction set architecture will provide performance benefits to current and future HPC applications, whatever they may be,” he added.

However, Buss warned that even if there is a great deal of interest in RISC-V, it will take time to get there for users at HPC sites.

“There’s nothing stopping RISC-V, but it takes time to develop the performance and power to the required level,” he said, pointing out that it took the Arm architecture over a decade to get to the point where it could be competitive in this space.

There was also the setback of Intel pulling its support for the RISC-V architecture last month, after earlier becoming a premier member of RISC-V International, the governing body for the standard, and pledging to offer validation services for RISC-V IP cores optimized for manufacturing in Intel fabs.


How to improve the consumer offboarding experience


We often think about the start and middle points of the consumer experience but how often do we think about the end? In this article, adapted from his book Endineering, Joe Macleod, veteran product developer, explains how businesses can productively and meaningfully disengage with consumers.

Businesses often fail to engage in purposeful and proactive methods to end consumer product or service lifecycles. The consequence is a failed approach to endings that is damaging customer relationships, businesses and the environment.

What if the end isn’t all bad? What if there is actually much to be gained at the end? I’ve been working on endings in the consumer lifecycle for over a decade – researching, publishing books, speaking around the world at conferences, and working with some of the world’s biggest companies.

Here are some suggestions on how to achieve positive offboarding experiences for customers.

Consciously connected

The consumer experience should feel similar at both the beginning and the end of the consumer lifecycle.

Currently, many offboarding experiences are delivered without care or interest from the provider. Further still, offboarding is sometimes delivered by entirely different groups, for example municipal organisations such as waste management, or health and safety representatives.

The same narrative voice should offboard the consumer from the experience, with similar principles and tone of voice as when they were being onboarded.

Emotional triggers

The emotional richness delivered at onboarding helps consumers to engage. These feelings should be matched at offboarding, inspiring engagement and interest from all parties.

Being present as a brand both emotionally and actively is important at the end. Currently many brands seem to struggle with appearing authentic.

Emotional triggers should offer an opportunity for the consumer to reflect personally on the experience gained with the brand.


Endineering by Joe Macleod. Image: Joe Macleod

Measurable and actionable

Consumers should have a clear, measurable understanding of the impact of their consumption at offboarding. This information should be delivered in a way that enables the consumer to reflect upon their involvement in consumerism and be empowered to do something about it.

Businesses and governments around the world need to build and agree upon common measuring systems that are easily understood by the consumer.

This would establish a shared language for the consumer and the provider to communicate about the status of lingering assets, whether these are digital, service or physical product endings.

Identify and bond consumer and provider

Society needs to attach personal identity to consumerism. Consumers should be recognised as perpetrators of their past consumer activity.

Currently, the physical fallout of consumption is too easily relinquished, shipped overseas or left in the atmosphere for the most vulnerable in the world and future generations to grapple with.

However, the consumer shouldn’t be abandoned to deal with this responsibility alone. It should be shared with the provider, tied to the neutralising of assets.

Businesses need to move beyond relationships limited to a ‘good usage experience’ and start to be proud partners with consumers working towards a healthier conclusion.

Neutralising the negative consequences of consumption

Following on from the previous point, neutralising the assets of consumption should be the joint responsibility of both consumer and provider. People understand how some products, vegetable matter for example, are neutralised through organic decay. Other assets, like recycled plastics, appear to have smooth, accessible routes to offboarding courtesy of municipal recycling bins and collections.

But it’s what happens afterwards that is less visible. Plastic often gets shipped to vulnerable countries where people who are unprotected by safety laws process the material. Although the plastic material might eventually be neutralised, the consequences have knock-on effects.

Businesses, consumers and wider society need to see the issue of neutralising assets as an integral consumer experience.

For example, one simple improvement would be changing what is communicated at the end of product life. Rather than saying a product is ‘recyclable’, provide details such as, ‘This product is dismantled by x method, then gets recycled by x process, at this place in x country. This process is completed within this amount of time and costs this amount of carbon, which is then off-set’.

Timely and attentive

Businesses need to intervene at the end of the lifecycle with an active and attentive attitude. If the consumer experience is left to linger on beyond a planned ending, the assets become outdated, obsolete and risk falling out of control into the wider environment. This has become normal in recent decades, thus promoting indifference about unused products, accounts and subscriptions.

Businesses should redefine timeframes and styles of engagement with the consumer. In the short term, they will need to engage actively with the consumer to put an end to unused assets that linger in the physical, digital and service landscapes. This will seem counterintuitive to a business culture that has, in the past, benefitted from overlooking endings. But, in the long term, businesses that get this right will benefit from deeper, more loyal partnerships based on trusted re-engagement over years.

Strategic approaches will become more sophisticated, not only with regard to the consumer experience and long-term impact, but also as a means of collaboration to improve consumerism.

By Joe Macleod

Joe Macleod has experience in product development across various industries including leading e-communications and digital companies. Now he trains business influencers, policy makers, designers, product developers and individuals across diverse industries about the need for ‘good endings’ and how to achieve them. His book, Endineering: Designing consumption lifecycles that end as well as they begin, is available from online booksellers and www.andend.co.

