
Is this by Rothko or a robot? We ask the experts to tell the difference between human and AI art


The year 2022 was when AI-generated images went viral. Online, you may have come across very realistic yet suspiciously improbable images of, say, an astronaut riding a horse through space or an avocado doubling as an armchair.

Numerous new generators – including Dall-E, Midjourney and Stable Diffusion – offer anyone with an internet connection the chance to conjure up their own strange apparition, simply by typing in a “prompt” for the AI. (For example, “astronaut astride horse on Mars”. Or, for this article, “Mark Rothko Abstract Expressionist oil painting” – yes, the image above isn’t a real Rothko.) The possibilities have been endless, the opportunity for meme-making infinite.
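For readers who want to try this programmatically rather than through a web interface, the sketch below shows roughly what issuing such a prompt looks like in code. It assumes OpenAI's Python client and an API key in the OPENAI_API_KEY environment variable; the model name and image size are illustrative choices, not details taken from this article.

# Minimal sketch of prompt-driven image generation, assuming OpenAI's
# Python client (pip install openai) and an API key exported as
# OPENAI_API_KEY. The model name and size here are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="Mark Rothko Abstract Expressionist oil painting",
    n=1,
    size="1024x1024",
)

# The service returns a short-lived URL for each generated image.
print(response.data[0].url)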

It should not be surprising that a great many artists who have spent a lifetime honing their skills are a little put out by this latest disruption. Are companies going to keep hiring designers when they can produce prototypes themselves for free? Will budgets stretch to include animators if their hand can be imitated from a simple text description? Advocates of AI have insisted that creatives should have nothing to worry about and can adapt their process to incorporate or work around technological advances, much like the modernists did with the invention of photography.

But if those historical greats were alive and working today, would they also be watching their backs? And could a computer ever hope to reproduce the emotional depth that gives great art its charm and meaning?

To find out, we set a challenge for three art experts: Bendor Grosvenor, art historian and presenter of the BBC’s Britain’s Lost Masterpieces; JJ Charlesworth, art critic and editor of ArtReview; and Pilar Ordovas, founder of the Mayfair gallery Ordovas. Each was invited to look at pairs of artworks of a similar style and period over Zoom to see if they could tell which was generated by a machine. All three admitted to finding it tougher than expected …

Nineteenth-century landscape

(Left) Homer Watson, Down in the Laurentides (1882). (Right) An image generated using Dall-E with the prompt “Landscape oil painting Constable Claude Corot”. Composite: Homer Watson/National Gallery of Canada; Image generated by Jo Lawson-Tancred and Philip Booth

Bendor Grosvenor “When authenticating a painting, composition is usually the last thing I would look at, after brushstrokes and condition. The one on the left looks like New Zealand, with the cows a bit plonked in and the grass not particularly well painted – but I quite like the way the light falls on the hills. There’s something about the picture on the right that looks a bit too good to be true. It’s got the bright, contrasty clouds of a Constable and the winding river reminds me of the French Barbizon school. If you asked a computer to make a Constable, that’s probably what it would come up with.”

Verdict: correct “I think the AI image is quite impressive, actually. It’s like a blend between a Corot and a Constable. I can’t even draw a smiley face so take my artistic input with some scepticism, but I would say it needs a figure or a little boat to give it a focal point.”

JJ Charlesworth “Landscape can mean a lot of things from a lot of places, and there’s also the matter of whether it’s good. On the painting on the right, there’s something a bit confused at the edges and I’m not sure where the river goes … but then some painters wouldn’t have been too bothered about that. The left one seems to recall the American grand landscape painters. The foliage is weird at the front but the mountains have a humidity haze, there’s the cows and a little ship puttering away. My hunch is that the left one is real, by a very conventional painter who understands the codes of the genre.”

Verdict: correct “The modernist artists privileged compositional coherence with a degree of lyricism. It’s easy for critics to detect when someone is doing it badly, but the machine doesn’t notice. That tree in the middle is clumsy and I don’t know whether a painter interested in how to put together a picture would have done it.”

Pilar Ordovas “In real life, I would always look at the surface and the application of paint and would never judge an artwork from an image on Zoom. The one on the left doesn’t feel real to me but, then again, I’m sure there is a landscape that looks like this somewhere in the world. With the one on the right, I can feel the water, the trees and the air, whereas the other painting just feels flat to me and pixelated, so I think it’s fake.”

Verdict: wrong “I wouldn’t normally give judgment on a painting without viewing it in real life. I look at the right-hand picture and think of Corot and a number of other artists, so that’s what it’s done, right?”

Abstract expressionism

(Left) Gary Wragg, Chien 1 (1983). (Right) An image generated using Dall-E with the prompt “abstract oil painting by Tapies Dubuffet”. Composite: Gary Wragg; Image generated by Jo Lawson-Tancred and Philip Booth

BG “Even if these were two genuine works I wouldn’t know where to begin. I can’t think of anything interesting to say about either of them. One is probably an extremely famous thing that I should know about but I’m so rubbish on abstract art. I would go for the AI one being the squiggly one on the left because I feel there’s something a little bit digitised about some of those scratch marks to the right. The picture on the right feels like the product of … oh, I don’t know! Yes, I’ll stick with the one on the left being the AI.”

Verdict: wrong “Well, the one on the right is better than the one on the left in my opinion. I don’t really have that kind of thing on my wall but if you offered me the choice, I would actually go for the AI one. There’s something slightly pleasing about the colours and shapes, a bit like a Ben Nicholson.”

JC “Abstraction is obviously very anti-conventional and isn’t anchored in figuration, so you have to assess each one on its own merits. There’s actually more diversity in the left one, there’s a deployment of these scratch marks that is quite complicated … Put it this way, I find myself more drawn to it. The elements are speaking to each other more so it seems to have a motive. The right-hand picture is pleasant enough but there is less structuring principle so, if it were by a human, I would struggle to care much about it.”

Verdict: correct “With this genre it’s very hard and the idea of cohering logic is important. Most abstract paintings come from a debate about what’s necessary in the mind of the artist, rather than what’s arbitrary. The one on the left seemed more subtle, but could be simulated from having seen too many Cy Twomblys.”

PO “I’m not sure about the shapes and the lines in the image on the left, but it does make me think of very early Pollock, though less colourful. The one on the right could relate to many early works from some of the abstract expressionists or, perhaps, Tancredi and certain Italian artists from the 1950s and 60s. I’m sure the AI is looking at these existing works in order to create something based on them but I would still say the one on the right is real.”

Verdict: wrong “When it’s not by a particular artist you know very well, it’s much harder to determine what feels wrong. With a specific artist you look at how they worked at a particular time, their colours, their compositions and what the feel of it should be. If it could be any artist, it’s a bit random.”

Dutch still life

(Left) Ambrosius Bosschaert, Still Life with Peaches. (Right) An image generated using Stable Diffusion with the prompt “Dutch old master still life flowers in vase on table dark black Bosschaert”. Composite: Ambrosius Bosschaert/National Gallery Prague; Image generated by Jo Lawson-Tancred and Philip Booth

BG “The plate of apples on the left looks like an Adriaen Coorte – quite sophisticated and I like the reflection on the plate. The flowers look quite simplistic and the petals don’t quite work but I think you’re playing a bit of a trick on me here … because you can get still lifes from the period that look quite clunky and that’s part of their appeal. So it’s very tricky! The image on the right is full of what you would want to describe as deficiencies: the tablecloth looks like a bit of folded-up cardboard. However, I think it might be genuine because I can see cracks on the surface that I don’t think the AI would put there. If it has, it’s very, very clever.”

Verdict: wrong “Really? Wow, I didn’t know it could do that. Well that’s very good.”

JC “The right one looks familiar. You get the over-stylised flowers in quite a number of still lifes. The bland apples, or pears, whatever they are, on the left … I think there’s a rather clumsy idea of which side you put the red on and I find them rather lifeless and dull. There is too much attention on the reflection on the plate, the colouration seems wrong and I’m not sure why the leaves are so decayed. It could be an artist who nobody bought very much of because he was depressing. On a snap judgment, I’m inclined to say that one is fake.”

Verdict: wrong “Well, there are a few alarm bells. There’s a slightly confused moment where that red flower on the left is curling off a stem that seems to connect with the blue one. These paintings were aspiring to realism before the existence of photographic realism, so there is often a peculiarity in pre-photographic painting.”

PO “The picture on the right looks more like a Dutch still life for me with the flowers. The colours are not quite right but it could be a terrible reproduction. The one on the left looks more Spanish than Dutch to me. The imperfect leaves on the pear are really good, as is the shadow on the plate, so I think that one is real. Still lifes are all about symbolism and the fragility of life, with the wilted leaves sort of eaten up. It relates more to what the artist would have been interested in then.”

Verdict: correct “The work on the right just feels empty of all the meaning that you would expect to see in this period. Still life is not just a beautiful vase of flowers or fruits, it’s actually laden with feeling. With abstract art it’s much more random, so with that it may be harder to make a judgment.”

Impressionist scene

(Left) Édouard Manet, A Café on the Place du Théâtre-Français. (Right) An image generated using Stable Diffusion with the prompt “Impressionist street Paris Manet Pissarro Caillebotte distant figures dappled light oil painting”. Composite: Édouard Manet/The Burrell Collection; Image generated by Jo Lawson-Tancred and Philip Booth

BG “The image on the left, from what I can see, is quite spontaneous and creative. I can see sketchiness and the canvas is showing through, whereas the one on the right looks a bit glossy. I’m a little suspicious about the cobbles, they look off, as does the tree – or the lamp-post, is it a lamp-post? – on the far right … I want to say the image on the left is genuine but who is the woman talking to? It looks like a splodge. I’ll go with it being human-made just because it feels a little bit more rough and ready.”

Verdict: correct “Well, I should know what a Manet looks like, but at least I figured out it was by a human! The picture on the right is almost a little bit too good to be true, like the computer’s trying a bit too hard to do a Pissarro or something like that.”

JC “The one on the right is too orderly; it strikes me that the depth is a little bit obvious and the trees are too repetitive. It feels like an image that understands 3D modelling rather than looking. The one on the left has all the curious incoherence of impressionist preoccupations – blurred distance, indifference … These are human values that have a certain pathos to them and I just don’t get that in the other one. Typically, the street scene was about time, place and boredom, but this seems to me to be prosaic; there’s no attention to anything and quite a banal mood.”

Verdict: correct “Creating a sense of attention is not simply a matter of understanding figures and orchestrating them formally; there are also these quite intangible issues of mood, place and emotion. That doesn’t necessarily mean it couldn’t have been an image generated by an AI trained on Manet.”

PO “With impressionism, the surface would tell you everything. However, in the work on the right the colours look off to me and the whites are really, really white. You can hardly see the faces in the foreground. Sometimes with avant garde art you do get odd colours but they make sense and have emotion. This has no depth and the figures look a bit floaty. The composition on the left is very different. It looks like it has a pastelly finish, which may be the AI imitating pastel, but there is something that rings more true.”

Verdict: correct “It’s interesting how in the picture on the right the two figures near the front are almost faceless, but in the image on the left you see a face, so it feels human. I am surprised; I thought these comparisons were going to be more obvious but they weren’t at all in some cases.”


Microsoft’s Activision Blizzard acquisition will harm UK gamers, says watchdog


The UK’s competition regulator has ruled that Microsoft’s $68.7bn (£59.6bn) deal to buy Activision Blizzard, the video game publisher behind hits including Call of Duty, will result in higher prices and less competition for UK gamers.

The Competition and Markets Authority (CMA), which launched an in-depth investigation in September after raising a host of concerns about the biggest takeover in tech history, said the deal would weaken the global rivalry between Microsoft’s Xbox and Sony’s PlayStation consoles.

“Our job is to make sure that UK gamers are not caught in the crossfire of global deals that, over time, could damage competition and result in higher prices, fewer choices, or less innovation,” said Martin Coleman, the chair of the independent panel of experts conducting the investigation. “We have provisionally found that this may be the case here.”

The CMA said possible remedies to address competition issues included selling or spinning off the business that makes Call of Duty, or the entire Activision arm of the combined Activision Blizzard.

However, the watchdog acknowledged that a spin-off into a standalone operation would mean the new business “may not have sufficient assets and resources to operate as an independent entity”.

While the CMA did not completely rule out measures short of a divestiture – for example a “behavioural remedy” such as an iron-clad licence to guarantee distribution of Call of Duty to Sony – it said a structural solution such as a partial sale, spin-off or completely blocking the deal was its preferred option.

“We are of the initial view that any behavioural remedy in this case is likely to present material effectiveness risks,” it said. “At this stage, the CMA considers that certain divestitures and/or prohibition are, in principle, feasible remedies in this case.”

The CMA said there was a risk under the deal that Microsoft could try to make Call of Duty, Activision’s flagship game and one of the most popular and profitable global franchises of all time, exclusively available to Xbox console owners.

Last year, Microsoft attempted to allay competition concerns by saying it would offer its rival Sony a 10-year licence to ensure the title stayed on PlayStation consoles.

However, following its $7.5bn acquisition in 2020 of ZeniMax, the parent of studios behind games including The Elder Scrolls, Fallout and Doom, Microsoft moved to make some titles exclusive to its own devices.

The company had previously assured European regulators that it had no incentive to make such a move.

“Microsoft would find it commercially beneficial to make Activision’s games exclusive to its own consoles, or only available on PlayStation under materially worse conditions,” the CMA said. “This strategy, of buying gaming studios and making their content exclusive to Microsoft’s platforms, has been used by Microsoft following several previous acquisitions of games studios.”

The CMA said the end result could be that gamers would see “higher prices, reduced range, lower quality, and worse service in gaming consoles over time”.


Microsoft said that it believed its 10-year guarantee to continue to offer Call of Duty to rivals on equal terms would be enough to allay competition concerns.

“We are committed to offering effective and easily enforceable solutions that address the CMA’s concerns,” said Rima Alaily, the corporate vice-president and deputy general counsel at Microsoft. “Our commitment to grant long-term 100% equal access to Call of Duty to Sony, Nintendo, Steam and others preserves the deal’s benefits to gamers and developers and increases competition in the market.”

The CMA’s ruling is of critical importance as it comes before the publication of official findings of investigations conducted by the European Commission and the US Federal Trade Commission, which in December launched legal action to block the deal.

“We hope between now and April we will be able to help the CMA better understand our industry,” said a spokesperson for Activision Blizzard. “To ensure they can achieve their stated mandate to promote an environment where people can be confident they are getting great choices and fair deals, where competitive, fair-dealing business can innovate and thrive, and where the whole UK economy can grow productively and sustainably.”

Microsoft’s all-cash offer for Activision Blizzard, which also publishes global hits such as World of Warcraft and Candy Crush, dwarfs its previous biggest deal, the $26bn takeover of LinkedIn in 2016.

The purchase would result in the Xbox maker becoming the world’s third-biggest gaming company by revenue behind China’s Tencent and Japan’s Sony, the maker of PlayStation games consoles. It is also the biggest deal in tech history, eclipsing the $67bn paid by Dell to buy the digital storage company EMC in 2015.


Could RISC-V become a force in HPC? We talk to the experts


Analysis: The RISC-V architecture looks set to become more prevalent in the high performance computing (HPC) sector, and could even become the dominant architecture, at least according to some technical experts in the field.

Meanwhile, the European High Performance Computing Joint Undertaking (EuroHPC JU) has just announced a project aimed at the development of HPC hardware and software based on RISC-V, with plans to deploy future exascale and post-exascale supercomputers based on this technology.

RISC-V has been around for at least a decade as an open source instruction set architecture (ISA), while actual silicon implementations of the ISA have been coming to market over the past several years.

Among the attractions of this approach are that the architecture is not only free to use, but can also be extended, meaning that application-specific functions can be added to a RISC-V CPU design, and accessed by adding custom instructions to the standard RISC-V set.
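To give a flavour of what adding a custom instruction involves at the lowest level, the sketch below packs a hypothetical R-type instruction into the “custom-0” opcode slot that the RISC-V specification reserves for vendor extensions. The field layout follows the base ISA; the funct values and register numbers are made-up assumptions, purely for illustration.

# Sketch: encoding a hypothetical R-type custom instruction for RISC-V.
# The "custom-0" major opcode (0b0001011) is reserved by the spec for
# vendor extensions; the funct3/funct7 values below are made up.

def encode_rtype(opcode, rd, funct3, rs1, rs2, funct7):
    """Pack the standard R-type fields into a 32-bit instruction word:
    funct7[31:25] rs2[24:20] rs1[19:15] funct3[14:12] rd[11:7] opcode[6:0]."""
    return (
        (funct7 & 0x7F) << 25
        | (rs2 & 0x1F) << 20
        | (rs1 & 0x1F) << 15
        | (funct3 & 0x7) << 12
        | (rd & 0x1F) << 7
        | (opcode & 0x7F)
    )

CUSTOM0 = 0b0001011  # reserved major opcode for custom extensions

# e.g. a made-up fused operation reading registers x10 and x11 into x12
word = encode_rtype(CUSTOM0, rd=12, funct3=0b001, rs1=10, rs2=11, funct7=0b0000001)
print(f"{word:#010x}")  # the 32-bit word a toolchain would emit for this op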

This extensibility could prove to be a driving factor for broader adoption of RISC-V in the HPC sector, according to Aaron Potler, Distinguished Engineer at Dell Technologies.

“There’s synergy and growing strength in the RISC-V community in HPC,” Potler said, “and so RISC-V really does have a very, very good chance to become more prevalent on HPC.”

Potler was speaking in a Dell HPC Community online event, outlining perspectives from Dell’s Office of the Chief Technology and Innovation Officer.

However, he conceded that to date, RISC-V has not really made much of a mark in the HPC sector, largely because it wasn’t initially designed with that purpose in mind, but that there is “some targeting now to HPC” because of the business model it represents.

He made a comparison of sorts with Linux, which, like RISC-V, started off as a small project but grew and grew in popularity because of its open nature (it was also free to download and run, as Potler acknowledged).

“Nobody would have thought then that Linux would run on some high-end computer. When the TOP500 list came out in 1993, there was only one Linux system on it. Nowadays, all the systems on the TOP500 list run Linux. Every single one of them. It’s been that way for a few years now,” he said.

If Linux wasn’t initially targeting the HPC market, but was adopted for it because of its inherent advantages, perhaps the same could happen with RISC-V, if there are enough advantages, such as it being an open standard.

“If that’s what the industry wants, then the community is going to make it work, it’s gonna make it happen,” Potler said.

He also made a comparison with the Arm architecture, which eventually propelled Fujitsu’s Fugaku supercomputer to the number one slot in the TOP500 rankings, and which notably accomplished this by extending the instruction set to support the 512-bit Scalable Vector Extension (SVE) units in the A64FX processor.

“So why wouldn’t a RISC-V-based system be number one on the TOP500 someday?” he asked.

There has already been work done on RISC-V instructions and architecture extensions relating to HPC, Potler claimed, especially those for vector processing and floating point operations.

All of this means that RISC-V has potential, but could it really make headway in the HPC sector, which once boasted systems with a variety of processor architectures but is now dominated almost entirely by x86 and Arm?

“RISC-V does have the potential to become the architecture of choice for the HPC market,” said Omdia chief analyst Roy Illsley. “I think Intel is losing its control of the overall market and the HPC segment is becoming more specialized.”

Illsley pointed out that RISC-V’s open-source nature means that any chipmaker can produce RISC-V-based designs without paying royalties or licensing fees, and that it is supported by many silicon makers as well as by open-source operating systems.

Manoj Sukumaran, Principal Analyst for Datacenter Compute & Networking at Omdia, agreed, saying that the biggest advantage for RISC-V is that its non-proprietary architecture lines up well with the technology sovereignty goals of various countries. “HPC capacity is a strategic advantage to any country and it is an inevitable part of a country’s scientific and economic progress. No country wants to be in a situation like China or Russia and this is fueling RISC-V adoption,” he claimed.

RISC-V is also a “very efficient and compelling instruction set architecture” and the provision to customize it for specific computing needs with additional instructions makes it agile as well, according to Sukumaran.

The drive for sovereignty, or at least greater self-reliance, could be one motive behind the call from the EuroHPC JU for a partnership framework to develop HPC hardware and software based on RISC-V as part of an EU-wide ecosystem.

This is expected to be followed up by an ambitious plan of action for building and deploying exascale and post-exascale supercomputers based on this technology, according to the EuroHPC JU.

It stated in its announcement that the European Chips Act identified RISC-V as one of the next-generation technologies where investment should be directed in order to preserve and strengthen EU leadership in research and innovation. This will also reinforce the EU’s capacity for the design, manufacturing and packaging of advanced chips, and the ability to turn them into manufactured products.

High-performance RISC-V designs already exist from chip companies such as SiFive and Ventana, but these are typically either designs that a customer can take and have manufactured by a foundry company such as TSMC, or available as a chiplet that can be combined with others to build a custom system-on-chip (SoC) package, which is Ventana’s approach.

Creating a CPU design with custom instructions to accelerate specific functions would likely be beyond the resources of most HPC sites, but perhaps not a large user group or forum. However, a chiplet approach could de-risk the project somewhat, according to IDC Senior Research Director for Europe, Andrew Buss.

“Rather than trying to do a single massive CPU, you can assemble a SoC from chiplets, getting your CPU cores from somewhere and an I/O hub and other functions from elsewhere,” he said, although he added that this requires standardized interfaces to link the chiplets together.

But while RISC-V has potential, the software ecosystem is more important, according to Buss. “It doesn’t matter what the underlying microarchitecture is, so long as there is a sufficient software ecosystem of applications and tools to support it,” he said.

Potler agreed with this point, saying that “One of the most critical parts for HPC success is the software ecosystem. Because we’ve all worked on architectures where the software came in second, and it was a very frustrating time, right?”

Developer tools, especially compilers, need to be “solid, they need to scale, and they need to understand the ISA very well to generate good code,” he said.

This also plays a part in defining custom instructions, as it calls for a profiler or other performance analysis tools to identify time-consuming sequences of code in the applications in use and to gauge whether specialized instructions could accelerate them.

“So if I take these instructions out, I need a simulator that can simulate this [new] instruction. If I put it in here and take the other instructions out, the first question is, are the answers correct? Then the other thing would be: does it run enough to make it worthwhile?”
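Potler’s “does it run enough to make it worthwhile?” question can be roughed out with Amdahl’s law (our illustration, not his): if profiling shows a code sequence accounts for a fraction f of runtime and a custom instruction speeds that sequence up by a factor s, the whole-program speedup is 1 / ((1 - f) + f / s). A minimal sketch with illustrative numbers:

# Back-of-the-envelope check (Amdahl's law) of whether a custom
# instruction is worthwhile; the numbers are illustrative assumptions.

def overall_speedup(hot_fraction, local_speedup):
    """Whole-program speedup when a fraction of runtime is accelerated."""
    return 1.0 / ((1.0 - hot_fraction) + hot_fraction / local_speedup)

# A sequence that is 30% of runtime, accelerated 4x by a custom instruction:
print(f"{overall_speedup(0.30, 4.0):.2f}x")  # ~1.29x overall

# The same 4x on a sequence that is only 5% of runtime barely registers:
print(f"{overall_speedup(0.05, 4.0):.2f}x")  # ~1.04x overall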

Another important factor is whether the compiler could recognize the sequences of code in the application and replace them with the custom instruction to boost performance, Potler said.

“You also see that extensions to the instruction set architecture will provide performance benefits to current and future HPC applications, whatever they may be,” he added.

However, Buss warned that even if there is a great deal of interest in RISC-V, it will take time to get there for users at HPC sites.

“There’s nothing stopping RISC-V, but it takes time to develop the performance and power to the required level,” he said, pointing out that it took the Arm architecture over a decade to get to the point where it could be competitive in this space.

There was also the setback of Intel pulling its support for the RISC-V architecture last month, after earlier becoming a premier member of RISC-V International, the governing body for the standard, and pledging to offer validation services for RISC-V IP cores optimized for manufacturing in Intel fabs.®


How to improve the consumer offboarding experience


We often think about the start and middle points of the consumer experience but how often do we think about the end? In this article, adapted from his book Endineering, Joe Macleod, veteran product developer, explains how businesses can productively and meaningfully disengage with consumers.

Businesses often fail to engage in purposeful and proactive methods to end consumer product or service lifecycles. The consequence is a failed approach to endings that is damaging customer relationships, businesses and the environment.

What if the end isn’t all bad? What if there is actually much to be gained at the end? I’ve been working on endings in the consumer lifecycle for over a decade – researching, publishing books, speaking around the world at conferences, and working with some of the world’s biggest companies.

Here are some suggestions on how to achieve positive offboarding experiences for customers.

Consciously connected

The consumer experience should feel similar at both the beginning and the end of the consumer lifecycle.

Currently, many offboarding experiences are delivered without care or interest from the provider. Further still, offboarding is sometimes delivered by entirely different groups, for example municipal organisations such as waste management, or health and safety representatives.

The same narrative voice should offboard the consumer from the experience, with similar principles and tone of voice as when they were being onboarded.

Emotional triggers

The emotional richness delivered at onboarding helps consumers to engage. These feelings should be matched at offboarding, inspiring engagement and interest from all parties.

Being present as a brand both emotionally and actively is important at the end. Currently many brands seem to struggle with appearing authentic.

Emotional triggers should offer an opportunity for the consumer to reflect personally on the experience gained with the brand.


Endineering by Joe Macleod. Image: Joe Macleod

Measurable and actionable

Consumers should have a clear, measurable understanding of the impact of their consumption at offboarding. This information should be delivered in a way that enables the consumer to reflect upon their involvement in consumerism and be empowered to do something about it.


Businesses and governments around the world need to build and agree upon common measuring systems that are easily understood by the consumer.

This would establish a shared language for the consumer and the provider to communicate about the status of lingering assets, whether these are digital, service or physical product endings.

Identify and bond consumer and provider

Society needs to attach personal identity to consumerism. Consumers should be recognised as perpetrators of their past consumer activity.

Currently, the physical fallout of consumption is too easily relinquished, shipped overseas or left in the atmosphere for the most vulnerable in the world and future generations to grapple with.

However, the consumer shouldn’t be abandoned to deal with this responsibility alone. It should be shared with the provider, tied to the neutralising of assets.

Businesses need to move beyond relationships limited to a ‘good usage experience’ and start to be proud partners with consumers working towards a healthier conclusion.

Neutralising the negative consequences of consumption

Following on from the previous point, neutralising the assets of consumption should be the joint responsibility of both consumer and provider. People understand how some products, vegetable matter for example, are neutralised through organic decay. Other assets, like recycled plastics, appear to have smooth, accessible routes to offboarding courtesy of municipal recycling bins and collections.

But it’s what happens afterwards that is less visible. Plastic often gets shipped to vulnerable countries where people who are unprotected by safety laws process the material. Although the plastic material might eventually be neutralised, the consequences have knock-on effects.

Businesses, consumers and wider society need to see the issue of neutralising assets as an integral consumer experience.

For example, one simple improvement would be changing what is communicated at the end of product life. Rather than saying a product is ‘recyclable’, provide details such as, ‘This product is dismantled by x method, then gets recycled by x process, at this place in x country. This process is completed within this amount of time and costs this amount of carbon, which is then off-set’.
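One way to deliver that level of detail consistently is to treat the disclosure as structured data that is rendered into the consumer-facing statement. The sketch below illustrates the idea in code; every field name and value in it is hypothetical rather than drawn from Macleod’s book.

# Sketch of a structured end-of-life disclosure of the kind described
# above; every field name and value here is hypothetical.
from dataclasses import dataclass

@dataclass
class EndOfLifeDisclosure:
    dismantle_method: str
    recycling_process: str
    facility_location: str
    turnaround_days: int
    carbon_cost_kg: float
    carbon_offset: bool

    def to_label(self) -> str:
        """Render the consumer-facing end-of-life statement."""
        return (
            f"This product is dismantled by {self.dismantle_method}, then "
            f"recycled by {self.recycling_process} at {self.facility_location}. "
            f"The process completes within {self.turnaround_days} days and "
            f"costs {self.carbon_cost_kg} kg of carbon"
            + (", which is then offset." if self.carbon_offset else ".")
        )

print(EndOfLifeDisclosure(
    dismantle_method="manual separation",
    recycling_process="mechanical reprocessing",
    facility_location="a certified plant in the Netherlands",
    turnaround_days=30,
    carbon_cost_kg=2.4,
    carbon_offset=True,
).to_label())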

Timely and attentive

Businesses need to intervene at the end of the lifecycle with an active and attentive attitude. If the consumer experience is left to linger on beyond a planned ending, the assets become outdated, obsolete and risk falling out of control into the wider environment. This has become normal in recent decades, thus promoting indifference about unused products, accounts and subscriptions.

Businesses should redefine timeframes and styles of engagement with the consumer. In the short term, they will need to engage actively with the consumer to put an end to unused assets that linger in the physical, digital and service landscapes. This will seem counterintuitive to a business culture that has, in the past, benefitted from overlooking endings. But, in the long term, businesses that get this right will benefit from deeper, more loyal partnerships based on trusted re-engagement over years.

Strategic approaches will become more sophisticated, not only with regard to the consumer experience and long-term impact, but also as a means of collaboration to improve consumerism.

By Joe Macleod

Joe Macleod has experience in product development across various industries including leading e-communications and digital companies. Now he trains business influencers, policy makers, designers, product developers and individuals across diverse industries about the need for ‘good endings’ and how to achieve them. His book, Endineering: Designing consumption lifecycles that end as well as they begin, is available from online booksellers and www.andend.co.

