Eco-friendly, lab-grown coffee is on the way, but it comes with a catch

Heiko Rischer isn’t quite sure how to describe the taste of lab-grown coffee. This summer he sampled one of the first batches in the world produced from cell cultures rather than coffee beans.

“To describe it is difficult but, for me, it was in between a coffee and a black tea,” said Rischer, head of plant biotechnology at the VTT Technical Research Centre of Finland, which developed the coffee. “It depends really on the roasting grade, and this was a bit of a lighter roast, so it had a little bit more of a tea-like sensation.”

Rischer couldn’t swallow the coffee, as this cellular agriculture innovation is not yet approved for public consumption. Instead, he swirled the liquid around in his mouth and spit it out. He predicts that VTT’s lab-grown coffee could get regulatory approval in Europe and the US in about four years’ time, paving the way for a commercialized product that could have a much lower climate impact than conventional coffee.

The coffee industry is both a contributor to the climate crisis and very vulnerable to its effects. Rising demand for coffee has been linked to deforestation in developing nations, damaging biodiversity and releasing carbon emissions. At the same time, coffee producers are struggling with the impacts of more extreme weather, from frosts to droughts. It’s estimated that half of the land used to grow coffee could be unproductive by 2050 due to the climate crisis.

In response to the industry’s challenges, companies and scientists are trying to develop and commercialize coffee made without coffee beans.

VTT’s coffee is grown by floating cell cultures in bioreactors filled with a nutrient medium. The process requires no pesticides and has a much lower water footprint, said Rischer, and because the coffee can be produced in local markets, it cuts transport emissions. VTT is working on a life cycle analysis of the process. “Once we have those figures, we will be able to show that the environmental impact will be much lower than what we have with conventional cultivation,” Rischer said.

American startups are also working on beanless coffee. In September, Seattle-based Atomo Coffee released what it called the world’s first “molecular coffee” in a one-day online pop-up, charging $5.99 a can.

The startup, which has raised $11.5m, makes its coffee by converting compounds from plant waste into the same compounds contained in green coffee. Ingredients including date seed extracts, chicory root and grape skin, as well as caffeine, are roasted, ground and brewed. This method results in 93% lower carbon emissions and 94% less water use than conventional coffee production, as well as no deforestation, according to Atomo.

Tanks in Atomo’s factory. The food tech startup is making beanless coffee from plant waste. Photograph: Atomo

“The industry has known about the deleterious effects of coffee farming for a long time, whether we’re talking deforestation or major water usage,” said Atomo’s co-founder Jarret Stopforth. “[Before starting Atomo] I was thinking to myself, ‘There’s got to be a better way to do this.’”

Atomo’s facility can produce about 1,000 servings of coffee a day. The goal is to increase that to 10,000 servings a day over the next 12 months, said Stopforth, and in two years to move into a facility that can produce 30m servings of coffee a year. Stopforth says that Atomo will start the initial phase of the new factory build within the next three months.

Alternative coffee companies like Atomo have the potential not only to help tackle the climate crisis but also to benefit the industry more generally, said Sylvain Charlebois, a professor in food distribution and policy at Dalhousie University in Halifax, Nova Scotia.

Take arabica beans, said Charlebois. “You need specific climatic patterns, and it’s much better if you’re more in control in a laboratory environment than just trying to rely on Mother Nature.” Technology can help stabilize production and make it more predictable, he said.

But it’s unclear how many people would be willing to give up conventional coffee for one of its beanless counterparts. A 2019 survey by Dalhousie University found that 72% of Canadians said they would not drink lab-grown coffee.

Maricel Saenz, founder and CEO of San Francisco-based Compound Foods, said she was working to “reinvent” coffee and to show people why doing so matters. Compound Foods, which has secured $4.5m in seed funding, says it recreates coffee farm production in the lab. The startup uses microbes and fermentation technology to grow a variety of flavors and aromas, Saenz said.

Maricel Saenz, founder of Compound Foods, which makes beanless coffee. Photograph: Compound Foods

Preliminary results from a carbon life cycle analysis indicate that the company’s coffee produces a tenth of the greenhouse gas emissions and water use of traditional coffee, Saenz said. She plans to introduce her product by late 2022 and expects pricing to be similar to specialty coffees. “As we improve our processes, we aim to decrease our prices,” she said.

As the population grows and pressure increases on natural resources, Saenz said, “we need to be producing food in more efficient ways, using a lot of the biotechnology and fermentation tools that are now at our disposal.”

But Daniele Giovannucci, president and co-founder of the Committee on Sustainability Assessment, a consortium that focuses on agricultural sustainability, is concerned that scaling up lab-grown coffee could affect the livelihoods of the millions of workers in the traditional coffee industry, especially in countries such as Ethiopia where coffee is central to the economy. “What’s going to happen to all these people?” Giovannucci asked. “What are they going to do, because this is a key cash crop?”

There’s a risk, he said, that lab-grown coffee could create significant socio-economic problems that could drive even greater climate change effects. “It is not clear if, in the end, its net effect may worsen global sustainability, along with many millions of lives.”

Saenz, who is from Costa Rica, a coffee-exporting country, said, “I know many coffee producers, so it’s something that I definitely worry about.” But, she added, “the number one threat that coffee farmers have today is climate change” – whether that’s heat that disrupts ripening times, or unexpected frosts as Brazil experienced in the summer, which severely damaged crops.

Saenz said her company will collaborate with non-profits to support small coffee farmers transitioning to more sustainable agricultural practices, including providing training and crop insurance.

While lab-grown coffee shows real promise, said Charlebois, the politics should not be underestimated, especially as so many farmers depend on conventional methods of producing crops and many of them live in developing economies. “Scalability is not an issue for lab-grown coffee,” he said, “but regulations and general acceptance of the technology will be greater challenges.”

AI should be recognized as an inventor in patent law • The Register

In brief Governments around the world should pass intellectual property laws that grant rights to AI systems, two academics at the University of New South Wales in Australia have argued.

Alexandra George and Toby Walsh, professors of law and AI respectively, believe that failing to recognize machines as inventors could have long-lasting impacts on economies and societies.

“If courts and governments decide that AI-made inventions cannot be patented, the implications could be huge,” they wrote in a comment article published in Nature. “Funders and businesses would be less incentivized to pursue useful research using AI inventors when a return on their investment could be limited. Society could miss out on the development of worthwhile and life-saving inventions.”

Today’s laws pretty much only recognize humans as inventors, with IP rights protecting them from patent infringement. Attempts to overturn these human-centric laws have failed. Stephen Thaler, a developer who insists AI invented his company’s products, has sued patent offices in multiple countries, including the US and UK, to no avail.

George and Walsh are siding with Thaler’s position. “Creating bespoke law and an international treaty will not be easy, but not creating them will be worse. AI is changing the way that science is done and inventions are made. We need fit-for-purpose IP law to ensure it serves the public good,” they wrote.

Dutch police generate deepfake of dead teenager in criminal case

Police have released a video clip in which the face of a 13-year-old boy, who was shot dead outside a metro station in the Netherlands, is swapped onto another person’s body using AI technology.

Sedar Soares was killed in 2003 and officers have never managed to solve the case. With his family’s permission, they generated a deepfake that puts his face on a child playing football in a field, presumably to help jog people’s memories. Police have since received dozens of potential leads, according to The Guardian.

It seems to be the first time AI-generated images have been used to try to solve a criminal case. “We haven’t yet checked if these leads are usable,” said Lillian van Duijvenbode, a Rotterdam police spokesperson.

AI task force advises Congress to fund national computing infrastructure

America’s National Artificial Intelligence Research Resource (NAIRR) Task Force urged Congress to launch a “shared research cyberinfrastructure” to better provide academics with the hardware and data resources needed to develop machine-learning tech.

The playing field of AI research is unequal. State-of-the-art models are often packed with billions of parameters; developers need access to lots of computer chips to train them. It’s why research at private companies seems to dominate, while academics at universities lag behind.

“We must ensure that everyone throughout the Nation has the ability to pursue cutting-edge AI research,” the task force wrote in its report. “This growing resource divide has the potential to adversely skew our AI research ecosystem, and, in the process, threaten our nation’s ability to cultivate an AI research community and workforce that reflect America’s rich diversity — and harness AI in a manner that serves all Americans.”

If AI progress is driven mainly by private companies, other research areas could be left out and underdeveloped. A shared resource would help, the task force argued, by “growing and diversifying approaches to and applications of AI and opening up opportunities for progress across all scientific fields and disciplines, including in critical areas such as AI auditing, testing and evaluation, trustworthy AI, bias mitigation, and AI safety”.

Meta offers musculoskeletal research tech

Researchers at Meta AI released MyoSuite, a set of musculoskeletal models and tasks that simulate the biomechanical movement of limbs for a whole range of applications.

“The more intelligent an organism is, the more complex the motor behavior it can exhibit,” they said in a blog post. “So an important question to consider, then, is — what enables such complex decision-making and the motor control to execute those decisions? To explore this question, we’ve developed MyoSuite.”

MyoSuite was built in collaboration with researchers at the University of Twente in the Netherlands, and aims to help developers working on prosthetics and patient rehabilitation. There’s another potentially useful application for Meta, however: building more realistic avatars that can move more naturally in the metaverse.

The models only simulate the movements of arms and hands so far. Tasks include using machine learning to simulate the manipulation of a die or the rotation of two balls. The application of MyoSuite in Meta’s metaverse is a little ironic, given that touching isn’t allowed there and hands are restricted to deter harassment. ®

A day in the life of a metaverse specialist

Unity’s Antonia Forster discusses her work using AR, VR and everything in between, and why ignoring imposter syndrome is particularly important in the world of emerging technology.

We’ve started hearing a lot about the metaverse and what it means for the future, including how it might affect recruitment and the working world.

But what is it like to actually work within this space? Antonia Forster is an extended reality (XR) technical specialist at video game software development company Unity Technologies, with several years of experience developing XR applications.

In her role at Unity, she works across a variety of industries, from automotive to architecture, creating demos and delivering talks using XR, which encapsulates AR, VR and everything in between.

‘I watch a lot more YouTube tutorials than you might expect’
– ANTONIA FORSTER

If there is such a thing, can you describe a typical day in the job?

It’s challenging to describe a typical day because my days vary so much! I work completely remotely with flexible hours. Most of my team are based in the US while I’m in the UK. In order to manage the time difference, I usually start work around 11am and work until 7pm.

Most of my day is spent on developing content, whether that’s using Unity and C# to code a technical demo, creating video content to help onboard new starters with Unity’s tools, or writing a script for a webinar.

Before the pandemic, a role like mine would involve lots of travel and speaking at conferences. But unfortunately, that’s a little more challenging now.

We use a whole range of tools, from organisational ones like Asana to manage our projects, to Slack and Google Docs to coordinate with each other, to Unity’s own technical tools to create content.

All of Unity’s XR tools fall under my remit, so I might be creating VR content one day and creating an AR mobile app the next. I also use Unity and C# to create my own projects outside of work. For example, I co-created the world’s first LGBTQ+ virtual reality museum, which has been officially selected for Tribeca Film Festival in June 2022 – during Pride!

What types of project do you work on?

At Unity, my role is to create content that helps people understand our tools and get excited about all the different things they enable them to do. For example, for one project I visited a real construction site and used one of Unity’s tools (VisualLive) to see the virtual model of the building overlaid on top of the real physical construction.

This makes it very easy to see the difference between the plan and the actual reality, which is very important to avoid clashes and costly mistakes. For another project, I used VR and hand-tracking to demonstrate how someone could showcase a product (say, a car) inside a VR showroom and then interact with it using hand tracking and full-body tracking.

What skills do you use on a daily basis?

The most relevant skill for my role is the ability to break down a larger problem into small steps and then solve each step. That’s really all programming is! That, and knowing the right terms to Google to find a solution, having enough understanding to implement it, and continuing to search if you don’t understand the solution or it isn’t appropriate for your problem.

Despite my title, I don’t think of myself as highly ‘technical’. I’m an entirely self-taught software developer, and I’m a visual learner, so I watch a lot more YouTube tutorials than you might expect!

Another crucial skill is persistence, because VR and AR are emerging, fast-moving technologies that are constantly changing. When I followed a tutorial or tried a solution and it didn’t work, I used to grapple with the feeling that maybe I wasn’t good enough.

In reality, this technology changes so often that if a tutorial is six months old, it might be out of date. Learning to be resilient and persistent and to ignore my feelings of imposter syndrome was the most important thing I’ve learned on my career journey. Your feelings are not facts, and imposter syndrome is extremely common in this industry.

What are the hardest parts of your working day?

One of the most difficult challenges of my working day is the isolation. I work remotely and many of my team are in a different time zone, so we’re not always able to chat. To overcome that, I prioritise social engagements outside of work.

When I’m extremely busy with my own projects – like the LGBTQ+ VR museum – I go to co-working spaces so that I can at least be around other people during working hours.

I also struggle with time blindness. I have ADHD and working remotely means that it’s easy to get absorbed in a task and forget to take breaks. I set alarms to snap myself out of my ‘trance’ at certain times, like lunchtime. I have to admit though, it doesn’t always work!

Do you have any productivity tips that help you through the day?

My main tip for productivity is to find what works for you, not what works for other people, or what others think should work for you.

For example, I am a night owl. So, starting my day a little later and working into the night, works well for me. It also means I can sync with my team in the US. I don’t find time to play video games, piano or meet up with my friends in the evening, so instead I arrange those things for the morning, which helps me persuade myself to get out of bed!

In the same way, when I was learning to code, people gave me advice like: ‘Break things and fix them, to see how they work’. But that produced a lot of anxiety for me and didn’t work well.

Instead, I learned with my own methods like writing songs, drawing cartoons and even physically printing and gluing code snippets into a notebook and writing the English translation underneath. Code, after all, is a language, so I treated it the same way. Find what works for you, even if it’s not conventional!

How has this role changed as this sector has grown and evolved?

I began this role in 2020 and typically – before the pandemic – my job would have been described as ‘technical evangelism’, which involves a lot of public speaking and travel to conferences.

Of course, that wasn’t really possible, so my role has evolved into creating content of different types – webinars online, videos, onboarding tutorials and technical demos for marketing and sales enablement.

While I really enjoy public speaking, the lack of travel has given me time to get deeply familiar with Unity’s XR tooling and sharpen my technical expertise. This technology is always changing so it’s really important to constantly learn and grow. Luckily, I have an insatiable curiosity and appetite for knowledge. I think all engineers do!

What do you enjoy most about the job?

I have two favourite things about this job. First, the autonomy. Since I have a deep understanding of the tools and our users/audience, I’m trusted to design and propose my own solutions that best meet the user needs.

Secondly, the technology itself. Being able to create VR or AR content is like sorcery! I can conjure anything from nothing. I can create entire worlds that I can step into based only on my imagination. And so can anybody that learns this skill – and it’s easier than you think! That has never stopped being magical and exciting to me, and I don’t think it ever will.

Will this fruit-picking robot transform agriculture?

Robots can do a lot. They build cars in factories. They sort goods in Amazon warehouses. Robotic dogs can, allegedly and a little creepily, make us safer by patrolling our streets. But there are some things robots still cannot do – things that sound quite basic in comparison. Like picking an apple from a tree.

“It’s a simple thing” for humans, says robotics researcher Joe Davidson. “You and I, we could close our eyes, reach into the tree. We could feel around, touch it, and say ‘hey, that’s an apple and the stem’s up here’. Pull, twist. We could do all that without even looking.”

Creating a robotic implement that can simply pick an apple and drop it into a bin without damaging it is a multimillion-dollar effort that has been decades in the making. Teams around the world have tried various approaches. Some have developed vacuum systems to suck fruit off trees. Davidson and his colleagues turned to the human hand for inspiration. They began their efforts by observing professional fruit pickers, and are now working to replicate their skilled movements with robotic fingers.

Their work could help to transform agriculture, turning fruit-picking – a backbreaking, time-consuming human task – into one that’s speedy and easier on farm workers.

These efforts have gained impetus recently as researchers point to the worsening conditions for farm workers amid the climate crisis, including extreme heat and wildfire smoke, and also a shortage of workers in the wake of the pandemic. The technology could lead to better working conditions and worker safety. But that outcome depends on how robots are deployed in fields, farm workers’ organizations say.

While robotic tools for agriculture have made big strides in recent years, those AI-based tools are mostly used for weeding, monitoring soil moisture and other field conditions, or for planting soybeans using remote-controlled tractors. “But when it actually comes to doing physical work like pruning trees or picking fruit, that’s still the realm of people today,” Davidson says.

Teaching robots to perform these tasks requires modernized versions of both the orchard and the apple.

Traditional orchards, with irregularly shaped trees and giant canopies, are too much of a challenge for algorithms to parse and process. Shifting sunbeams, fog and clouds add to computer vision’s challenges. Tangled, tall old trees are problematic even to human pickers, who end up spending much of their time hauling and positioning ladders, not picking fruit.

Now, many growers have transitioned to orchards where trees grow flat against trellises, their trunks and branches at right angles to create a “wall of fruit”, says Scott Jacky, owner of Red Roof Consulting, a group that helps optimize farm technologies. The thinner canopy also lets more sunlight in, encouraging fruits to form.

Since the 1990s, breeders have been working to develop apple varieties more resistant to sunburn – a side-effect of those sparser canopies – and less prone to bruising when dropped into bins. All these changes to the trees and the apples themselves make the job easier for robots (and for humans).

In orchards with trellised trees, human fruit pickers can cruise through rows of trees in pairs on slowly rolling platforms. One person crouches to reach low-hanging fruit, the other reaches for the higher branches. Professionals working this way take about two seconds to pick one apple.

The robot in Davidson’s lab, which is essentially a giant arm mounted on a rolling platform, takes about five seconds to make its moves. At the click of a key, the robotic arm reaches up for the fruit – actually a plastic apple made for testing purposes – with its three-fingered palm. Its fingers are covered in cushiony silicone “skin”, which conceals individual motors wired to tendons that drive its fingers. Thirty sensors under each fingertip track the pressure, speed, angle and other aspects of its grasp to help the robot complete its task.

Another keystroke and the fingers tighten, then twist, and the apple – successfully picked – rests in the robot’s palm.
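The sequence described above is, in essence, a closed sensing-and-actuation loop: close the tendon-driven fingers until the fingertip sensors report a firm enough grip, then twist to detach the fruit. The sketch below illustrates that idea in Python as a toy simulation. All of the class names and numbers (the target grip force, the finger model, the step sizes) are hypothetical, for illustration only; this is not the control code used in Davidson’s lab.

```python
# Conceptual sketch of a force-feedback grasp-and-twist cycle, loosely modelled
# on the description above. Every class, constant and number here is assumed,
# not taken from the actual research system.

from dataclasses import dataclass
import random

TARGET_GRIP_N = 8.0      # assumed grip force that holds the apple without bruising it
MAX_CLOSE_STEPS = 50     # safety limit on how far the fingers may close
TWIST_DEGREES = 90       # assumed wrist rotation used to separate fruit from stem


@dataclass
class SimulatedFinger:
    """Stands in for one tendon-driven finger with pressure sensing in the tip."""
    closure: float = 0.0  # 0 = fully open, 1 = fully closed

    def close_step(self, step: float = 0.02) -> None:
        self.closure = min(1.0, self.closure + step)

    def read_pressure(self) -> float:
        # Pretend contact begins at ~40% closure, force rises with further closure,
        # and the reading carries a little sensor noise.
        contact = max(0.0, self.closure - 0.4)
        return contact * 25.0 + random.uniform(-0.2, 0.2)


def grasp_and_twist(fingers: list[SimulatedFinger]) -> bool:
    """Close all fingers until each fingertip reports the target force, then twist."""
    for _ in range(MAX_CLOSE_STEPS):
        if all(f.read_pressure() >= TARGET_GRIP_N for f in fingers):
            print(f"Grip reached on {len(fingers)} fingers; twisting wrist {TWIST_DEGREES} degrees")
            return True  # a real controller would now command the wrist rotation
        for f in fingers:
            f.close_step()
    print("Failed to reach target grip force")
    return False


if __name__ == "__main__":
    hand = [SimulatedFinger() for _ in range(3)]  # three-fingered palm, as described
    grasp_and_twist(hand)
```

In the real system, of course, the 30 sensors under each fingertip feed far richer signals – pressure, speed, angle and more – into the controller, which must also avoid bruising the fruit or damaging the branch.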

The fruit-picking robot has picked an apple successfully about half of the 500 or so times it has tried so far. Still, the robotic arm has cracked some problems that posed hurdles to automation. For instance, it can avoid damaging both fruit and tree limbs in the harvesting process. Rapid improvements in computing make Davidson and others hopeful the robots will work on farms within the next five to 10 years.

The US government is placing significant bets on this technology. Last year alone, federal funding agencies granted $20m to the AgAID institute, a new group that supports several researchers, including Davidson, in efforts to develop artificial intelligence-backed tools for agriculture.

Proponents of harvest automation say there will still be jobs for people, such as training and operating the robots. “There are going to be plenty of tasks where the robotic instruments and digital devices will necessarily have to work with humans,” said Ananth Kalyanaraman, professor at Washington State University and director of the AgAID institute. “That’s going to actually empower humans because it gives them new skillsets.”

For now, it’s unclear to many farm workers how the robots will affect their livelihood. “If they’re used properly, they can actually be a support system for workers and improve standards at work,” says Reyna Lopez, executive director of PCUN, a Latinx farm workers’ organization in Oregon.

But so far, Lopez and others say they have not been involved in conversations about the fruit-picking robots. “Historically, farm workers have not been placed at the center of any of these conversations,” they say. Across various industries, including agriculture, waves of automation have led to job losses and a devaluing of human work. Often in the wake of such shifts, “what happens to low-wage workers is that people lose their jobs,” Lopez says.

The emergence of robotic farm workers could even be an opportunity for humans to engage in different – and far less strenuous – work than pruning or harvesting, says Ines Hanrahan, executive director of the Washington Tree Fruit Research Commission. “There’s a lot of folks in rural communities who, even if they would like to, physically cannot do these jobs,” she says.

“When you take the physical aspect out, these tasks become more accessible to older workers or those less physically capable of lugging ladders and things. It enables more people to be drawn into this work.”
