
Technology

Google to offer more generative AI for creating adverts • The Register

Google plans to roll out generative AI tools that can automatically create online advertising campaigns personalized to users’ search queries.

The Alphabet-owned ad giant is ramping up efforts to inject more AI into its core business areas – such as internet search and digital adverts – as it faces challenges to its dominance.

Jerry Dischler, general manager of Google Ads, pointed out Google has been using machine-learning systems in advertising for a while. Algorithms and neural networks power its Smart Bidding and Performance Max features, for example, to help automate some processes for advertisers and optimize different metrics to capture audiences. It’s been rolling out so-called automatically created assets since last year, which can offer machine-made titles and copy for web ads.

“AI is foundational to Google Ads. For many years, it has been quietly helping in the background, supporting advertisers in maximizing their time and return on investment,” Dischler explained in a blog post.

Now, Google is planning to push more generative AI capabilities into the hands of its digital advertisers. Advertisers need only provide a link to a landing page on their website describing their product or service, and Google’s systems will do the rest – automatically producing ad content based on that input.

Or so it’s claimed.

This is how it’ll apparently work in practice: Google’s Performance Max tools will analyze a supplied landing page to generate appropriate headlines, text descriptions, and images to use in Google-served web ads. Advertisers can review these suggested designs, and edit and pick the right text or images to customize the final look. These ads are then shown to Google’s users when they search using keywords automatically identified by the internet giant as most relevant to the advertiser.

“We’re bringing generative AI to Performance Max to make it even easier for you to create custom assets and scale them in a few clicks. Just provide your website and Google AI will start learning about your brand to populate your campaign with text and other relevant assets. We’ll even suggest new images generated just for you, helping you stand out to customers across a wider range of inventory and formats,” Dischler explained.

A Google search for “skin care for dry sensitive skin,” for example, would display ads from skincare brands with headlines generated by AI to match the user’s queries. In this case, the title of the served ad might be “Soothe Your Dry, Sensitive Skin” alongside pictures of people applying lotions.

From what we can tell, advertisers steer the overall appearance of their ads based on their site content, and then when people search using keywords that match an ad, it is optimized to suit their query and displayed. So not only is the ad automatically generated, it’s also automatically targeted in a narrower, more focused manner – which might get people clicking more, if they’re not blocking the banners.

Dischler opined that Performance Max already improves businesses’ conversion rates – the rate at which clicks on an ad actually lead to something tangible, like a product sale or a newsletter signup. The higher the rate, the more effective the ad. Google believes the rate will be boosted further with the help of generative AI, since it will help advertisers create personalized ads.
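Conversion rate is simple arithmetic – conversions divided by clicks – and a quick sketch shows why even a small lift matters at ad-serving scale. The figures below are invented for illustration and are not Google's:

```python
# Conversion rate = conversions / clicks. All figures are illustrative.
def conversion_rate(conversions: int, clicks: int) -> float:
    return conversions / clicks if clicks else 0.0

# A generic campaign vs. a hypothetical personalized one, same traffic.
generic = conversion_rate(conversions=50, clicks=2000)       # 0.025 -> 2.5%
personalized = conversion_rate(conversions=80, clicks=2000)  # 0.04  -> 4.0%

print(f"generic: {generic:.1%}, personalized: {personalized:.1%}")
# Over 100,000 clicks, that 1.5-point gap is roughly 1,500 extra conversions.
print(f"extra conversions per 100k clicks: {(personalized - generic) * 100_000:.0f}")
```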

If someone is looking for “outdoor activities to do in Maui” and has also searched for “activities for kids” and “surfing”, for example, Google’s ad tools could generate a custom advert promoting a local company that offers surfing lessons for children in Hawaii.

“As always, we’re committed to transparency and making ads distinguishable from organic search results. When Search ads do appear, they will continue to feature our industry-leading clear and transparent ad labels with the ‘Sponsored’ label in bold black text,” Dischler promised.

Meta and Amazon are also reportedly incorporating generative AI tools to create adverts on their own platforms. This may not end well. ®

Bootnote

Google’s YouTube lately started asking some people to remove their ad blockers when watching videos. Now it’s talking about bringing unskippable 30-second adverts to those watching the ‘Tube on their TVs.

 


Technology

Typo blamed for Microsoft Azure DevOps outage in Brazil • The Register

Microsoft Azure DevOps, a suite of application lifecycle services, stopped working in the South Brazil region for about ten hours on Wednesday due to a basic code error.

On Friday Eric Mattingly, principal software engineering manager, offered an apology for the disruption and revealed the cause of the outage: a simple typo that deleted seventeen production databases.

Mattingly explained that Azure DevOps engineers occasionally take snapshots of production databases to look into reported problems or test performance improvements. And they rely on a background system that runs daily and deletes old snapshots after a set period of time.

During a recent sprint – a group project in Agile jargon – Azure DevOps engineers performed a code upgrade, replacing deprecated Microsoft.Azure.Management.* packages with supported Azure.ResourceManager.* NuGet packages.

The result was a large pull request – a set of code changes that must be reviewed and merged into the applicable project – swapping API calls in the old packages for those in the newer ones. The typo lurked in that pull request, and it led the background snapshot deletion job to delete the entire database server.

“Hidden within this pull request was a typo bug in the snapshot deletion job which swapped out a call to delete the Azure SQL Database to one that deletes the Azure SQL Server that hosts the database,” said Mattingly.
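Mattingly's description suggests a one-call mix-up at the wrong level of the resource hierarchy. The sketch below models that failure mode in Python with invented class names – the real job is .NET code against the Azure SDK, and none of these identifiers come from Microsoft:

```python
# Toy model of the snapshot-cleanup bug described above. All names are invented.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)  # assumed retention window, not Microsoft's

@dataclass
class Snapshot:
    name: str
    created: datetime
    deleted: bool = False
    def delete(self) -> None:   # removes just this one snapshot database
        self.deleted = True

@dataclass
class SqlServer:
    snapshots: list
    deleted: bool = False
    def list_snapshot_databases(self) -> list:
        return list(self.snapshots)
    def delete(self) -> None:   # removes the server AND every database it hosts
        self.deleted = True

def cleanup_snapshots(server: SqlServer, now: datetime) -> None:
    for snap in server.list_snapshot_databases():
        if now - snap.created > RETENTION:
            snap.delete()    # correct: drop one stale snapshot
            # The buggy pull request effectively swapped that call for:
            #   server.delete()   # drops the server and all its databases
```

The two delete calls look nearly identical at the call site, which is how the mistake could survive review, and the buggy branch only runs when a snapshot old enough to trip the `if` exists – consistent with why the internal Ring 0 deployment passed cleanly.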

Azure DevOps has tests to catch such issues, but according to Mattingly, the errant code only runs under certain conditions and thus isn’t well covered under existing tests. Those conditions, presumably, require the presence of a database snapshot that is old enough to be caught by the deletion script.

Mattingly said Sprint 222 was deployed internally (Ring 0) without incident due to the absence of any snapshot databases. Several days later, the software changes were deployed to the customer environment (Ring 1) for the South Brazil scale unit (a cluster of servers for a specific role). That environment had a snapshot database old enough to trigger the bug, which led the background job to delete the “entire Azure SQL Server and all seventeen production databases” for the scale unit.

The data has all been recovered, but it took more than ten hours. There are several reasons for that, said Mattingly.

One is that since customers can’t revive Azure SQL Servers themselves, on-call Azure engineers had to handle the restores – a process that took about an hour for many of the databases.

Another reason is that the databases had different backup configurations: some were configured for Zone-redundant backup and others were set up for the more recent Geo-zone-redundant backup. Reconciling this mismatch added many hours to the recovery process.

“Finally,” said Mattingly, “even after databases began coming back online, the entire scale unit remained inaccessible even to customers whose data was in those databases due to a complex set of issues with our web servers.”

These issues arose from a server warmup task that iterated through the list of available databases with a test call. Databases in the process of being recovered chucked up an error that led the warm-up test “to perform an exponential backoff retry resulting in warmup taking ninety minutes on average, versus sub-second in a normal situation.”
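The arithmetic of exponential backoff explains the ninety-minute warmups: each failed probe doubles the wait before the next attempt, so cumulative delay grows as a power of two while a database is still recovering. A minimal model – the base delay, cap, and failure counts are assumed, not Microsoft's actual values:

```python
# Cumulative wait for an exponential-backoff retry loop. Parameters are assumed.
def warmup_wait(failures_before_success: int, base_delay: float = 1.0,
                max_delay: float = 300.0) -> float:
    """Seconds spent waiting if the probe fails n times, then succeeds."""
    total = 0.0
    for attempt in range(failures_before_success):
        total += min(base_delay * 2 ** attempt, max_delay)  # 1s, 2s, 4s, ...
    return total

print(warmup_wait(0))   # healthy database: 0.0 seconds of backoff
print(warmup_wait(10))  # ten failures: 811.0 seconds, ~13.5 minutes of waiting
```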

Further complicating matters, this recovery process was staggered and once one or two of the servers started taking customer traffic again, they’d get overloaded, and go down. Ultimately, restoring service required blocking all traffic to the South Brazil scale unit until everything was sufficiently ready to rejoin the load balancer and handle traffic.

Various fixes and reconfigurations have been put in place to prevent the issue from recurring.

“Once again, we apologize to all the customers impacted by this outage,” said Mattingly. ®

 



Technology

What are the current trends in Ireland’s pharma sector?

SiliconRepublic.com took a look at PDA Ireland’s Visual Inspection event to learn about Ireland’s pharma sector and its biggest strengths.

Ireland’s pharmaceutical stakeholders gathered in Cork recently to learn the latest developments and regulatory changes in the sector.

The event was hosted by the Irish chapter of the Parenteral Drug Association (PDA), a non-profit trade group that shares science, technology and regulatory information with pharma and biopharma companies.

The association held a Visual Inspection event in Cork last month, where speakers shared their outlooks on the industry, the regulatory landscape and tips on product investigation.

PDA Ireland committee member Deidre Tobin told SiliconRepublic.com that one goal of the event was to bring the industry together and help SMEs engage with top speakers.

“The mission of PDA is really to bring people together in industry and to have that network sharing, that information gathering so that we’re all consistent, we all have the same message,” Tobin said.

Ireland’s advantages

Ireland has grown to become a hub of leading pharma companies over the years, with many multinational companies setting up sites here. By 2017, 24 of the world’s top biotech and pharma companies had made a home for themselves in Ireland.

The sector also remains active in terms of merger and acquisition deals. A William Fry report claimed pharma accounted for 12pc of all Irish M&A deals by volume in 2022.

Ruaidhrí O’Brien, head of UK and Ireland sales at Körber Pharma and a PDA Ireland member, said the country has a “wealth of experience” across various types of pharmaceutical production, such as API bulk and solid dosage production.

O’Brien claimed there’s also been growth in the “liquid fill finish area”, which relates to completed pharma products such as vaccines. During the Covid-19 pandemic, Pfizer confirmed its Irish operations were being used to manufacture its vaccine.

O’Brien also said Ireland has “skilled people” at senior levels within companies, which he feels is why existing companies continue to invest and why “we have amazing investments from all the global leaders”.

Regulatory changes

One speaker at the PDA Ireland Visual Inspection event was John Shabushnig, the founder of Insight Pharma Consulting LLC. He spoke about current and upcoming regulation impacting the global sector.

Shabushnig said he sees the overall industry understanding of what it can and can’t do “continuing to improve”. He also said there is better alignment between regulators and industry now “than I saw 10 or 20 years ago”.

Shabushnig spoke positively about the regulatory landscape overall and couldn’t think of any “big misses” in terms of industry ignoring regulation. But he did note that some developing areas in the industry are “a bit unknown”.

“Advanced therapies, cell and gene therapies, there are some unique challenges on inspecting those products that we’re kind of learning together at this point,” Shabushnig said.

But Shabushnig said there are also “big opportunities” ahead with new tools that can be taken advantage of. One example he gave was using AI for automated visual inspection, which Shabushnig described as a “very exciting tool”.




Current

Explaining The AI Black Box

By Prof Saurabh Bagchi


Prof Saurabh Bagchi from Purdue University explains the purpose of AI black boxes and why researchers are moving towards ‘explainable AI’.

For some people, the term ‘black box’ brings to mind the recording devices in airplanes that are valuable for postmortem analyses if the unthinkable happens. For others, it evokes small, minimally outfitted theatres. But ‘black box’ is also an important term in the world of artificial intelligence.

AI black boxes refer to AI systems with internal workings that are invisible to the user. You can feed them input and get output, but you cannot examine the system’s code or the logic that produced the output.

Machine learning is the dominant subset of artificial intelligence. It underlies generative AI systems like ChatGPT and DALL-E 2. There are three components to machine learning: an algorithm or a set of algorithms, training data and a model.

An algorithm is a set of procedures. In machine learning, an algorithm learns to identify patterns after being trained on a large set of examples – the training data. Once a machine-learning algorithm has been trained, the result is a machine-learning model. The model is what people use.

For example, a machine-learning algorithm could be designed to identify patterns in images and the training data could be images of dogs. The resulting machine-learning model would be a dog spotter. You would feed it an image as input and get as output whether and where in the image a set of pixels represents a dog.
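That three-way split – algorithm, training data, model – can be made concrete with a toy classifier. The sketch below uses nearest-centroid classification in plain Python as a stand-in for a real image model; it is illustrative only:

```python
# Algorithm: nearest-centroid classification.
# Training data: labeled feature vectors.
# Model: the centroids the algorithm learns - the only part a user needs.

def train(training_data):
    """The algorithm: average each label's examples into a centroid."""
    sums, counts = {}, {}
    for features, label in training_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(model, features):
    """Using the model: pick the label whose centroid is closest."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda lbl: dist2(model[lbl]))

# Tiny two-dimensional "images" labeled dog or cat.
data = [([1.0, 1.0], "dog"), ([1.2, 0.8], "dog"),
        ([5.0, 5.0], "cat"), ([4.8, 5.2], "cat")]
model = train(data)                # ship only this, and users can still predict
print(predict(model, [1.1, 0.9]))  # prints: dog
```

Exposing only `model` behind an API while withholding `train` and `data` is exactly the black-boxing of the model and training data the article goes on to describe.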

Any of the three components of a machine-learning system can be hidden, or in a black box. As is often the case, the algorithm is publicly known, which makes putting it in a black box less effective. So, to protect their intellectual property, AI developers often put the model in a black box. Another approach software developers take is to obscure the data used to train the model – in other words, put the training data in a black box.

The opposite of a black box is sometimes referred to as a glass box. An AI glass box is a system whose algorithms, training data and model are all available for anyone to see. But researchers sometimes characterise aspects of even these as black box.

That’s because researchers don’t fully understand how machine-learning algorithms, particularly deep-learning algorithms, operate. The field of explainable AI is working to develop algorithms that, while not necessarily glass box, can be better understood by humans.

Thinking Outside The Black Box

In many cases, there is good reason to be wary of black box machine-learning algorithms and models. Suppose a machine-learning model has made a diagnosis about your health. Would you want the model to be black box or glass box? What about the physician prescribing your course of treatment? Perhaps she would like to know how the model arrived at its decision.

What if a machine-learning model that determines whether you qualify for a business loan from a bank turns you down? Wouldn’t you like to know why? If you did, you could more effectively appeal the decision, or change your situation to increase your chances of getting a loan the next time.

Black boxes also have important implications for software system security. For years, many people in the computing field thought that keeping software in a black box would prevent hackers from examining it and therefore it would be secure. This assumption has largely been proven wrong because hackers can reverse engineer software – that is, build a facsimile by closely observing how a piece of software works – and discover vulnerabilities to exploit.

If software is in a glass box, software testers and well-intentioned hackers can examine it and inform the creators of weaknesses, thereby minimising cyberattacks.



Saurabh Bagchi is professor of electrical and computer engineering and director of corporate partnerships in the School of Electrical and Computer Engineering at Purdue University in the US. His research interests include dependable computing and distributed systems.


