Crafting A Conscience For Generative AI In Marketing

When a generative AI tool spouts misinformation, breaks copyright law or perpetuates hateful stereotypes, it’s the people using the technology who take the fall.

After all, a large language model (LLM) generating text or an image “doesn’t use a brain of its own” or understand the implications of what it’s generating, said Paul Pallath, VP of applied AI at Searce, a cloud consulting company founded in 2004 that provides AI services such as assessing AI “maturity,” or readiness, and identifying use cases.

“We are far away from machines doing everything for us,” said Pallath, who held executive data science and analytics roles at SAP, Intuit, Vodafone and Levi Strauss & Company before joining Searce last year. (He’s also got a PhD in machine learning.)

Humans can’t outsource their ethical conundrums to algorithms and programs. Instead, we must “ground ourselves in empathy,” Pallath said, and develop responsible machine learning practices and generative AI applications.

Searce, for example, works with clients to move beyond the abstract. It guides companies through generative AI implementations and helps them establish frameworks for ethical, responsible AI use.

Pallath spoke with AdExchanger about a few hypothetical – but very possible – ethical scenarios a marketer might face.

If a generative AI tool produces factually inaccurate or misleading information, what should a marketer do?

PAUL PALLATH: Understand, verify and fill in the gaps of everything that’s coming out. There will be a lot of content that LLMs create that feels like truth but isn’t. Don’t assume anything. Fact-checking is very important.

What if I’m unsure if an LLM has trained on copyrighted material?

Avoid using it unless you have the rights and explicit permission from the author who has the copyright, because it creates significant exposure for your company.

The LLM should also spit out the references from which that content has been generated. It’s necessary to check every reference. Go back and read the original content. I’ve seen LLMs create a reference, and the reference doesn’t exist. It just cooked up information.
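
As a minimal sketch of how that verification might start, the code below programmatically confirms that each cited URL actually resolves before a human reads it. This assumes references arrive as URLs; the function name and logic are illustrative, not any vendor's API.

```python
# Sketch: flag LLM-cited URLs that don't resolve. A passing check only
# means the page exists; a human must still read it and confirm it
# actually supports the generated claim.
import requests

def check_references(urls: list[str], timeout: float = 5.0) -> dict[str, bool]:
    """Map each URL to whether it appears to exist (HTTP status < 400)."""
    results: dict[str, bool] = {}
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False  # unreachable or malformed: treat as suspect
    return results

# Anything flagged False is a candidate for a "cooked up" reference.
suspect = [u for u, ok in check_references(["https://example.com/cited-study"]).items() if not ok]
```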

Say a marketer’s looking for ad imagery, and an LLM keeps returning lighter-skinned people. How can they steer it away from harmfully reinforcing and amplifying biases?

It’s about how you design your prompts. You need governance around prompt engineering – typically, a review of the different types of prompts you should be using – so the content coming out isn’t biased.
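
In practice, that kind of prompt governance can be as simple as a registry of reviewed templates that fails closed. The sketch below is purely illustrative; the template text and names are hypothetical assumptions, not a Searce recommendation.

```python
# Sketch of prompt governance: only prompt templates that have passed
# review may be sent to the model. All names here are hypothetical.
APPROVED_PROMPTS = {
    "ad_imagery": (
        "Generate ad imagery showing people of diverse skin tones, ages "
        "and body types in {setting}, wearing {apparel}."
    ),
}

def build_prompt(template_id: str, **fields: str) -> str:
    """Fail closed: reject any prompt that hasn't been approved."""
    if template_id not in APPROVED_PROMPTS:
        raise ValueError(f"Prompt '{template_id}' has not passed review")
    return APPROVED_PROMPTS[template_id].format(**fields)

print(build_prompt("ad_imagery", setting="a city park", apparel="casual wear"))
```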

If you have a repository of approved images, the LLM could create different surroundings, change the color, the clothes or the brightness, or upscale the image to high resolution.

For retail companies, if they have permission to use a person’s image, they can fit different apparel on top [of existing images] so it can be part of their marketing messages. They can have brand-approved ambassadors who don’t have to come in for several hours of photo and video shoots.

Should companies pay these brand-approved ambassadors for AI-generated variations of their images?

Yes. You’d compensate for every digital artifact you create with different models. Companies will start to work on different compensation mechanics.

LLMs train on what’s online, so they often favor “standard” forms of dominant languages, such as English. How can marketers mitigate language bias?

LLMs are maturing from a translation standpoint, but there are variations even within the same language. Which region the content is coming from, who has vetted the content, whether it’s true from a cultural standpoint, whether it stands by the belief system of that country – that’s not knowledge the LLMs have.

You need a human in the loop doing a rigorous review of the content that’s getting generated before it’s published. Have cultural ambassadors within your company who will understand the nuances of a culture and how it will resonate.

Is generative AI morally dubious from a sustainability perspective, given the power consumption involved in running LLMs?

A significant amount of computing power goes into training those models.

The carbon-neutral targets that large companies are chasing for the next five to 10 years are fundamental to which vendors they choose, so those vendors aren’t adding to their carbon emissions. They have to look at the energy the vendors’ data centers use when they make those choices.

How can we prevent exploitation, such as using prisoners or very poorly paid workers to train LLMs, and other bad behaviors by LLM makers?

You have to have data governance and data lineage – in terms of who created the data, who touched the data, even before the data actually lands in the algorithms – and [a log of] the decisions that have been made [along the way]. Data lineage gives you transparency and allows you to audit the algorithms.

Today, that auditability is not there.

Transparency is necessary for us to weed out the unethical elements. But we are dependent upon the large companies that have created these models to come out with the transparency metrics.
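
As a hedged illustration of the data lineage Pallath describes – who created the data, who touched it and what decisions were made along the way – the sketch below logs each touch on a dataset in an append-only file that can be audited later. The field names are assumptions, not an established standard.

```python
# Sketch of a data-lineage log: every touch on a dataset gets an
# auditable, append-only entry. All field names are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class LineageEvent:
    dataset_id: str
    actor: str          # who created or touched the data
    action: str         # e.g., "created", "labeled", "filtered"
    decision_note: str  # the decision made along the way
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_lineage(event: LineageEvent, log_path: str = "lineage.jsonl") -> None:
    """Append-only, so the dataset's full history can be audited later."""
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

append_lineage(LineageEvent("ads_corpus_v2", "annotator_17", "labeled",
                            "removed posts flagged during toxicity review"))
```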

This interview has been edited and condensed.
