
Making The Legal Case For Practical Ethics In AI

Betty Louie, partner & general counsel, The Brandtech Group

The best way for a brand to devise a company-wide policy for using generative AI responsibly is to get in there and start experimenting with the technology first.

It’s hard to set a strategy for AI adoption without getting a little hands-on experience, says Betty Louie, a partner and general counsel at The Brandtech Group, on this week’s episode of AdExchanger Talks.

“You have to play around with different tools to even understand where the guardrails might be,” says Louie, whose job involves analyzing the many ethical considerations that crop up around employing AI tools, from privacy and consent to data use and transparency.

The first step is for a brand to decide why it wants to use generative AI in the first place.

Is the goal to help with creative inspiration or creative production? Is it about improving productivity? Helping with personalization? All of the above? Not sure? And where are the pain points?

Brands need to answer these questions – or at least grapple with them – before crafting a practical framework for generative AI adoption.

Emphasis on the word “practical.”

Making broad statements about ethics and morality in a generative AI policy isn’t constructive without touching on detailed use cases and applications.

It’s one thing to say, for example, “We will always be transparent,” and it’s another to drill down into the specific types of information a company will and won’t feed into a large language model (LLM).

The former is an “ethical compass,” Louie says, as in a general direction to walk in, while the latter lays out potential risks and landmarks along the way.

What slows many brands in their embrace of generative AI tools is that they need both: an ethical compass and a map.

A robust policy should go into “the details as to what a company is willing or not willing to do and what position they want to take,” Louie says. “And that’s something that needs to be done at the C-suite level.”

Also in this episode: The privacy risks posed by LLMs, why creating an ethical framework for AI use is a multidisciplinary task and takeaways from Scarlett Johansson’s flap with OpenAI over a voice for ChatGPT’s app that sounded just a little too much like “Her.”

