
Making The Legal Case For Practical Ethics In AI

Betty Louie, partner & general counsel, The Brandtech Group

The best way for a brand to devise a company-wide policy for using generative AI responsibly is to get in there and start experimenting with the technology first.

It’s hard to set a strategy for AI adoption without getting a little hands-on experience, says Betty Louie, a partner and general counsel at The Brandtech Group, on this week’s episode of AdExchanger Talks.

“You have to play around with different tools to even understand where the guardrails might be,” says Louie, whose job involves analyzing the many ethical considerations that crop up around employing AI tools, from privacy and consent to data use and transparency.

The first step is for a brand to decide why it wants to use generative AI at all.

Is the goal to help with creative inspiration or creative production? Is it about improving productivity? Helping with personalization? All of the above? Not sure? And where are the pain points?

Brands need to answer these questions – or at least grapple with them – before crafting a practical framework for generative AI adoption.

Emphasis on the word “practical.”

Making broad statements about ethics and morality in a generative AI policy isn’t constructive without touching on detailed use cases and applications.

It’s one thing to say, for example, “We will always be transparent,” and it’s another to drill down into the specific types of information a company will and won’t feed into a large language model (LLM).

The former is an “ethical compass,” Louie says, as in a general direction to walk in, while the latter lays out potential risks and landmarks along the way.

What slows down many brands in their embrace of generative AI tools is their need for both an ethical compass and a map.

A robust policy should go into “the details as to what a company is willing or not willing to do and what position they want to take,” Louie says. “And that’s something that needs to be done at the C-suite level.”

Also in this episode: The privacy risks posed by LLMs, why creating an ethical framework for AI use is a multidisciplinary task and takeaways from Scarlett Johansson’s flap with OpenAI over a voice for ChatGPT’s app that sounded just a little too much like “Her.”

