Making The Legal Case For Practical Ethics In AI

Betty Louie, partner & general counsel, The Brandtech Group

The best way for a brand to devise a company-wide policy for using generative AI responsibly is to get in there and start experimenting with the technology first.

It’s hard to set a strategy for AI adoption without getting a little hands-on experience, says Betty Louie, a partner and general counsel at The Brandtech Group, on this week’s episode of AdExchanger Talks.

“You have to play around with different tools to even understand where the guardrails might be,” says Louie, whose job involves analyzing the many ethical considerations that crop up around employing AI tools, from privacy and consent to data use and transparency.

The first step is for a brand to decide why it wants to use generative AI in the first place.

Is the goal to help with creative inspiration or creative production? Is it about improving productivity? Helping with personalization? All of the above? Not sure? And where are the pain points?

Brands need to answer these questions – or at least grapple with them – before crafting a practical framework for generative AI adoption.

Emphasis on the word “practical.”

Making broad statements about ethics and morality in a generative AI policy isn’t constructive without touching on detailed use cases and applications.

It’s one thing to say, for example, “We will always be transparent,” and it’s another to drill down into the specific types of information a company will and won’t feed into a large language model (LLM).

The former is an “ethical compass,” Louie says, a general direction to walk in, while the latter is more like a map, laying out potential risks and landmarks along the way.

What slows down many brands in their embrace of generative AI tools is their need for both an ethical compass and a map.

A robust policy should go into “the details as to what a company is willing or not willing to do and what position they want to take,” Louie says. “And that’s something that needs to be done at the C-suite level.”

Also in this episode: The privacy risks posed by LLMs, why creating an ethical framework for AI use is a multidisciplinary task and takeaways from Scarlett Johansson’s flap with OpenAI over a voice for ChatGPT’s app that sounded just a little too much like “Her.”
