You know that scene in “Being John Malkovich” where John Malkovich, playing a version of himself, enters his own mind through a portal and falls into a world where the only word anyone can say is “Malkovich”?
That’s kind of what it’s felt like to be writing about online advertising for the past three years.
Cookies? Cookies, cookies, cookies. Cookies. COOKIES. Coooooookies.
Privacy lawyers with ad tech vendors as clients know what I’m talking about.
Ad tech companies want to know their cookie-related obligations under the proliferating US state privacy laws, including whether sharing cookie data counts as a “sale” of personal information and how to handle opt-outs.
“Mitigating the risks around cookies is one of the top issues we deal with now,” said Anna Westfelt, a partner at Gunderson Dettmer and head of its data privacy practice. “Cookies may be going away, but I still get a lot of questions about this from clients.”
And that’s unlikely to change in the near term. With Chrome’s final phaseout of third-party cookies set for the end of this year and multiple new state privacy laws now in effect, privacy lawyers (and privacy pros in general) are gonna be busy.
It’s a bit like what Uncle Ben told Peter Parker, but with a twist: With great uncertainty comes great job security (if you work in the privacy field).
I caught up with Westfelt for a pulse check on what privacy issues keep ad tech companies up at night (other than the end of third-party cookies), the privacy challenges of AI technology and how to avoid getting ensnared by the Video Privacy Protection Act (VPPA).
AdExchanger: Cookies aside, what’s giving your ad tech clients agita?
ANNA WESTFELT: Sensitive personal information and the Meta pixel are both major tension points for clients. They want to make sure they’re handling this correctly. A related issue is helping them understand universal opt-out mechanisms, which provide individuals with more powerful control over their personal data.
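For the technically inclined: the best-known universal opt-out mechanism is Global Privacy Control (GPC), which conforming browsers expose to JavaScript as a navigator property (and to servers as a Sec-GPC: 1 request header). Here’s a minimal sketch of what checking and honoring that signal client-side might look like; the maybeLoadPixels gate is a hypothetical placeholder for a site’s own tag-loading logic:

```typescript
// Minimal sketch of honoring the Global Privacy Control (GPC) signal in the browser.
// navigator.globalPrivacyControl comes from the GPC spec; browsers that don't
// implement it leave the property undefined, which we treat as "no opt-out signal".

declare global {
  interface Navigator {
    globalPrivacyControl?: boolean; // not yet in TypeScript's built-in DOM typings
  }
}

function userHasOptedOut(): boolean {
  return navigator.globalPrivacyControl === true;
}

// Hypothetical gate: only fire third-party pixels when no opt-out signal is present.
export function maybeLoadPixels(loadPixels: () => void): void {
  if (userHasOptedOut()) {
    return; // treat GPC as an opt-out of "sale"/"sharing" under laws like the CCPA
  }
  loadPixels();
}
```

Server-side code sees the same signal as a Sec-GPC: 1 request header, which matters when pixels are injected during page rendering rather than by a script.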
The FTC has been paying a lot of attention to third-party tracking pixels, Meta’s in particular. In reaction, and out of fear of unwanted attention, some companies are removing third-party pixels from their sites entirely, but that seems a little extreme. What’s your advice?
This is something I talk to clients about quite a lot. We’ve seen the FTC’s enforcement actions, including against GoodRx and others, and also complaints from many private plaintiffs. The latter usually settle for relatively small sums, but they’re a nuisance.
The first step is making sure a client takes an inventory of what they’re actually using. Legal teams often don’t know all of the tracking technologies firing on a website, because the marketing or web development teams don’t consult them.
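One way to start that inventory, for what it’s worth: the browser’s standard Performance API can enumerate every third-party host a page actually pulls resources from. The sketch below is a rough first pass, not a substitute for a formal audit (it won’t see cookies, localStorage or server-side tags, and it naively treats your own subdomains as third parties):

```typescript
// Rough sketch: list the third-party hosts a page loads resources from,
// using the standard Performance API. Paste into the browser console on a
// page you own. Loaded resources only; cookies, localStorage and
// server-side tags need separate review.

function listThirdPartyHosts(): string[] {
  const firstPartyHost = window.location.hostname;
  const hosts = new Set<string>();

  for (const entry of performance.getEntriesByType("resource")) {
    const host = new URL(entry.name).hostname; // entry.name is the resource URL
    if (host !== firstPartyHost) {
      hosts.add(host);
    }
  }
  return [...hosts].sort();
}

console.log(listThirdPartyHosts());
```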
Companies then must assess whether they have a real business need for these technologies and whether the value is worth the risk. As a risk mitigation strategy, I usually recommend discontinuing the use of pixels on pages with any video content, because there’s the risk of a claim under the VPPA.
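Operationally, that recommendation could be enforced with a guard in the tag-loading code itself. This is a hypothetical illustration, not Westfelt’s implementation advice, and a DOM check is only a crude proxy for “pages with any video content”:

```typescript
// Hypothetical guard: skip pixel injection on pages containing video,
// per the VPPA risk-mitigation advice above. Checking for <video> and a
// couple of common embed patterns is a crude proxy, not a complete test.

function pageHasVideo(): boolean {
  return document.querySelector(
    "video, iframe[src*='youtube'], iframe[src*='vimeo']"
  ) !== null;
}

function injectPixel(pixelUrl: string): void {
  if (pageHasVideo()) {
    return; // no third-party pixels alongside video content
  }
  const img = new Image(1, 1); // classic 1x1 tracking pixel
  img.src = pixelUrl;          // the request fires on assignment
  img.style.display = "none";
  document.body.appendChild(img);
}
```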
What are the main privacy concerns surrounding AI?
Existing and even proposed laws are not equipped to deal with the kind of data processing we see with AI and large language models. Even the EU AI Act, a brand-new law, had to be amended after its initial draft was published, because it didn’t take into account foundation models like the one underlying ChatGPT.
AI models are trained on huge data sets that, inevitably, include some personal data. With that in mind, how do companies honor data subject rights, deletion rights and opt-out rights? If a data subject wants you to stop using their data to train AI models, how do you implement that technically?
It’s also difficult to even know the provenance of data when a model ingests large data sets. How do you know that data has been lawfully obtained?
There are no great answers or solutions yet.
As part of its enforcement regime, the FTC has ordered some companies – Weight Watchers, Amazon’s Ring, the photo app Everalbum and Cambridge Analytica back in the day – to delete improperly collected data and destroy any AI models or algorithms trained on that data. Will we see more of that?
The FTC has made it clear that algorithmic disgorgement is a big part of its enforcement strategy, but I think it will be reserved for more serious cases.
It’s an extreme remedy for an AI company that has invested time and money into building a product to have to destroy it. It could spell the end of a company, which is why it’s so important to get your data collection practices right from the start.
🙏 Thanks for reading! And godspeed to everyone tinkering with those Chrome Privacy Sandbox APIs. Oh, and happy early Data Privacy Day, coming up on Jan. 28. Or whatever the appropriate greeting is; I have no idea.
As always, feel free to drop me a line at [email protected] with any comments or feedback.