If you were playing AI bingo at the IAPP’s Global Summit in Washington, DC, on Monday, your card filled up fast.
Nearly every vendor booth in the exhibition hall at the event, which is one of the largest annual gatherings of privacy professionals in the US, bragged about its tech being “AI forward” or “AI native” or “AI ready” – pick your buzzword.
But away from the razzmatazz, the mood was a little different. Optimistic, sure, but tempered with realism.
Courts playing catch-up
During one session, someone asked Allison Burroughs, a federal judge on the US District Court for the District of Massachusetts: “Is technology outpacing the law?”
Burroughs had a quick answer: “Yes.”
The audience laughed, but a little uncomfortably, because it’s an uncomfortable truth, albeit nothing new. Legislators and courts often lag behind technology and tend to be reactive, because lawmaking and adjudication are slow, deliberative processes.
What is new, though, Judge Burroughs said, is how quickly and sharply AI is widening that gap, in part because of the sheer volume of data that’s out there. People live their lives electronically. We’re terminally online and posting about it.
“There are so many ways that people communicate and generate information on social media that there’s plenty more data,” Judge Burroughs said, “and there are many more places to find it.”
Proliferation problems
You could hear echoes of that same concern across the conference. There’s data everywhere, and AI is a compounding factor. The rules are still being written.
The regulatory environment surrounding AI is expanding across sectors and jurisdictions with new laws and obligations seemingly arriving almost daily, said Danielle Kehl, a senior counsel for OpenAI.
What we’re seeing is “sort of what happened in the privacy world,” Kehl said, which is a proliferation of laws, although the volume and cadence of AI rules and regulations are even greater.
“There’s a lot to keep track of,” she said.
For practitioners, Kehl added, that often means orienting toward the strictest rules rather than trying to optimize for every nuance. (Lots of brands did the same when GDPR came into effect, followed by a wave of US state privacy laws and other international frameworks.)
“If your practices are in line with those laws,” Kehl said, “you’re covering a lot of ground, and then you can sort of adapt to all the other requirements as well.”
‘Inherently hard to comply’
But the real headache isn’t necessarily the number of rules so much as the fact that they often conflict, which makes compliance rather tricky.
“Fragmentation in and of itself is not really the problem,” said Mengyi Xu, a product counsel at Claude maker Anthropic. “It’s where you get inconsistent or mutually exclusive technical requirements.”
For example, some AI laws, like the EU AI Act, expect model makers to monitor how their systems behave in the real world and report any serious incidents. But many large enterprise customers demand zero data retention and refuse to let LLM providers see any of their traffic, which makes that kind of hands-on monitoring tough to pull off in practice.
This mismatch exposes a core tension: many rules assume providers have full control over and visibility into their deployed models, which isn’t always true. Cohere, for instance, uses a distributed deployment model in which customers embed its enterprise AI directly into their own environments.
Cohere doesn’t see how customers actually use the model, what its outputs are or how it’s being modified or fine-tuned, said Halak Shrivastava, the company’s global public policy and regulatory affairs lead.
That makes it “inherently hard to comply,” she said.
There is one silver lining for lawyers, though. Educating policymakers around the world “is a full-time job,” said Shaundra Watson, senior director of policy at the Business Software Alliance, a trade group for business software companies whose members include OpenAI and Cohere.
“So I can pay my mortgage,” Watson quipped.
🙏 Thanks for reading! As always, feel free to drop me a line at allison@adexchanger.com with any comments or feedback. Oh, and: real or AI?
