It’s not every day that a journalist on the AI beat for an ad tech trade pub gets to watch a play about a fictional software company set in the agentic age.
That experience is all the more jarring when the fictional company is developing a morally questionable AI tool and conversations begin cropping up among the characters about whether to blow the whistle and involve the press.
It was very (lowercase “m”) meta.
Collecting Data
Last week, several AdExchanger reporters went to see “Data,” an off-Broadway play about AI, surveillance, data tracking and predictive modeling that parallels many of the ways ad technology is used to profile and target people.
The fictional software company in the play, called Athena, is based heavily on Palantir, playwright Matthew Libby said during a post-performance talkback. But the story makes it very clear that even those who are working on AI with good intentions aren’t safe from becoming implicated in more harmful uses.
The play revolves around a recent college graduate, Maneesh, who is working at Athena as a designer on the UX team despite his talent for programming and data science. He’s quickly recruited by the data science team, whose leader seems intent on getting his hands on a powerful algorithm that Maneesh developed in college.
Maneesh, for his part, is apprehensive about joining the team, and adamant that the algorithm remain closed-source. Otherwise, what started as a somewhat innocuous school project to predict rare events in baseball games could easily be used for more sinister purposes.
Without spoiling what happens, this proves to be the case.
Quiet bias
The play repeatedly challenges the idea that humans can be defined by a series of data points.
As one character points out, it’s all too easy to hide behind mathematical code and call AI “objective.” But even if you were to eliminate the use of AI and automation, humans are still imperfect creatures with the same biases that informed the code in the first place.
That’s the quandary I found myself stuck on as the play drew to a close, and I asked Libby about it during the talkback.
AI doesn’t create bias, per se, but it does exacerbate existing biases that have now been built into purportedly objective algorithms. Dehumanization in any form – digital or otherwise – poses a great danger to identity and safety, Libby said.
It’s tempting to think you can know who someone is, what they’ll become or “what their value is” if you collect enough data about them, he added. But that’s a dangerous assumption, regardless of whether you make it face-to-face or through a predictive algorithm.
At what cost?
But these sorts of predictions power every corner of ad tech.

Advertising runs on data collection and predictive modeling – tools and algorithms not so different from those built by companies like the fictional Athena.
For instance, Palantir has been developing a mapping tool for ICE to target immigrants for detention and deportation based on details like geographic location.
The parallels between the data collected for advertising and for government purposes like immigration aren’t just abstract analogies. Earlier this year, ICE put out an RFI that asked data providers and tech vendors to share information on how their tools and services could help with investigations.
Just because data is initially collected for one purpose doesn’t mean it can’t be used (or misused) for another.
For advertisers, the stakes aren’t as high. If you target the wrong person, maybe you’ll waste some media and have a lower-than-expected ROAS.
Algorithms may not create bias, but they scale it at warp speed. And when they’re used to make major decisions about people’s futures, even a small mistake can upend someone’s entire life – as Maneesh learns firsthand when he tests his algorithm on a more personal use case, with distressing results.
And that’s not to mention the ambiguities of consent. Consumers legally consent to data tracking all the time without thinking twice when signing up for social media platforms or accessing websites with cookie opt-in widgets. Still, for most people outside of the marketing world, “opting in” is a vague term that doesn’t make clear just how much personal data they’re relinquishing.
Now imagine that same data being handed over to immigration enforcement without direct consent. According to 404 Media, ICE is collecting addresses from the Department of Health and Human Services, effectively turning information people share to access basic services into a tool for surveillance.
Even when backed by good intentions, the development of technology can quickly lead to harm. We’ve seen this before. Facial recognition technology consistently misidentifies Black people, and ChatGPT often assumes that female users in male-dominated categories (think leadership roles, cybersecurity, etc.) are men.
If you’re building technology, it’s imperative that you address potential biases and speak out against possibly harmful use cases.
AI isn’t a magic wand. It’s a powerful and sometimes dangerous tool.
Which isn’t to say AI doesn’t have its place. “Data” acknowledges that AI’s role is complicated, and Libby is “very sympathetic” to Maneesh’s AI-enthused supervisor, who eagerly points out all of the clerical errors and latencies that automation can bypass.
But we can’t talk about the wins without the losses – or without acknowledging the risks.
The ad tech world likes to talk about AI solely as progress, but progress comes with a cost. How much are we willing to pay?
“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media. This column is part of a series of perspectives from AdExchanger’s editorial team.