
Zefr’s New AI Chief Wants To Prove The Value Of ‘Truthiness’


Zefr is the latest company to name a chief AI officer. Jon Morra was promoted to the role in February after about seven years with the company.

Zefr is among the brand safety and suitability vendors with a specialty in rating contextual placements across the walled gardens, including YouTube, TikTok and Meta. Over time, Morra has seen content moderation shift from “mass human labeling” (where humans reviewed and manually marked content for violations) to painstakingly training machine learning models on the same rules. Now, he said, large language models have begun to do “pseudo labeling” of content, with a human stepping in at the end just to vet the software’s decisions.
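The "pseudo labeling" workflow Morra describes can be sketched in a few lines: a model labels content automatically, and a human steps in only to vet the uncertain calls. This is a minimal illustration, not Zefr's actual system; the `classify()` stub, the labels, and the confidence threshold are all invented for the sketch (a real pipeline would call an LLM here).

```python
# Hypothetical sketch of LLM "pseudo labeling" with human vetting.
# classify() stands in for a real large language model; the names
# and the 0.85 threshold are assumptions for illustration only.

REVIEW_THRESHOLD = 0.85  # below this, a human vets the decision

def classify(text: str) -> tuple[str, float]:
    """Stub for an LLM policy classifier: returns (label, confidence)."""
    if "weapon" in text.lower():
        return "violation", 0.95  # clear policy match, high confidence
    return "safe", 0.70           # ambiguous, low confidence

def pseudo_label(items: list[str]) -> list[dict]:
    results = []
    for text in items:
        label, conf = classify(text)
        results.append({
            "text": text,
            "label": label,
            # Humans step in at the end only to vet uncertain decisions.
            "needs_human_review": conf < REVIEW_THRESHOLD,
        })
    return results

labels = pseudo_label(["A cooking tutorial", "Video showing a weapon"])
for row in labels:
    print(row["label"], row["needs_human_review"])
```

The point of the threshold is the shift Morra describes: instead of humans reviewing everything up front, review effort concentrates on the small slice of content the model is unsure about.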

“My job is to figure out how to use AI responsibly, to keep on top of research trends and know how to cut through the noise,” said Morra, who joked he falls asleep most nights reading the machine learning subreddit.

AdExchanger caught up with Morra about the new role and how digital media will adapt (or acquiesce) to AI technology.

AdExchanger: What are your top priorities in your new role?

JON MORRA: First is identification of misinformation. The amount of generative content online that’s not clearly marked [as AI generated] is growing.

Two, scaling our policy effectively in as many languages and modalities as we can.

Third, we have a new initiative around responsible AI. A lot of our customers are creating generative experiences, so there’s a burgeoning market for making sure these experiences are safe and suitable.

Why is it difficult to detect misinformation?

When you’re looking at the GARM [Global Alliance for Responsible Media] categories, a well-trained person can assert whether a piece of content matches a policy. Is somebody committing a crime? Is there a weapon? Is somebody consuming alcohol? People can be trained to do that. Misinformation, not so much.

Asserting the truthiness of something is hard.

Truthiness is an interesting way to put it. Is there not always an absolute truth?

You have two separate problems. There’s not always an absolute truth. But you also have negatives that are hard to prove.

There was a post saying that Joe Biden’s mental faculties aren’t what they used to be. Is that true? Is that false? He’s in his 80s. He’s never been diagnosed with Alzheimer’s. Does he have some other condition? Probably not, but it gets hard to prove a negative.

What would you do to prove a negative?

We’ll find articles from trusted news sources that talk about why that’s probably not true. Ultimately, our policy team makes the call about whether or not they want to add a fact to our database. It’s case by case.

How do you stay ahead of misinformation trends?

Our goal is to stay on top of these facts and react as fast as we can.

In 2022, we acquired AdVerif.ai, which focuses on misinformation. Zefr also integrates with verified fact-checkers [International Fact-Checking Network members] and public data sources, which we use as our ground truth to train our models to assert, when some new piece of content comes in, whether it’s true or false.

In addition, our policy team hunts for social media trends. Once they find a trend, they try to find a verified fact to say this is proven or disproven, according to some third-party source. We then put that fact into our database and retrain and redeploy our models.

We want to make sure we get both a global definition of misinformation and a customer-focused definition.

What’s the difference between global and customer-focused misinformation?

Global misinformation would be [misinformation about] anything you would read on CNN or in a major newspaper.

Brand-specific misinformation could be where a brand creates a product and somebody claims that product causes cancer.

Where do you see generative AI going?

Generative models are going in two separate directions. One is bigger. GPT-5 is going to be this monstrous model that’s going to consume a ton of compute power.

The other thing you see is smaller, more targeted models. This is where Zefr is investing: using the big models to understand the world at large, fine-tuning them and creating these smaller models to do one thing really well – in our case, brand safety and suitability.

Where the generative models excel is helping us come up with training data.
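The idea of using large generative models to produce training data for smaller, specialized models can be sketched like this. The `generate()` stub stands in for a call to a real large model, and the policy names and labels are assumptions for illustration; the takeaway is the shape of the pipeline, not the specifics.

```python
# Hypothetical sketch: a large generative model writes labeled examples,
# which become training data for a small, specialized classifier.
# generate() is a stub; in practice this would be an LLM API call.

def generate(policy: str, label: str, n: int) -> list[dict]:
    """Stub: a large model would write n example texts for a policy label."""
    return [
        {"text": f"synthetic {label} example {i} for policy '{policy}'",
         "label": label}
        for i in range(n)
    ]

# Build a balanced training set for a narrow suitability task; a small
# model fine-tuned on data like this does "one thing really well".
training_set = (
    generate("alcohol consumption", "suitable", 3)
    + generate("alcohol consumption", "unsuitable", 3)
)
print(len(training_set))  # 6
```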

What are the implications of generative AI for brand safety and suitability?

The future of brand safety is this ability to run fast. When we have a policy change, no longer do we need to train our crowdsourced reviewers on what that policy change means, get a million pieces of content labeled about that policy change, retrain the model and redeploy.

Now, the cycle of deployment and keeping up with new policies – new content in the wild – has gotten a lot faster.

This interview has been edited and condensed.
