AI Is Optimizing LADbible’s Prebid Timeouts

AI is automating one of publishers’ most thankless but essential tasks.

No, not content generation. This time, the machines are taking over Prebid optimization.

To wring more revenue from the open auction, publishers often adjust the parameters of their Prebid wrappers. For example, a longer timeout window for the Prebid auction can yield more revenue by giving bidders more time to process and complete their bids. But a longer timeout can also degrade the reader’s experience. Finding the sweet spot between revenue and user experience is a complicated balancing act.
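In Prebid.js, that timeout window is a wrapper configuration setting. A minimal sketch (`bidderTimeout` is the standard Prebid.js config key, in milliseconds; the stubbed `pbjs` object and the 1,500 ms value are illustrative, not any publisher’s actual setup):

```javascript
// Minimal sketch of how a Prebid.js wrapper sets its auction timeout.
const pbjs = {
  config: {},
  setConfig(c) { Object.assign(this.config, c); } // stands in for the real Prebid.js global
};

pbjs.setConfig({
  bidderTimeout: 1500 // bidders that haven't responded after 1.5 s are dropped from the auction
});
```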

To strike the right balance, publishers A/B test different setups to measure revenue impact and the onsite experience. And this task gets even more complicated when publishers operate multiple sites, which can have different wrappers for different regions.

To find a more efficient approach, LADbible tried out a new AI optimization feature for Demand Manager, Magnite’s Prebid management and analytics tool. The new feature, which Magnite announced today, provides publishers with automated Prebid timeout optimization recommendations they can try on a portion of their traffic.

In early tests, 80% of Demand Manager’s recommended Prebid wrappers outperformed publishers’ existing setups.

LADbible has only been using the new feature for about a month, so it’s still too soon to say whether its recommendations consistently yield higher revenue, said Ben Elshaw, Director of Operations at LADbible. But the early results have been promising. Google Ad Manager is the only other platform that provides automated optimization recommendations to this extent, he added.

Manual vs. automated testing

Manual A/B testing has been part of Demand Manager for about three years now, according to Magnite VP of Product Management Matt Tengler, who heads the Demand Manager dev team.

But the new feature automatically creates A/B test segments so publishers can test algorithmically derived suggestions against existing Prebid setups.

LADbible has used Demand Manager since 2019, and since then it has conducted several manual A/B tests to compare different Prebid auction dynamics. For example, it’s tested timeouts, number of bidders and whether certain bidders perform better on the client side vs. the server side, Elshaw said. “This allowed us to create geo-specific wrappers for our core markets and have the most optimal SSP partners competing together.”

However, consistently running A/B tests across LADbible’s numerous publisher brands – LADbible, UNILAD, Tyla, GAMINGbible, SPORTbible, UNILAD Tech – and across different markets is complex and better served by automation, Elshaw said.

“We’re now able to create recommended A/B tests with a click of a button rather than having to manually create experiment wrappers segmented by the parameter we want to test,” he said.

Auction timeouts

Demand Manager’s new automated testing feature is currently solely focused on adjusting Prebid auction timeouts, Tengler said. Magnite started with that functionality because it can have a major impact on a publisher’s eCPM (effective CPM), he added.

A variety of factors in how an ad auction plays out can be affected by its timeout settings.

“Some bidders are slow, and some are fast. So, if you have a revenue dependency on slower bidders, that would trend toward a higher timeout,” he said.

For example, if a publisher is calling a bidder that takes about 900 milliseconds on average to respond to an auction, but its Prebid wrapper is configured to time out at 1,000 milliseconds, that publisher is likely missing out on a lot of valid bids, Tengler said. By increasing the timeout by a few hundred milliseconds, that publisher could capture more bids from that bidder, driving up revenue.
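That arithmetic can be pictured with a simple headroom check. This is an illustration only, not Magnite’s algorithm; the bidder names, latencies and the 10% threshold are invented:

```javascript
// Illustrative headroom check: flag bidders whose average response time
// sits within 10% of the configured auction timeout.
function bidderHeadroom(timeoutMs, bidders) {
  return bidders.map((b) => ({
    name: b.name,
    headroomMs: timeoutMs - b.avgLatencyMs,            // slack before the auction closes
    likelyTimingOut: b.avgLatencyMs >= timeoutMs * 0.9 // within 10% of the deadline
  }));
}

const report = bidderHeadroom(1000, [
  { name: 'fastBidder', avgLatencyMs: 300 },
  { name: 'slowBidder', avgLatencyMs: 900 } // the ~900 ms bidder from the example above
]);
// slowBidder is left only 100 ms of slack, so many of its bids arrive too late;
// raising the timeout a few hundred milliseconds would capture more of them.
```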

Similarly, publishers can create different Prebid wrappers for mobile and desktop traffic and optimize those wrappers according to how the auction process plays out in different geographic areas. Since average mobile device power and network speeds vary by country, it might make sense to set higher auction timeouts in regions with slower mobile connections.
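One hedged way to picture such geo- and device-specific wrappers is a lookup table keyed by region and device. The regions and millisecond values below are invented; a real wrapper would be tuned from test data:

```javascript
// Illustrative only: a per-region, per-device timeout table.
const timeoutMatrix = {
  US: { desktop: 1000, mobile: 1400 },
  BR: { desktop: 1200, mobile: 1800 } // slower average mobile connections -> more headroom
};

function timeoutFor(geo, device, fallbackMs = 1500) {
  return (timeoutMatrix[geo] && timeoutMatrix[geo][device]) || fallbackMs;
}
```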

The new feature surfaces these types of recommendations within Demand Manager’s user interface. The publisher can decide whether to dedicate a segment of traffic (typically 10%, although this can be adjusted) to testing the new Prebid setup.
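A traffic split like that is commonly done by bucketing a stable identifier. This hash-and-bucket scheme is an assumption for illustration, not Demand Manager’s actual implementation:

```javascript
// Sketch of a 10% test-traffic split (the article's typical share).
function inTestSegment(userId, testShare = 0.1) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  return (h % 1000) / 1000 < testShare; // e.g. buckets 0-99 of 1,000 at a 10% share
}
```

Hashing a stable ID keeps each user in the same arm across page views, so the test and control experiences stay consistent.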

However, increasing the Prebid timeout could also negatively affect page load speeds, and therefore, user experience. Demand Manager includes information about Google’s Core Web Vitals, which evaluate a site’s performance against benchmarks derived from other sites, in its recommendations so publishers can weigh the benefits of any changes to their Prebid wrapper against changes to the on-site experience.

When LADbible opens an optimization recommendation in the Demand Manager interface, it shows the expected CPM lift and Magnite’s assessment of the experiment’s likelihood of success, Elshaw said. “Provided that the recommendation is within a range we are happy with – e.g., the recommended timeout is below 2,000 milliseconds – then we are happy to initiate a test.”

Next steps

Although the automation feature is currently only suited to evaluating timeouts, LADbible is interested in expanding its use to test new ID solutions and bidders on a split of traffic before fully integrating them, which the publisher currently does manually.

“[The automation feature] hasn’t yet fundamentally changed our approach to A/B testing in Prebid, however, we see this becoming a larger part of our strategy in testing out different wrapper configs, such as using experiments to split traffic both client and server-side for specific bidders,” he added.

That functionality is among Magnite’s planned next steps. The number of parameters that can affect Prebid revenue is too large for publishers to manage on their own, according to Tengler.

Between which IDs and which tech vendors to use, and the best order in which to call bidders, “the permutations explode really quickly, so it’s not really addressable by a human,” Tengler said. “Just like everywhere else in ad tech, we’ve got to figure out how to bring the machines to bear.”
