“The Sell Sider” is a column written for the sell side of the digital media community.
Today’s column is written by Priti Patel Powell, senior director of business development and strategy at PubMatic.
Mention an A/B test and many advertisers immediately think about creative assets. Pitting one design or bit of content against another is a fast and effective way to learn and improve before, or even in the midst of, a campaign. Yet this same reliable, accurate approach is too often overlooked when it comes to performance testing.
Bad A/B testing is no better than flipping a coin. In programmatic advertising in particular, poorly designed tests that yield meaningless results are rampant, giving publishers bad information when assessing identity solutions, server-side versus client-side placement, which bidders to use, and auction timeouts.
Proper A/B testing, with more data-informed decision-making, can provide publishers with real guideposts for how to evolve their programmatic strategies and increase their revenue potential.
The case for isolated, split A/B testing
There are a variety of common, but incorrect ways that publishers test their ad technology. Too often, technical and optimization teams hurry through testing, fail to use a control, or overreach with complex multivariate testing.
Some may compare two ID solutions by testing one in the morning and one in the afternoon to see which performs better. Or they add a new demand partner on the server side and simply assess whether revenue goes up. In both of these scenarios, outside influences, such as different testing environments, skew the test and make the data impossible to compare.
Fair A/B tests account for every variable that needs to be equalized and isolate each one individually, rather than wasting time on haphazard tests that consume valuable development resources without providing real answers. Test one change against a control with randomized split testing, get a clear answer and repeat.
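The test-against-a-control loop described above can be sketched in code. The following is a minimal, hypothetical simulation, not a real ad stack integration: the function names are invented, and the eCPM figures and the 10-cent lift for the test bucket are simulated placeholders used only to show the mechanics of randomized assignment.

```python
import random

def assign_bucket(split=0.5):
    """Randomly assign each page view to control or test."""
    return "test" if random.random() < split else "control"

def run_pageview(bucket):
    # Control keeps the current setup; test changes exactly one
    # variable (e.g., one new ID module). The base eCPM and the
    # lift are simulated placeholders, not real benchmarks.
    base_cpm = random.gauss(2.00, 0.30)
    lift = 0.10 if bucket == "test" else 0.0
    return max(base_cpm + lift, 0.0)

# Simulate a day of traffic and compare average eCPM per bucket.
results = {"control": [], "test": []}
for _ in range(100_000):
    bucket = assign_bucket()
    results[bucket].append(run_pageview(bucket))

for bucket, cpms in results.items():
    print(bucket, round(sum(cpms) / len(cpms), 3))
```

Because assignment is random on every page view, both buckets see the same mix of times of day, geographies and content, which is exactly the property the morning-versus-afternoon test lacks.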
The four places publishers can improve testing
Last year, many publishers saw big traffic increases, shifts to mobile and video, and lots of new revenue opportunities, all of which can be best monetized through smart testing. Here are the most important elements to prioritize when testing to optimize this year:
1. Make identity providers swappable. Adding an identity provider is only valuable if it drives yield, which depends on who is buying through it. The industry is still in the early stages of identity resolution, and we don’t yet know which ID solutions will ultimately be “must-have.” Now is the time to build in flexibility, so ID solutions can easily be added and compared against one another as buyers make their own choices about which solutions to bet on.
2. Test bidders through rotation. More isn't more when it comes to monetization. Adding 25 bidders will slow down page performance and eat up whatever net new revenue each one might bring. Testing bidders sequentially isn't going to result in a fair test. It's better to isolate bidders against one another and test them in rotation.
3. Find the right spot in the “bell curve” of auction timeouts. To give consumers the best user experience, publishers should use a low auction timeout so everything loads fast. The downside is that slower bids may be lost. There’s a bell curve of optimal page load times, which may shift based on factors such as the type of content or even the day of the week. Some content can afford longer timeouts that let in more bids; on pages where people don’t stay very long, the same setting is a recipe for revenue loss. Testing timeouts across these variables is the best way to find the right balance.
4. Banish low-performing exchanges to server-side integrations. A lot of publishers will implement 10 partners on the client side, but returns can diminish after six to eight. Beyond the performance degradation, keeping the client-side roster small creates a sense of competition. By constantly comparing and testing demand partners against one another and moving lower performers into server-side integrations, publishers will not only see more efficient monetization, they’ll also gain bargaining power.
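The timeout “bell curve” in point 3 can be illustrated with a toy simulation. Everything here is an assumption for illustration: bid arrival is modeled as an exponential delay with a 400 ms mean, bid values are drawn uniformly, and the chance a reader abandons the page grows linearly with the timeout. None of these numbers come from real benchmarks; the point is only that revenue rises and then falls as the timeout grows.

```python
import random

def simulated_revenue(timeout_ms, trials=20_000):
    """Average revenue per page view at a given auction timeout."""
    total = 0.0
    for _ in range(trials):
        # Each bid arrives after a random delay; bids slower than
        # the timeout are lost (assumed 400 ms mean delay).
        bid_delay = random.expovariate(1 / 400)
        bid_value = random.uniform(0.5, 3.0)  # hypothetical CPM
        # Longer waits raise the chance the reader leaves first
        # (assumed linear abandonment, purely illustrative).
        abandoned = random.random() < timeout_ms / 5000
        if bid_delay <= timeout_ms and not abandoned:
            total += bid_value
    return total / trials

for timeout in (200, 500, 1000, 2000, 3000):
    print(timeout, "ms ->", round(simulated_revenue(timeout), 3))
```

Under these assumptions the curve peaks around the middle of the range: very short timeouts drop too many bids, while very long ones lose the audience before the auction clears. A real test would sweep timeouts with randomized split traffic, per content type and day of week, rather than simulate them.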
Testing is the only way to reduce the complexity that has become the norm in programmatic advertising. Every tech stack is different, and every channel demands more technology and more processes.
The only way to create a truly accurate measurement practice is to cut through the complexity with highly targeted, streamlined tests. Getting better at testing is a requirement for getting better at efficient monetization.