AdExchanger: How do you think about writing a book on marketing, measurement in particular, when we’re in a period of upheaval and nobody knows how things will change?
HOYNE: I think it’s about the spirit of experimentation that exists right now.
[Pre-pandemic], my observation was that, generally, companies were happy to make slight, iterative changes to their business.
Some businesses thought going from physical to ecommerce would take years. During the pandemic, they made those changes in months. It’s no longer enough to skate along expecting the business to be carried forward. Companies are open to new ways of approaching problems.
Preliminary data shows that if you take your really great customers from 2019 and ask, “Where did they go? Are they still with you today?,” most would find that those customers are still around and still spend more than most of your customer base, but their behavior has changed.
What would you say to a smaller brand marketer who may not feel like they can do ambitious experimentation?
I was aware when I wrote the book that if you tell people to buy and integrate elaborate CRM and cloud platforms, then you immediately isolate yourself from students who don’t have access to those systems or the SMBs that haven’t built them.
But what’s required for somebody to understand their customer relationships and an idea of lifetime value, it turns out, is remarkably simple. It’s three columns of data. It’s the date of a transaction, the amount of a transaction, and then some identifier – an email address, a CRM ID, a loyalty program number – that allows you to tie together those transactions and say they’re from the same person.
That’s what it takes to build a predictive model based on your customer relationships. Then you’re asking questions of that business data: How does the value of our customers change if they downloaded a mobile app or not? If they sign up for emails or use a coupon code? What month we acquired them in?
Each question requires one more data point, and there’s no right or wrong answer.
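The three-column starting point Hoyne describes can be sketched in a few lines of Python. The transaction rows, email identifiers, and helper name here are illustrative, not from the interview:

```python
from collections import defaultdict

# Hypothetical transactions with the three columns described:
# date, amount, and an identifier tying purchases to one person.
transactions = [
    ("2023-01-05", 40.0, "ana@example.com"),
    ("2023-02-11", 25.0, "ana@example.com"),
    ("2023-01-20", 15.0, "ben@example.com"),
    ("2023-03-02", 60.0, "ana@example.com"),
]

def customer_value(rows):
    """Total observed spend per identifier -- the simplest
    starting point for a lifetime value model."""
    totals = defaultdict(float)
    for _date, amount, customer_id in rows:
        totals[customer_id] += amount
    return dict(totals)

print(customer_value(transactions))
# ana@example.com has spent 125.0 in total, ben@example.com 15.0
```

Each additional question (app install, email signup, acquisition month) would add one more column to the rows and one more grouping key to the aggregation.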
One of my favorite examples is a business that changed a customer survey to put a positive question in front. So instead of opening with something like, “What can we do better?” they asked, “What have you enjoyed about our service?”
They could still see measurable effects in the customer data sets 18 months later, because it apparently improved perception of the brand and the customer’s responsiveness to the company.
One thing that stuck out to me in the book was getting people to “agree to honor the results” before beginning experiments.
You need a human lens. If a company rethinks how it values customers, that’s going to be a meaningful transformation that means some teams may get more or less budget or headcount or favor.
You create detractors among teams whose actions had more value under the old metrics.
I remember when Groupon and group discount services were very popular. You’d see people out with coupons for $5 lunches. If you measure that by how many customers were acquired, or how many lunches were sold, the metrics look great. Then you look at lifetime value and, well, most of these people never return. When you change these metrics, some teams’ performance may look worse.
What’s your advice?
Two things I’ve seen work well for this. The first is, before going through any data-driven or measurement transformation, understand what commitment the organization is willing to make based on results.
A lot of companies say, “We’ll let the data decide.” But data can be interpreted or discarded however someone needs it to be. If you can’t get agreement early on, then why even run the test and leave everything open to debate later?
Second, the organization needs time to respond. Often, I’ll see companies adopt a new metric by making a hard pivot. They no longer incentivize the same metric, and the organization is disrupted.
I advocate not forcing a change in metrics or incentive programs right away. Give it three or six months where you surface the new metric alongside pre-existing dashboards and reports.
Give them a column for lifetime value metrics on measurement reports. They’ll start asking: How come some customers are worth 100 times more than the average? Why are customers coming in from this channel or this campaign more likely to stick around? That’s the change you want.
For that marketer, how do you justify this up the chain, aside from asking for patience?
Patience is part of it. But I would also say that positioning is most important.
One of the first presentations I gave around lifetime value, I thought, was a great, complete pitch. And the takeaway from the board was completely different. They looked at it and said, “Wait a minute, so you have a metric that no longer looks at the short term, but looks at the long term, which means that we can’t judge your performance today? We need to wait six months, or 12 months, until we see the ROI?”
Up the chain, it can read as less accountability, and the idea never goes forward.
One useful exercise is to split your customer base into quartiles based on the new lifetime value metric. The top 25% of customers are often a large share of revenue – more than 80% for some organizations. Then you have the bottom 25% of customers, who likely never came back. And I just put a column on at the end that shows how much they’re paying to acquire each customer, and it’s the same for each.
You’re trying to provoke the question, “Why are we measuring our organization this way?”
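The quartile exercise described above can be sketched as follows. The customer values and the flat acquisition cost are made-up numbers for illustration, assuming the per-customer values have already been computed:

```python
def quartile_report(customer_values, acquisition_cost):
    """Rank customers by value, split them into quartiles, and show
    each quartile's share of revenue next to a flat acquisition cost
    -- the column that is the same for every customer."""
    ranked = sorted(customer_values, reverse=True)
    total = sum(ranked)
    size = max(1, len(ranked) // 4)
    report = []
    for q in range(4):
        # Last quartile absorbs any remainder from uneven division.
        chunk = ranked[q * size:(q + 1) * size] if q < 3 else ranked[3 * size:]
        share = sum(chunk) / total
        report.append((f"Q{q + 1}", round(share, 2), acquisition_cost))
    return report

# Illustrative values: a top-heavy customer base with a flat $25 CAC.
values = [500, 420, 300, 80, 60, 40, 20, 10]
for row in quartile_report(values, acquisition_cost=25):
    print(row)
# The top quartile carries 0.64 of revenue; the bottom carries 0.02
# -- yet both cost the same 25 to acquire.
```

The mismatch between that last column and the revenue shares is what provokes the question about how the organization measures itself.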
This interview has been condensed and edited.