This is a newsletter. If you would like to receive future newsletters via email, please use the subscribe button below.
This week's RoboCMO covers two ways a B2B startup’s lack of data should lead its marketers to break marketing conventions.
Mo Audience, Mo Problems
If your business gets any real traction on paid ad platforms, you will eventually get an email. That email will tell you that you have won the cosmic lottery of getting access to a Google/Meta/LinkedIn account manager. Your account manager (and whatever title they may use, they are all account managers) is here to help you discover new products and features. They are here to help you find new audiences for your ads. They are here to share best practices from the other accounts they manage. But above all, they are here to get you to spend more money on their platform. If you leave this newsletter with nothing else, let it be that.
And it’s not that these account managers will induce you to spend more money by lying or even intentionally misleading you. It’s more that:
They have A LOT of accounts to manage and only know your business at a surface level.
They have received best practices from their employer based on averages across a huge number of customers. Most of those customers do not resemble your business in key ways.
They have related but different incentives than you. While they would prefer that your business earn a high return on your ad spend, their number one goal is to increase that spend.
One of the most common and insidious pieces of advice - nay, mantras - that these account managers give their B2B startup accounts is that bigger audiences are better. If only you, B2B startup marketer, could give their magic algorithm 10X as much audience to choose from, you would surely double your ROI on their platform.
This may be sound advice for most B2C advertisers, but it fails most B2B ones for two big reasons:
The Ad Platforms Struggle to Infer Crucial B2B Filters:
The platforms already know more about the consumer behavior of the majority of Americans than most B2C startups could ever hope to. In B2B, however, the biggest determinants of whether a Google or Meta user will be interested in my product are often their employer and job function.
Those are audience filters you can apply in all the ad platforms through one mechanism or another (third-party filters, uploaded lists, etc.). And in contrast to the platforms' ability to use seemingly unrelated data to pinpoint the consumer preferences of their users, they seem to struggle to infer their users' job function, title/seniority, company size, industry, and so on.
B2B Audience Members are a Needle in the Ad Platform Haystack:
Most consumer products have target audiences of a million or more US residents. Not so in B2B. Imagine a company selling software to law firms with 20 or more employees. They have around 8,000 potential customers in the US.1 Even if we assume there are 5 people at each firm who would be good leads, that's a total audience of 40,000 people.
If the lawyer software company has added first- or third-party firmographic filters to sculpt tight-fit Meta, Google, and Bing audiences of 50,000 people, will performance really improve by broadening the audience to 500,000? Actually, Meta's off-the-rack advice is to start with 2-10 million, but I've never heard one of their AMs be quite that bold on my B2B accounts.2
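The audience math above is simple enough to sketch in a few lines. This is a back-of-envelope helper, not a real market-sizing tool; the firm count comes from the Statista link in footnote 1, and the 5-buyers-per-firm figure is the same assumption made above:

```python
def addressable_audience(accounts_by_segment, buyers_per_account=5):
    """Rough ad-audience size: accounts in each segment times buyers per account."""
    total_accounts = sum(accounts_by_segment.values())
    return total_accounts * buyers_per_account

# The law-firm example from above (count per footnote 1; buyers/firm is an assumption)
segments = {"US law firms with 20+ employees": 8_000}
print(addressable_audience(segments))  # 40000
```

Forty thousand people, total. Against that ceiling, "broaden to 500,000" mostly means "pay to show ads to 460,000 people who will never buy."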
What does it all mean for you, B2B startup marketer? Do not “start broad.” Start your campaigns running against audiences you have high certainty are a fit, and if you find success there, test the successful creative on adjacent audiences. Meta will tell you that it’s crucial to hit 50 conversions per week for your campaigns to be optimized, but don’t sweat it. “Optimized” is not binary. Having 50 conversions is better than having 40, all else being equal. But a campaign with a tight audience producing 20 conversions per week can easily outperform a broad one doing 50.
Death by a Thousand Button Colors
About the same time that your B2B startup is seeing traction in paid ads, you'll likely start trying to optimize the conversion rate of your webpages. And in doing so, many of you will feel bound by age-old wisdom that you seem to have known your entire career. Perhaps your entire life (was it from a nursery rhyme? a lullaby? absorbed through atmospheric factors in utero?). "Only test small changes, one variable at a time, so that you are certain of the impact of each change."
I am here to tell you, B2B startup marketer, this may be good advice for your comrades at Amazon, who receive more conversions in an hour than you can hope to in a year. But it is terrible advice for you.
Your B2B startup likely gets between 50 and 500 leads per month. Whatever it’s going to get this year, it probably got fewer last year, and even fewer the year before that. The pages you are trying to optimize, my friend, are very un-optimized. Whereas Amazon has spent billions of purchases optimizing its pages, your website may have seen fewer than 10,000 conversion events in its entire existence. It’s almost certainly seen fewer than 100,000. What makes you think that the current version of your page is only a button color away from making the most out of your traffic?
Imagine that your company is a fishing boat. Your job is to catch as many pounds of fish as possible with that boat this year. You get to move the boat each week. Week 1, you drop your nets off the Alaskan coast and catch 1,000 lbs. What are you going to do in week 2? Move 100 yards away? Or see what's biting in one of the bazillion other spots you could be testing? Are you more likely to see huge gains 100 yards or 100 miles away? How about 10,000 miles away?
Test big, B2B marketer!
“Oh, but if we test two completely different pages,” you say, “How will we be certain which small change is responsible for the results?” To what end, that certainty?
Imagine we test two colors of "Subscribe" button on the current RoboCMO homepage - blue and red. The blue button wins. The results are statistically significant. We've done everything by the book. What prize have we won? We know that on the current version of the homepage, a blue Subscribe button is more effective than a red one. What if we want to use a different version of the homepage in the future? Can we port our rock-solid learnings from this page over to that one? We can hope. But as soon as we change anything else about the page, we've lost some confidence that we have the optimal button color.
"But big tests take more design and dev resources," you say. True. You will spend more dev and design resources testing big than testing small. But testing small costs you something far more valuable - time. You end up wasting your company's precious web sessions, the currency of conversion rate optimization, on changes that barely move the needle. It's worse than that, actually. Precisely because small changes result in small gains or small losses, they take much longer to reach statistically significant results.
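The significance math bears this out. Here's a rough sketch using the standard two-proportion sample-size approximation at 95% confidence and 80% power; the 2% baseline conversion rate and the lift sizes are illustrative assumptions, not data from any real test:

```python
import math

def sessions_per_arm(p_control, p_variant, z_alpha=1.96, z_beta=0.8416):
    """Sessions needed in each arm of an A/B test to detect the difference
    between two conversion rates (two-proportion z-test approximation,
    95% confidence, 80% power)."""
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Button-color tweak: 2.0% -> 2.2% conversion rate
print(sessions_per_arm(0.02, 0.022))  # roughly 80,000 sessions per variant

# Page redesign: 2.0% -> 3.0% conversion rate
print(sessions_per_arm(0.02, 0.03))   # roughly 4,000 sessions per variant
```

At a few thousand sessions a month, the button test takes years to resolve. The redesign test resolves in weeks.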
And for all these reasons, my advice here is roughly the inverse of my advice on ad platform audiences. With testing, start big. And keep testing big until you have a high degree of certainty that you've found a version of the page you'll keep for a long time. Then start tweaking.
Until next week,
TB
https://www.statista.com/statistics/822038/us-legal-services-market-firms-size/
https://www.facebook.com/business/ads/ad-targeting