When your ad testing strategy is just plain wrong

Rigorously testing your ads is a good thing… unless you’re testing the wrong things and tweaking your campaigns based on flawed conclusions.

It’s a truism you often see repeated here at Search Engine Land and at industry conferences: You need to test, test and test your ads and creative so that you can run with your winners and cull your losers, thereby increasing ROAS and ROI. All successful advertisers follow this dictum, and often tout the stellar results they achieve thanks to their diligent efforts.

But ask a group of advertisers which kinds of tests are most important to perform, which ad elements are most worth testing, or whether you should trust the results suggested by ad testing tools… and you'll likely get wildly different and seemingly contradictory responses.

This confusion frustrates Aaron Levy, Manager of Client Strategy at digital agency Elite SEM. Aaron is a veteran of hundreds of ad tests and through extensive experience has developed a clear idea of what works – and what doesn’t. He’ll be talking about how to create and execute a robust ad testing framework at our upcoming SMX East conference in New York City this month.

I asked Aaron about some of the misconceptions he’s encountered when testing ads.

It seems fairly obvious that testing is an important part of executing a successful SEM campaign. Are most people doing it right? If not, what kinds of mistakes are they making?

In a word… no. Most people aren't testing things quite right because they tend to overestimate the value or impact of their account structure on new tests or tools. Think of it like slapping a turbocharger on your old Toyota Tercel: a powerful new test deployed over a structure that was effective in its own right won't work. If you REALLY want a new tool to succeed (especially in the era of smart campaigns and automation), you need to adjust your structures to adapt to new tests.

What are some non-obvious types of tests that you recommend?

I’m a huge fan of gender and geo-driven copy tests. They’re notoriously challenging to run since there’s not really a way to automate them. However, we’ve seen HUGE performance increases by customizing or localizing ads at scale. It’s a challenge, as you can’t write unique ads for every city in the country without breaking Google Ads (believe me, we’ve tried), but if there are specific regions that are struggling, it’s definitely worth splitting them out with unique tests.
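Levy's point about splitting out struggling regions can be made concrete. Below is a minimal Python sketch of the idea: flag regions whose click-through rate falls below a threshold and generate a localized headline for each from a template. The region names, CTR figures, threshold and template are all hypothetical, and a real deployment would go through the Google Ads interface or API rather than a standalone script.

```python
# A minimal sketch of templated geo copy: split out struggling regions
# and generate a localized headline for each, rather than running one
# ad everywhere. All names and numbers here are invented for illustration.

STRUGGLING_CTR = 0.02  # hypothetical cutoff: flag regions below 2% CTR

regions = {
    "Boston": {"ctr": 0.034, "nickname": "Boston"},
    "Dallas": {"ctr": 0.015, "nickname": "the DFW metro"},
    "Phoenix": {"ctr": 0.012, "nickname": "the Valley"},
}

TEMPLATE = "Same-Day Delivery in {nickname} | Free Returns"

def localized_variants(regions: dict, threshold: float) -> dict:
    """Return a localized headline for each underperforming region."""
    return {
        name: TEMPLATE.format(nickname=data["nickname"])
        for name, data in regions.items()
        if data["ctr"] < threshold
    }

if __name__ == "__main__":
    for region, headline in localized_variants(regions, STRUGGLING_CTR).items():
        print(f"{region}: test -> {headline}")
```

The point of the template is scale: you localize the one element that matters (the place name a searcher recognizes) without hand-writing a unique ad per city.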

Is there anything that can’t be tested, or is just a waste of time?

Everything is worth testing, but you need to test it right! Testing prospecting on the Google Display Network or YouTube with junky creative won’t work. Testing on those channels and trying to measure them the same way you measure SEM definitely won’t work. If you’re testing a new venture or stretching outside of your comfort zone, don’t try to measure it the same way you measure everything else. That’s where you wind up wasting your time.

What steps would you recommend people take once they’ve run some tests on ads?

Iterate, iterate, iterate! Once you find something that works, put it against a new contender. Make sure you take what you’ve learned over time and apply it to new tests. That, and test your… tests. You might not have to write a new piece of ad copy for every promotion. You may not get a lift from changing your ads every week, and you may just have an ad that can’t be beat. Test your assumptions in addition to the ads themselves.
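Levy doesn't prescribe a statistical method here, but one common way to check whether a challenger genuinely beat the champion (rather than getting lucky on a small sample) is a two-proportion z-test on click-through rate. The sketch below is one such check, assuming impressions and clicks as the inputs; the counts are invented purely for illustration.

```python
# A hedged sketch of a two-proportion z-test on CTR, one common way to
# judge whether a challenger ad's lift over the champion is real.
# All impression/click counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(clicks_a: int, imps_a: int, clicks_b: int, imps_b: int):
    """Return (z, two-sided p-value) for the difference in CTR."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Champion: 420 clicks on 12,000 impressions; challenger: 510 on 12,500.
z, p = two_proportion_z(420, 12_000, 510, 12_500)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p < 0.05: the lift is unlikely to be noise
```

A check like this also serves the "test your tests" point: if your weekly ad swaps never clear significance, that's evidence you may not need to rewrite copy for every promotion.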