What Is Split Testing (A/B Testing) In Marketing?

*This post was originally published on CastlewoodStudios.com, and is republished here with permission.*

Advertisers of every size, from those promoting global corporations to those crafting ads for the smallest businesses, know that choosing the wrong words, colors, or images can harm an advertisement’s effectiveness, potentially scaring away the very people it’s meant to entice. For this reason, advertisers spend a great deal of time, money, and effort making sure their ads are as effective as possible.

But how can you know beforehand whether your ads really will be effective, and not just a waste of time? By testing them against one another in a process called multivariate testing, split testing, or A/B testing. No matter what you call it, if you’re not A/B testing your ads, you’re probably wasting your time and marketing budget on inferior or underperforming ads, and missing out on increased sales in the process.

So Exactly What IS A/B Testing, & Why Is It Important?

First, let’s talk about what A/B testing means. Essentially, you’re creating 2 or more different versions of an ad to see which performs better, kind of like The Hunger Games for advertisements. After the test is complete, the ad that performed best is dissected and analyzed like some alien creature in hopes of discovering what made it tick. The lessons learned are used to correct mistakes, capitalize on opportunities, and make future advertising efforts even more effective.
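
If you like to think in code, here’s a minimal sketch of that mechanic in Python, purely for illustration. The click-through rates below are invented (in a real test, they’re the unknowns you’re trying to discover), and of course real ad platforms handle the random assignment and tracking for you:

```python
import random

# Hypothetical "true" click-through rates for each ad variant.
# In a real A/B test, these are the unknowns you're trying to measure.
TRUE_CTR = {"A": 0.030, "B": 0.045}

results = {variant: {"views": 0, "clicks": 0} for variant in TRUE_CTR}

for _ in range(10_000):                      # 10,000 simulated viewers
    variant = random.choice(["A", "B"])      # each viewer gets a 50/50 split
    results[variant]["views"] += 1
    if random.random() < TRUE_CTR[variant]:  # did this viewer click?
        results[variant]["clicks"] += 1

for variant, tally in results.items():
    ctr = tally["clicks"] / tally["views"]
    print(f"Variant {variant}: {tally['views']} views, "
          f"{tally['clicks']} clicks, CTR {ctr:.2%}")
```

Run it and variant B should pull ahead, since its made-up click-through rate is higher; the whole point of a real test is that you don’t know those rates in advance.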

Here is a good example of an A and B version for a split test. Notice how everything EXCEPT the pictures is the same; here we’re only testing which pictures perform better.

For example, suppose you’re selling a particular brand of diapers. You might assume ads for these diapers would perform better if targeted at young mothers rather than hip young teenagers, but only by testing could you be absolutely sure. After all, testing might reveal a lucrative untapped market of hip young teenagers with bowel-incontinence issues (Okay, probably not, but you get the idea).

Once you’ve scientifically determined that mothers are more interested in diapers than teens are, the next step is to create an alternate version of the ad shown to young mothers, this time changing something like the headline, graphics, or text. Both versions of the ad are then run against each other to see which performs better. If the upgraded ad indeed proves more effective, future ads are modified to take advantage of the lessons learned. If not, the first ad retains its position until a new challenger is ready to be pitted against the reigning champion.

In this manner various ads are tested against each other over and over. The results of each contest are analyzed, changes made, and testing begins again. This process repeats until an ad is developed that’s so powerful it makes young mothers drop everything (except their children, hopefully) and immediately buy those diapers.
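
In code terms, that champion-vs-challenger loop boils down to something like this toy Python helper (the field names here are made up for illustration; any real campaign dashboard reports these numbers in its own format):

```python
def conversion_rate(ad):
    """Conversions per click; zero if the ad got no clicks at all."""
    return ad["conversions"] / ad["clicks"] if ad["clicks"] else 0.0

def pick_winner(champion, challenger):
    """Keep whichever ad converts better; the loser is retired."""
    if conversion_rate(challenger) > conversion_rate(champion):
        return challenger
    return champion

# One example round: the reigning ad vs. a variant with a new headline.
champion   = {"name": "original headline", "clicks": 400, "conversions": 20}
challenger = {"name": "new headline",      "clicks": 410, "conversions": 33}

winner = pick_winner(champion, challenger)
print(f"The reigning champion is now: {winner['name']}")  # -> new headline
```

Each round, the winner becomes the new champion, and the cycle repeats with the next challenger.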

Here is another example of the A and B versions from a real A/B test.

Sounds high-tech and cutting-edge, right? Actually, it’s been around for most of the last century! A/B testing got its start in the 1920s with a man named Claude Hopkins, who recognized the value of collecting hard data on which to base advertising decisions. Using his pioneering scientific testing methods, he successfully marketed everything from baked beans to body soap. In the process, he helped build brands that are still popular and well-known today, like Palmolive, Bissell, and Pepsodent. (Get his fantastic ebook for free here, no strings attached.) Today, nearly a hundred years later, A/B testing is done almost exactly the same way, only with more computers and fewer coal-fired, horse-drawn toasters (that was a thing back then, right?).

The Secret Of How A/B Testing REALLY Works

The easiest way to understand A/B testing is to see it in action, so let’s go through a real-world example of how we use it to help ensure our clients are getting the most out of every advertising dollar they spend.

Castlewood creates at least 2 versions of every potential advertisement. This allows us to compare similarities and differences between the ads, and draw conclusions based on their performance, like “more women were interested in buying Product X when we used this particular picture”, or “most people liked the ad with less text better”.

Since each ad variant requires time and money to create, we usually limit ourselves to only 2-3 variants, though if you can afford it, more is typically better. After all, the more ad variations you test, the more information you have to base conclusions and decisions on, and the more effective future marketing efforts should become.

Once those 2-3 variants are ready to test, each of us picks the ad we think will do best, and a good-natured competition begins over who is right. Sometimes even we’re surprised by the results, and those surprises remind us why it’s so important to A/B test before pursuing full-scale investment in an ad campaign.

One real-world example of an A/B test with surprising results springs immediately to mind: the campaign of a client trying to sell adorable little bundles of slobber and fuzz, AKA puppies. Initially, we discussed a variety of ad ideas with the client before narrowing the number of test ads to 3: a picture of the cutest critter by itself, a picture of all the cute critters together, and about 15 seconds of iPhone video of the little monsters running around.

I thought the single critter with the regal pose would prove the most popular; my teammates suspected the picture of the entire litter would melt hearts and open wallets. We were all curious to see how badly the video clip would bomb; after all, the camera was shaky, the lighting was bad, and the video was never intended for anything more than sharing with friends and family.

Ladies and Gentlemen, Our Contestants

By the end of the first day, things were unfolding pretty much as we’d expected: the solo puppy ad was doing well, the entire litter was almost as popular, and the video ad was being largely ignored. Satisfied things were going according to plan, we left for the day while the campaign ran its course. We returned the next morning, excited to see which ad had done best, and were blown away by what we found: that humble video had attracted more attention than the other two combined!

One week later, when the test campaign ended, the numbers were as follows:

• Single puppy ad: 52 clicks, with a conversion rate of roughly 6%.
• Entire litter ad: 7 clicks, with a 3% conversion rate.
• Video ad: 880 clicks, with a 12% conversion rate!
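
For the numerically curious, here’s a quick back-of-the-envelope check of those results in Python. Two assumptions on our part: the conversion rates are measured against clicks, and the implied conversion counts are rounded to whole numbers:

```python
import math

# (clicks, conversion rate) for each ad, taken from the list above
ads = {
    "single puppy":  (52,  0.06),
    "entire litter": (7,   0.03),
    "video":         (880, 0.12),
}

for name, (clicks, rate) in ads.items():
    print(f"{name}: {clicks} clicks -> roughly {clicks * rate:.0f} sales")

# A standard two-proportion z-test: how surprising is the gap in
# conversion rates between two ads, given their click counts?
def z_score(clicks_a, conversions_a, clicks_b, conversions_b):
    p_a = conversions_a / clicks_a
    p_b = conversions_b / clicks_b
    pooled = (conversions_a + conversions_b) / (clicks_a + clicks_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Video ad (about 106 conversions) vs. single puppy ad (about 3)
print(f"z = {z_score(880, 106, 52, 3):.2f}")  # roughly 1.4
```

(A z around 1.4 falls short of the conventional 1.96 cutoff, meaning the gap in conversion rates alone isn’t statistically airtight with only 52 clicks on the runner-up; the truly unambiguous result here is the video ad’s enormous lead in clicks.)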

Results like these are why we always A/B test, and why you should, too. After all, imagine if we had only run the ad of the entire litter: would we have assumed only 7 people would ever click on the client’s ads? If we’d only run the video ad, would we have assumed that any ad featuring such adorable little beasts would be extremely popular?

The truth is, nobody knows with 100% certainty how an ad will perform until it’s tested. Only A/B testing gives you information more accurate than just a “gut feeling”, and while gut feelings are great for artists and investigators, business people can’t afford to throw away money on hunches. They need hard data to base their decisions on, and that hard data can only be collected by A/B testing.
