Split Test Strategies - Vine Street Digital

Split Test Strategies – Ad Copy & Audience

Split-testing (also known as A/B testing) is considered best practice in PPC marketing, and in marketing in general. While it might go by some fancy names or be peppered with marketing jargon, the concept is pretty simple. A split test is designed to test a hypothesis.

Some example questions are:

  • “Which message is more appealing?”
  • “Which landing page converts better?”
  • “Should I get people to call me, or fill out this form?”

Whatever the hypothesis, split testing is designed to test one option against another to see what best serves your goals.

The Digital Marketer’s Lab

If you want to test something, it’s time to channel your inner scientist. It’s often not enough to just run two different ads against each other; you also need to consider the other variables at play. In a good science experiment, you’d test something against a “control”: you create a situation that removes all other factors and variables, so you can isolate the one thing you’re trying to test. Unfortunately, in the marketing game, it’s very hard to put your ad or your landing page in isolation, which means it can be tricky to say with confidence which of your ads has performed better.

An example test

For example, you might be trying to find out whether you should include the price of your product in the ad copy. On one hand, including the price upfront might mean that only people who can afford your product will click. This would likely lead to a better conversion rate and lower ad spend. On the other hand, including the price upfront might alienate some potential buyers, who may need to see more value communicated to them before the price seems justified.

As a result, including the price might mean fewer sales overall. To test this, you run the ad with the price against an ad without the price, with each ad showing 50% of the time. Seems like a good test, right? It’s a good place to start, but there are some things you’ll need to consider when performing your test.

How much data is enough data?

First, you’ll need to decide how much data you need to make a decision. You’re probably not going to be able to answer your hypothesis with confidence if each ad only receives 5 impressions. Unfortunately, there’s no hard-and-fast rule about how much data you need, and it can vary between campaigns. Some marketers like to get a minimum of 100 users; others prefer much more.
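
One common way to check whether the gap between two ads is a real difference or just noise is a two-proportion z-test on the conversion rates. The numbers below are entirely made up for illustration; this is a minimal sketch of the statistics, not a full testing workflow:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a / n_a: conversions and clicks for variant A,
    conv_b / n_b: conversions and clicks for variant B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical numbers: ad with price vs ad without price
z, p = two_proportion_z_test(conv_a=40, n_a=500, conv_b=25, n_b=500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (commonly below 0.05) suggests the difference is unlikely to be chance alone; with these made-up numbers the result sits right around that threshold, which is exactly the kind of case where you’d want more data before deciding.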

When did you collect that data and is it reflective of “normal” behaviour?

You’ll need to consider the length of time to run this test. It might only take you a few days to get all the impressions you want to be able to make a decision, but what if those were only weekend days? Perhaps this behaviour would change if it were run on a Tuesday. Or perhaps the behaviour would change if it were run at a different time of year.

Consumer behaviour can change depending on the time of day, the day of the week, the month of the year, or the season. You need only think of Christmas to understand just how drastically behaviours can change. So, if you’re running a test, you need to consider the “when” as another variable that could impact your test. You may never know what “normal” behaviour is, but you should take anomalies into consideration.
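
One simple way to check whether the “when” is skewing your numbers is to break your results down by day of the week before drawing conclusions. A minimal sketch, using a handful of made-up timestamped click records:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical click records: (ISO timestamp, converted?)
clicks = [
    ("2024-03-02T10:15:00", True),   # a Saturday
    ("2024-03-02T14:40:00", False),
    ("2024-03-05T09:05:00", True),   # a Tuesday
    ("2024-03-05T11:30:00", True),
    ("2024-03-09T16:20:00", False),  # another Saturday
]

by_day = defaultdict(lambda: [0, 0])  # weekday -> [conversions, clicks]
for stamp, converted in clicks:
    day = datetime.fromisoformat(stamp).strftime("%A")
    by_day[day][1] += 1
    by_day[day][0] += int(converted)

for day, (conv, total) in by_day.items():
    print(f"{day}: {conv}/{total} converted")
```

If one variant’s data is concentrated on weekends and the other’s on weekdays, you’re comparing days as much as ads; running the test across full weeks avoids that.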

Is it the test, or the traffic?

The idea behind a simple split test relies on your ad or your landing page being served 50/50. That is, the ad with the price gets 50% of impressions, the ad without the price gets the other 50% of impressions. You might see that the ad with the price is getting the best return on investment, and even the best engagement. It might seem like you have your answer – that ads with the price are best.

However, you need to consider whether it was the ad, or the traffic, that affected the results. Not all impressions are created equal, and perhaps the reason the ad with the price did so well is that it was showing up for people typing “buy product” instead of just “product”. That traffic is already more qualified, so it wouldn’t be a surprise if the priced ad did better. This can be true of other factors as well – not just what people are searching for.

There might also be demographic factors at play, such as whether your audience is:

  • male vs female,
  • millennials vs baby boomers,
  • local vs international,
  • or simply split between two different interests.

If you’re running a split test, you need to consider whether the audience had an impact on your results. After all, price-based ads might work well for people typing “buy”, but that might not be true for other searches.
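
A practical way to catch this is to segment your results by search term (or demographic) before declaring a winner. A toy sketch with made-up click records, showing conversion rates per variant-and-term segment:

```python
from collections import defaultdict

# Hypothetical click records: (ad variant, search term, converted?)
clicks = [
    ("price", "buy product", True), ("price", "buy product", True),
    ("price", "buy product", False), ("price", "product", False),
    ("no_price", "product", False), ("no_price", "product", True),
    ("no_price", "buy product", True), ("no_price", "product", False),
]

# Tally conversions and clicks per (variant, search term) segment
segments = defaultdict(lambda: [0, 0])  # [conversions, clicks]
for variant, term, converted in clicks:
    segments[(variant, term)][1] += 1
    segments[(variant, term)][0] += int(converted)

for (variant, term), (conv, total) in sorted(segments.items()):
    print(f"{variant:9s} | {term:12s} | {conv}/{total} = {conv / total:.0%}")
```

If the priced ad only wins within the “buy product” segment, the traffic mix, not the copy, may be driving your headline numbers.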

For science!

Split testing is where science meets marketing. Unfortunately, marketers don’t have lab coats and controlled environments for their tests. Tests can take a long time, get complicated, and often need to be followed by more tests in the future. Even so, split testing has become best practice in PPC marketing, and in marketing in general. So if you’re not testing yet – it’s time to get those goggles on and start now!

Written by Gemma Renton