Split Testing Through Campaign Evolution

In our last post we covered why A/B testing can be difficult for some companies to implement effectively. But that doesn’t mean the principles of testing should be abandoned entirely.  An evolutionary process of consistent improvement offers a more gradual way of implementing split tests.

Many trainers, consultants, and professional coaches set up a template for a marketing campaign, run it for a period of time until they get sick of it, and then do a redesign that starts the process over again.  While this keeps them up to date on new trends in marketing and technology, it doesn’t introduce improvements while the campaign runs the way A/B testing does.

A/B testing at its best is a duplicate communication with one specific difference.  That difference can then be tested for effectiveness, and the better-performing treatment is adopted. Digital marketing campaigns should have some level of repetitiveness, especially in layout and design.  These repeating elements can be leveraged as a control, updated one at a time, and compared for effectiveness over time in the same way that A/B tests are.
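Deciding which treatment performed better usually comes down to comparing the two variants’ conversion rates with a standard two-proportion z-test. The sketch below is a minimal illustration of that comparison using only the Python standard library; the function name and the example numbers are hypothetical, not drawn from any particular campaign or tool.

```python
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference in conversion rates
    between a control (A) and a treatment (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical numbers: 40/1000 conversions for A, 60/1000 for B
print(round(ab_test_p_value(40, 1000, 60, 1000), 3))
```

With 40 conversions out of 1,000 sends for the control and 60 out of 1,000 for the variant, the p-value lands under the conventional 0.05 threshold, suggesting the difference is unlikely to be chance alone.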

Making gradual split tests while running a digital marketing campaign avoids the common limiting factors of A/B testing while still allowing ongoing testing for gradual improvement.  However, there are a few restrictions to keep in mind.


Time

Time is the primary limiting factor in doing gradual split tests. Because the sends are more spread out, changes cannot be implemented as quickly.  Make sure you allow enough time on a single change to gather sufficient information.  For example, if you have a monthly newsletter you’ll need to run the change twice to validate a change’s effectiveness, which means each change will take three months to validate.

One Change at a Time

This is really another limiting factor of time, but subtly different.  Split testing relies on changing a single element so you know that particular change is responsible for an improvement or decline. Changing more than one thing at a time to speed up the process only serves to invalidate your test.

Same Audience

Since there is a gap of time between treatments, you need to keep the audience consistent. Too many changes to who receives the communication will invalidate the test.


Content

While many elements are repetitive in digital marketing, content often is not.  If you have small elements of recurring content, like an email subject line with a repeating title format or commonly used social media tags, then by all means test them.  But most content variables will not repeat consistently enough to be tested in a gradual, ongoing method.


If you plan for these restrictions and formulate gradual split test changes around them, you can gather many of the same insights that A/B tests will provide without dedicating nearly as much time or as many resources.

Why Companies Struggle to Implement A/B Testing in Their Digital Marketing

A/B (split) testing is the most popular and often most effective way of testing multiple versions of an app, email, or webpage to see which version produces better results. However, only 27%–38% of companies actively do split testing. Of the companies that do, almost half claim they do it infrequently or inaccurately. So if A/B tests offer the best opportunity to objectively improve digital marketing conversions, why do so many companies skip them entirely?  Split testing often presents technical or resource challenges that smaller companies struggle to overcome.

There are three common limiting factors that prevent trainers, consultants, and professional coaches from successfully implementing and executing A/B tests:


Time and Resources

Marketing is often done at a frenzied pace at many smaller firms.  If a marketing campaign is being run rapidly, or worse yet as a fire drill, it’s difficult just to consistently produce communications and meet deadlines.  Making time for the additional burden of creating a separate version of a communication and reviewing the analytics to glean valuable insight is simply unrealistic.

A/B Testing Tools

There are valuable tools available to facilitate A/B testing.  Some are built into digital marketing platforms, while others can be added on to your existing platform.  However, inclusive platforms and add-on components can be technically challenging to implement and incur additional cost.  Increasing the marketing budget or meeting the requirements to leverage the testing tool is often an insurmountable barrier for smaller firms.

Sample Size

Accurate A/B testing relies on a sufficient sample size.  If a smaller firm’s website traffic or email recipient list doesn’t generate enough raw data, the A/B test will be flawed and risks producing inaccurate results.
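How much traffic counts as “enough” can be estimated up front with a standard power calculation. The sketch below is a rough illustration using the common two-proportion sample-size formula at 95% confidence and 80% power; the function name and the conversion rates in the example are hypothetical assumptions, not figures from the post.

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a change in conversion
    rate from p1 to p2 (two-sided two-proportion z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical case: detecting a lift from a 2% to a 3% conversion rate
print(required_sample_size(0.02, 0.03))
```

On these assumptions, a modest-looking one-point lift requires several thousand visitors per variant, which is exactly the kind of volume a smaller firm’s site or email list may not generate in any reasonable window.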

If you are in the majority of companies that don’t do split testing, is it because of a legitimate limitation on your ability to execute them?  If so, it doesn’t mean you can’t objectively assess your digital marketing, but it likely does mean you will need to go about it in a more gradual way. In our next post, we will cover a less robust form of split testing that relies on an evolving digital marketing campaign.