The silver lining of appallingly low online conversion rates is that even small conversion improvements can produce big bottom-line improvements.
And whether you listen to the experts, like Neil Patel, or simply observe them, like Google (which, according to pardot.com, performs thousands of tests each year), you’ll know that A/B, or split, testing is integral to improving conversion rates.
But, believe it or not, if you run any kind of testing program, you are in the minority.
Only 44% of companies use some form of testing to improve their conversion rates.
But Don’t Get Smug if You Split Test
Perhaps the biggest misconception among those who test is that, because they test, their conversion rates are maximized. There are two reasons this isn’t necessarily true:
- A/B tests are a means, not an end. Just as having a hammer, saw and level does not mean your building will be solid, merely having a testing program does not mean your conversion rates will be optimized, or even improved. In a nutshell, any test program that runs on ‘let’s change the CTA button to orange and see what happens’ has the same chance of wasting time and money without producing reliable results as it does of improving conversion rates.
- The CRO Junkie Syndrome – While most seasoned conversion rate optimizers understand point one, they may not realize that they are CRO junkies.
What is a CRO junkie? If you get a shot of adrenaline every time you see an uptick in conversions during or following a test, you are at risk of being a CRO junkie. (That’s more or less all of us.)
What’s the problem with being a CRO Junkie? First, don’t worry, you’ll not have to check into rehab for this one. Indeed, a passion for better conversion rates is sorely lacking in digital marketing.
The CRO junkie problem is two-fold. First, CRO junkies tend to leap at the first sign of a win. As soon as they see an uptick in rates, they get their shot of adrenaline. Satisfied, they presume the test is a success.
The second problem is they tend to look for the win and the rush that comes with it. And they don’t stop until they score a hit.
It means many negative results, and the valuable lessons they hold, are overlooked, ignored or dismissed as invalid. As outlined in a case study by Unbounce, even ‘failed’ A/B tests can be used to increase conversion rates.
You should benefit from every well planned and executed test, regardless of results.
“The goal of a test is not to get a lift, but to get a learning.”
Dr. Flint McGlaughlin
The Bad Habits of a CRO Junkie
The quest for the next hit of improved conversion rates means any CRO specialist can fall victim to habits and characteristics that don’t account for the margins of error inherent in every test. When that happens, their A/B test results may not reflect the truth.
1. Incorrect Sample Sizing – How long have you been analyzing your landing page results? Hopefully from the beginning. That means, even if yours is a relatively new company, you could have two or three years of traffic, bounce rate and conversion statistics. For many, the base conversion rate that they set out to improve with a testing program may be the result of analyzing hundreds of thousands of visits, hits and misses.
But the conversion junkie may be tempted to make profound decisions about your conversion funnel based on a comparatively small sample.
To illustrate the problems that inadequate sample sizes can cause, let’s say you want to know which of two local schools offers a better education for your child. If you get two sample test results from each school and find that school A’s results are higher, is school A necessarily the better school? Of course not. The samples you happened to get from school B could be their worst results, while the results from school A could be their best.
If, however, you checked every test result from both schools, you could make a very accurate assessment of which school was best.
Known in some circles as ‘statistical power’, the basic principle is: the larger the sample size, the more accurate the results. (qubit.com)
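To make the sample-size principle concrete, here is a short Python sketch (the function name and defaults are illustrative, not from any particular testing tool) that uses the standard normal approximation for a two-proportion test to estimate how many visitors each variant needs before a given lift is reliably detectable:

```python
from statistics import NormalDist

def sample_size_per_variant(base_rate, lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative
    lift in conversion rate, using the normal approximation for a
    two-sided two-proportion z-test. Illustrative sketch only."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# A 2% baseline conversion rate with a hoped-for 10% relative lift:
print(sample_size_per_variant(0.02, 0.10))
```

Under these assumptions, that modest scenario calls for on the order of 80,000 visitors per variant, far more than a junkie eager to call the test after a few hundred conversions is likely to wait for.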
2. Acting on False Positives – The quickest way to understand false positives, and the negative impact they can have on your A/B tests, is to run an A/A test. Simply put, an A/A test splits your traffic between two identical versions of your landing page, with no variant at all. Every page goes through cycles of higher and lower performance, so the CRO junkie looking for a 5% uptick in conversion rates will find one even in an A/A test – that is, when there has been no actual change to produce the positive result.
Some ‘successful’ AB tests are the result of false positives.
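A quick way to see this in action is to simulate the A/A test described above. The Python sketch below (names and parameters are illustrative assumptions, not from the article) runs many A/A tests in which both ‘variants’ share the same true conversion rate, then counts how often a standard two-proportion z-test nonetheless declares a significant ‘winner’:

```python
import random
from statistics import NormalDist

def aa_test_false_positives(rate=0.05, visitors=2000, runs=500,
                            alpha=0.05, seed=42):
    """Simulate repeated A/A tests: both 'variants' have the same true
    conversion rate, yet a two-proportion z-test still flags a
    'significant' difference some of the time. Illustrative sketch."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(runs):
        # Identical pages: conversions drawn from the same rate.
        a = sum(rng.random() < rate for _ in range(visitors))
        b = sum(rng.random() < rate for _ in range(visitors))
        p_pool = (a + b) / (2 * visitors)
        se = (2 * p_pool * (1 - p_pool) / visitors) ** 0.5
        if se > 0 and abs(a - b) / visitors / se > z_crit:
            hits += 1  # a 'win' with no actual change
    return hits / runs

print(aa_test_false_positives())
```

With a 5% significance threshold, roughly one run in twenty comes up ‘significant’ even though nothing changed, and a junkie who peeks at the numbers repeatedly mid-test will trip over such phantom wins even more often.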
3. Wearing Blinders – As mentioned, junkies tend to find what they seek: higher rates. But in doing so, they can overlook or dismiss tons of valuable information that may be considered negative or not indicative of a successful test. Just as disastrous is the information that is never discovered because the test is concluded as soon as the sought-after boost is delivered.
4. Failing to Follow Up – One of the best ways to battle incorrect sample sizes, false positives and missed information is to run follow-up tests that validate previous results. A well-rounded testing program repeats the same or similar tests at different times to confirm results, accounting for external or unknown factors like seasonality, current events and the economic climate.
The CRO junkie might not want to risk the win of a positive result with follow-up tests.
In the End: It’s way better to be a CRO junkie than to not test your landing pages at all. And it’s easy to kick those nasty junkie habits.