It takes time to test. It takes resources to test. It takes money to test.

So why test something you know is going to win?

I’ve been asking myself that for years, and for me, it always comes down to ‘opportunity cost’ (the loss of potential gain from other alternatives when one alternative is chosen). In this case, it’s the opportunity cost of slowing down a continuous improvement program so we can make sure we don’t break the first rule of conversion optimization: don’t make things worse.

So, over the course of my career and the 3,000+ tests I have run, I’ve worked out when I will and won’t test. Before skipping a test on a change, I ask myself three questions:

  1. Is the change significant?
  2. Is the change unproven?
  3. Can the change be isolated?
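For teams that want to codify this checklist, the three questions can be sketched as a tiny decision helper. This is purely an illustration of the logic in this article; the function and its boolean inputs are hypothetical, not an Inflow tool:

```python
def should_ab_test(significant: bool, proven: bool, isolatable: bool) -> bool:
    """Return True when a change warrants a full A/B test.

    Skip the test only when the change is insignificant, already
    proven elsewhere, or can be cleanly isolated for a before/after
    (serial) analysis.
    """
    if not significant:
        return False  # trivial change: just ship it and watch analytics
    if proven:
        return False  # a repeat win (e.g. the mobile filter icon): ship it
    if isolatable:
        return False  # measure it with a serial (before/after) test instead
    return True  # significant, unproven, and entangled: A/B test it


# Example: a significant, unproven change whose effect can't be isolated
print(should_ab_test(significant=True, proven=False, isolatable=False))  # True
```

The order of the checks mirrors the order of the sections that follow: significance first, then prior proof, then isolation.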

The Change Is Not Significant

This one is a “duh,” of course: if the change is not significant, just make it. The problem is, sometimes those insignificant changes wind up being significant after all. The best thing to do is check your analytics, throw up a quick heat map, watch some screen recordings (HotJar is great for both of those) and think about how users might be affected on their first visit to the site, then on their return visits all the way through purchase. Most times you will find no reason to test, but the times you do will more than make up for the effort it takes to do a bit of due diligence.

Example: Changing a homepage carousel (not recommended by Inflow!) to pause on mouse-over. Sure, that could be considered a best practice, and it makes tons of sense. After all, how can someone read a slide if it moves while they are reading it? (That’s one of many issues with carousels.)

Common sense says you should stop it on mouse-over. But what if the carousel takes up 90% of your landing page’s real estate, as they often do these days? Now your carousel is just a static image stuck on the first slide. Think about what that first slide is. Should you test that? I think so.

The Change Is Proven

If the change is significant (i.e., it will move the needle enough to notice the improvement), then you should test UNLESS you have proven to a good degree that the change will be positive.

At Inflow, we run over 1,000 tests a year across 30+ eCommerce sites, and over the past 7 years we’ve found some “wins” we can count on.

Example: Adding a funnel icon with the label “filter” improves mobile conversion rates on every site we’ve ever added it to. So after 4 years and a dozen or so sites, should we test it again and take up time, resources and money?

[Image: CRO sort and filter icons]

The Change Cannot Be Isolated

Let’s say Wayfair did a ton of market research and came up with a hypothesis that the “Down Comforters & Duvet Inserts” link in their menu would perform better titled “Down Duvets & Inserts,” because their research showed their target demographic more commonly refers to “comforters” as “duvets” and that the word “comforter” was associated with lower quality. So, should they test, or make the change and observe what happens?

[Image: Wayfair menu — isolating a change]

Well, if the change is made and observed (referred to as serial testing), they need to make sure they can actually isolate the change when they do the analysis. In this case, Wayfair could track clicks through that link, even segmenting by user type, source/medium, device, etc.

But with a bit of thought, it’s easy to see why a serial test is impossible here. While we can track clicks, we can’t isolate other factors, such as changes in search engine keyword rankings, on-site promotions that are more successful at luring customers away from their intended path, or seasonality (remember, the category should perform a little better every day as winter gets closer). That last factor alone makes serial testing unreliable, since the conversion rate in the later date range should rise simply because it is closer to peak season.

So, while it seemed straightforward initially, this is a change that cannot be adequately isolated and should be A/B tested, despite what common sense says at first glance.

Invest in Testing

Skip every test you can in order to speed up improvement of the site, BUT… don’t do it at the cost of sales. Users are unforgiving when it comes to experience, and if you are looking at making a change of any significance that isn’t proven and can’t be isolated, you really need to test it to protect your sales and all the time, resources and money you have already put into your site. It is an investment, after all.


Now that you know when to test, click on the image below to get a free eBook and learn how to run and report on tests so you can get the most accurate results!