When it comes to prioritizing your website testing, it’s pure numbers.
A Formula to SCORE Test Opportunities
It might sound harsh to say that insights should lead and not your organization’s leaders, but the reality is that good leaders will be the first to agree to prioritize an actionable, impactful insight with great potential over their own whims or gut instincts.
To make your life easier, prioritize testing with a simple spreadsheet that ranks potential tests by giving each opportunity a SCORE.
The spreadsheet should have the following columns in addition to the name and description of each test opportunity:
- Insight Driven (Y/N)
- Validated (Y/N, by more than one source)
- Potential (1-5 with 5 being the highest potential)
- Difficulty to Test/Implement (1-10)
Then use the formula below to prioritize each test opportunity.
Test SCORE = (Potential × 5) − Difficulty
With that formula, it’s often easy to see the order in which tests should be prioritized. But what if there is a tie? Ties are not uncommon. In that instance, I look to the data: which test can be validated with additional qualitative insight? The test that wins that tiebreaker is the one to prioritize first.
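The spreadsheet, formula, and tiebreaker above can be sketched in a few lines of Python. The test names, potentials, and difficulties below are hypothetical examples, not real data:

```python
# A minimal sketch of the SCORE spreadsheet as a list of dicts.
# All test opportunities and column values here are made up for illustration.
test_opportunities = [
    {"name": "Simplify checkout form", "insight_driven": True,
     "validated": True, "potential": 5, "difficulty": 3},
    {"name": "Redesign homepage hero", "insight_driven": True,
     "validated": False, "potential": 4, "difficulty": 8},
    {"name": "Add trust badges to cart", "insight_driven": True,
     "validated": True, "potential": 3, "difficulty": 2},
]

def score(test):
    """Test SCORE = (Potential * 5) - Difficulty."""
    return test["potential"] * 5 - test["difficulty"]

# Sort highest SCORE first; break ties in favor of tests validated
# by more than one source (the qualitative-insight tiebreaker).
ranked = sorted(test_opportunities,
                key=lambda t: (score(t), t["validated"]),
                reverse=True)

for t in ranked:
    print(f"{score(t):>3}  {t['name']}")
```

Note how the easy, high-potential checkout test rises to the top, while the harder homepage redesign drops below a smaller but cheaper test.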
The above formula scores low-hanging fruit highest while moving items that are harder to implement or test down the list, until their potential becomes compelling because there is nothing “good and easy” left to test. As a bonus, “noise” that has less potential and requires more time is relegated to the bottom of the list, with a clear rationale.
This method of prioritizing tests is not difficult, but it can seem overwhelming when so many tests eventually need to be run. It’s always good to have a reminder of why we came to care about testing in the first place.
Why We Care About Testing
“We’ve run over 100 tests with your test tool, and we’re not seeing anywhere near the same results once we implement the winner,” I said to the representative of a popular test tool company. They were concerned, as concerned as I was, because it was a common issue that no one seemed to be able to answer: Why, when the test results tell you one thing, can’t you track similar results after you implement?
I’ve never focused on testing, as it has always been a means to an end. But when the means potentially derails the end, you need to take control. And that is exactly where we all are with A/B testing today.
I firmly believe that you want to get insights about why customers are not converting on your site, or what would make them act more—and THAT is 90 percent of the work involved in optimizing an eCommerce site. The other 10 percent is coming up with a way to implement and test those insights.
For years I have toiled away on the 90 percent, striving to get the most powerful and actionable (testable) insights possible in the least amount of time because, after all, that is what I was being paid to do across the 80 or so eCommerce sites I’ve worked with. During this time, however, I also came to realize that how we tested needed to change—not to be better, but just to give the correct results.
Knowing and understanding how important testing is, we developed our SCORE sheet to make sure we had a systematic approach to website testing. It made clear the order in which tests needed to be prioritized. Again, deciding which test to prioritize comes down to numbers: the test that requires the least work and has the most potential to deliver ROI is the one you run first.
Hold your SCORE list up high, share it with everyone, and make it obvious how it is compiled, and you will drive the organization forward instead of wrestling with the ideas of everyone from the designer to the CEO. Good luck!
If you want to get a hold on your eCommerce website testing and start getting results that are accurate and helpful, then download our advanced guide, Stop Wasting Your Time When Testing eCommerce Sites.