When I started my career at an agency in New York, I didn’t really know the difference between consulting and working on an in-house team. That was a big part of my decision to move into product management for my next position: as a consultant, I felt I was missing part of the picture, never having worked in environments like those of my clients. Having now seen both sides of the house, I’m excited to be back in a position to share what I’ve learned and help site owners convert as many users as possible.
Almost everyone has attempted some kind of optimization in-house before approaching a consultant. One of the challenges of being an optimization consultant is that nearly everyone you work with has preconceived notions of what their testing program should look like. Below are some of the most common issues I’ve seen with these programs, both with clients and in my own product testing.
Choosing What to Test in a Vacuum
One of the worst things you can hear as an optimization specialist is “we’re running this test because someone more important said to.” Most people have heard something similar at some point in their careers, and good marketers cringe just thinking about it. Even if you are the main decision maker, it’s easy to fall into the trap of “knowing” what to test based on your own experience. With all the tools available to marketers today, doing your research before testing is easier than ever. At the very least, testers should look for clues in analytics about which experiences are problematic. Other fairly inexpensive resources, such as user tests, eye tracking, and heat maps, are readily available as well. Watching how real users, who don’t inherently know your product or service, respond to your experience is one of the most valuable inputs for building a testing wish list.
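As a quick illustration of what “finding clues in analytics” can look like, here is a minimal sketch that ranks pages by exit rate from a hypothetical analytics export. The file name, column names, and traffic threshold are all assumptions for illustration, not a reference to any particular analytics tool:

```python
import csv
from collections import defaultdict

# Hypothetical analytics export: one row per pageview, with the page path
# and a flag marking whether it was the last page of the session.
pageviews = defaultdict(lambda: {"views": 0, "exits": 0})

with open("pageviews.csv", newline="") as f:  # assumed export file
    for row in csv.DictReader(f):
        stats = pageviews[row["page_path"]]
        stats["views"] += 1
        if row["is_exit"] == "1":
            stats["exits"] += 1

# Rank pages with meaningful traffic by exit rate to seed a testing wish list.
candidates = [
    (path, s["exits"] / s["views"], s["views"])
    for path, s in pageviews.items()
    if s["views"] >= 1000  # ignore low-traffic pages
]

for path, exit_rate, views in sorted(candidates, key=lambda c: c[1], reverse=True)[:10]:
    print(f"{path}: {exit_rate:.1%} exit rate over {views} views")
```

A list like this doesn’t tell you what to change, but it grounds the conversation in where users are actually struggling rather than in someone’s opinion.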
Choosing Poor KPIs
So you’ve decided which poor experience to address, but how do you know what to change if you don’t know the goal of the page? In my experience, choosing the right KPI to genuinely measure success is one of the hardest things for people to do. Common problems include:
- choosing a KPI that is too far away from the current experience being tested
- choosing too many KPIs, some of which could be at odds with each other
- choosing a KPI that is not relevant to the current experience
For example, if you are testing a change to a category page, average order value (AOV) is both too far downstream and not directly tied to the changes you’re making. When you select KPIs, make sure they are within a step or two of your test page and that the changes you are making actually have a chance to move that metric.
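To make the “within a step or two” idea concrete, here is a minimal sketch that scores a category-page test on add-to-cart rate from that page (one step away) instead of AOV. The event names and data shape are hypothetical, assumed only for this example:

```python
# Hypothetical event log: (user_id, variant, event) tuples collected during the test.
events = [
    ("u1", "control", "category_view"),
    ("u1", "control", "add_to_cart"),
    ("u2", "variant", "category_view"),
    # ...
]

def add_to_cart_rate(events, variant):
    """Share of users in a variant who added to cart after viewing the category page."""
    viewers = {u for u, v, e in events if v == variant and e == "category_view"}
    adders = {u for u, v, e in events if v == variant and e == "add_to_cart"}
    return len(viewers & adders) / len(viewers) if viewers else 0.0

# Compare the KPI that a category-page change can plausibly move...
print("control:", add_to_cart_rate(events, "control"))
print("variant:", add_to_cart_rate(events, "variant"))
# ...rather than AOV, which sits several steps (and many confounders) downstream.
```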
Prematurely Reporting on Tests
This is one of the hardest habits for inexperienced testers to break, and it is frequently driven by stakeholders higher up in the organization. Resist the urge to draw any learnings from a test before it has run for a full week. Every business sees different user behavior by day of week and hour of day, so you want a clear picture of an entire week’s worth of behavior. If you are concerned about a negative test hurting conversions, throttle the test to send it less traffic rather than running it for a shorter period of time. Similarly, if you report on a test with very few users, the results tend to be unreliable. A good rule of thumb is not to trust data where the test has seen fewer than 500 users per test version.
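A lightweight guardrail is to refuse to report until a test clears both thresholds above. The sketch below is one way to encode that rule of thumb, paired with a standard two-proportion z-test; the seven-day and 500-user cutoffs come straight from this post, and the variable names and example numbers are assumptions for illustration:

```python
import math

MIN_DAYS = 7      # cover a full week of day-of-week behavior
MIN_USERS = 500   # rule-of-thumb minimum per test version

def ready_to_report(days_running, users_a, users_b):
    """True only if the test has run a full week with enough users in each version."""
    return days_running >= MIN_DAYS and min(users_a, users_b) >= MIN_USERS

def two_proportion_z(conv_a, users_a, conv_b, users_b):
    """Simple two-proportion z-score comparing conversion rates of version A and B."""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    return (p_b - p_a) / se

# Only look at significance once the guardrail passes.
if ready_to_report(days_running=8, users_a=2400, users_b=2350):
    z = two_proportion_z(conv_a=192, users_a=2400, conv_b=235, users_b=2350)
    print(f"z-score: {z:.2f}")
else:
    print("Keep the test running; don't peek yet.")
```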
Assuming Test Results Will Hold in Perpetuity
When you run a test, you are taking a snapshot of the users who came to your site during that test, not analyzing every user who ever has or ever will visit. If a test shows a 20 percent lift with high confidence, that is a great success, but it does not mean you should expect a 20 percent lift forever. The lift refers only to what was measured during that window, and the confidence only tells you how certain you can be that one version is better, not how likely the lift is to hold indefinitely.
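One way to keep that snapshot framing front and center is to report the lift as a range rather than a single number. Here is a minimal sketch using a normal-approximation interval on the relative lift; all inputs are made-up example numbers, and the approximation is a simplifying assumption rather than a prescribed method:

```python
import math

def lift_interval(conv_a, users_a, conv_b, users_b, z=1.96):
    """Approximate 95% interval for the relative lift of B over A (normal approximation)."""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    se_a = math.sqrt(p_a * (1 - p_a) / users_a)
    se_b = math.sqrt(p_b * (1 - p_b) / users_b)
    lift = (p_b - p_a) / p_a
    # Propagate the uncertainty of both rates into the lift estimate.
    se_lift = math.sqrt((se_b / p_a) ** 2 + (p_b * se_a / p_a ** 2) ** 2)
    return lift - z * se_lift, lift, lift + z * se_lift

low, point, high = lift_interval(conv_a=400, users_a=5000, conv_b=480, users_b=5000)
print(f"Measured lift: {point:.1%} (roughly {low:.1%} to {high:.1%} for this period's traffic)")
```

With these example numbers, a “20 percent lift” is really a range of roughly 5 to 35 percent for the traffic you happened to measure, which is a much healthier way to set expectations with stakeholders.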
Most of these problems center on two things: good planning and setting proper expectations. A good optimization consultant can help you not only avoid these pitfalls but also get the rest of your company on board with the plan.