The director of Google Analytics Marketing, Casey Carey, has an interesting approach to growing his team’s knowledge and competence: Quarterly Failure Reports. These reports consist of a one-page synopsis of tests they’ve run that didn’t exactly go well.
Personally, I wouldn’t use the term “fail” if these reports yield valuable insights that enhance knowledge and effectiveness — but if the shoe fits, call it what it is. Failure shouldn’t become a bad word; it is essential to achieving meaningful results. A colleague of mine used to tell me: “If you’re not making mistakes, you’re not working hard enough.” In the words of Carey, “If your tests are always successful, you’re probably not testing often enough or aggressively enough.”
Running tests — primarily in the context of A/B tests on your website, subject lines of an email, or an online ad — is really pretty simple. Yes, it helps to have a testing tool such as VWO or Optimizely to load, track, test, and report on results. But, in my experience, the prep work is the most important part. Here’s how we approach it:
Create a Testing Schedule
Testing is as much strategy as it is tactics. Think in terms of website optimization: what elements of your website, if improved, could lift inquiries, MQLs, or downloads? Document them, create a hypothesis for each, and build a calendar of what you’ll test, one after another. The length of each test will depend, in part, on traffic. One way to gauge statistical significance is to run three cohorts within a test. Split traffic three ways: Control A, Control B, and the Test group. When your two control groups converge, which could take a few days to a week or so, your test is in the accuracy sweet spot.
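The three-cohort split above can be sketched in a few lines of Python. This is a minimal illustration, not a testing tool’s actual implementation; the function names and the convergence tolerance are assumptions for the example.

```python
# Hypothetical sketch of an A/A/B split: two control cohorts plus one
# test cohort, with a simple check for whether the controls have
# converged on a similar conversion rate.

def assign_cohort(visitor_id: int) -> str:
    """Split traffic three ways using a stable function of the visitor ID."""
    return ["control_a", "control_b", "test"][visitor_id % 3]

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors if visitors else 0.0

def controls_converged(rate_a: float, rate_b: float,
                       tolerance: float = 0.005) -> bool:
    """When the two control cohorts agree within a small tolerance,
    the test cohort's reading is in the accuracy sweet spot."""
    return abs(rate_a - rate_b) <= tolerance
```

For example, control conversion rates of 3.1% and 3.3% would count as converged under the 0.5-point tolerance above, while 2% versus 4% would not.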
Start with a Hypothesis
Running tests just as an “activity” to show you’re doing something is a waste of time. Document your rationale for a test and use that as your hypothesis. If you think a different button color or CTA will have an impact on click-throughs, your hypothesis would be something like “Changing our contact button from red to green will increase click-throughs by 10%.” This is an easy, concrete hypothesis to test. You will know for sure, over a period of time, if this happens or not.
When you’ve tested a hypothesis, document the test (the description, the hypothesis, screenshots, the outcome, next steps), report to stakeholders, then move to the next test. I once kept a testing calendar spanning several months, in three-week intervals, with a summary page for each test, all in a binder. It was an invaluable resource both for my team and for others in the company.
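If you prefer to keep those one-page summaries in code rather than a binder, the same fields can live in a small record. This is a sketch of one possible structure; the field names are assumptions, not a standard template.

```python
from dataclasses import dataclass, field

# Hypothetical structure for a one-page test summary: the same fields
# the article lists (description, hypothesis, outcome, next steps).

@dataclass
class TestSummary:
    name: str
    hypothesis: str
    start_date: str
    end_date: str
    outcome: str = ""                        # what actually happened
    next_steps: str = ""                     # follow-up test or rollout call
    screenshots: list = field(default_factory=list)  # file paths or links

    def one_pager(self) -> str:
        """Render the summary in the order stakeholders read it."""
        return (
            f"Test: {self.name} ({self.start_date} to {self.end_date})\n"
            f"Hypothesis: {self.hypothesis}\n"
            f"Outcome: {self.outcome}\n"
            f"Next steps: {self.next_steps}"
        )
```

A record like this makes it trivial to assemble the quarterly report: collect the quarter’s summaries and print each one-pager in sequence.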
Test What Matters
Test for results — good and bad. Run tests that will ultimately improve your sales funnel. The net learnings of testing for funnel optimization will always be positive, even if your hypothesis fails. More clicks on a contact button will yield more traffic to a form, which could increase inbound inquiries. Form testing will reduce friction, which could increase form completions and conversions. Testing a background color on a homepage image probably won’t have a significant result, and will be hard to measure definitively. Many marketing automation tools, like HubSpot, have A/B testing features built in. Use them! It’s not as hard as you think.
Account for External Forces
Testing around holidays is not a good idea: your website traffic will not be “normal.” Testing when your target audience is attending a big trade show also creates an anomaly that could affect your tests. Consider external forces that may have an impact on traffic and engagement and work around them.
I have been fortunate to work with companies and teams that valued “experiments,” but only because I came to the table with a well-thought-out strategy for what to test and why. Nothing complicated, no 40-slide marketing pitch. Just a thorough understanding of the user experience on our website, how it contributed to the sales funnel, and which touch points could show a lift with testing. And above all, don’t be afraid to fail.