"Test! Test! Test!" We've all heard this advice for building a better website. Testing is the heart of creating a culture of growth ― a culture where everyone on your team is ready to gather and act on data to make the customer experience better day by day.
But how do you run a good test? Is it just a matter of finding something you're not sure about and switching it around, like changing a blue "Buy now" button for a red one? It depends: Did you decide to test that button based on analytics, or was it a wild guess?
Assuming the former, a good test means that even if it fails, you've still learned something. A bad test may leave your website performing worse than before, and it's worse still if you don't carry those learnings into future tests.
The key to running good tests is to establish a testing framework that fits your company.
Join us for a live webinar on Thursday, March 9, as Krista Seiden, Google Analytics Advocate, and Jesse Nichols, Head of Growth at Nest, share a six-step framework for testing and building better websites.
Frameworks vary from business to business, but most include three key ideas:
Start with an insight and a hypothesis.
A random "I wonder what would happen if …" is not a great start for a successful test. A better way to start is by reviewing your data. Look for things that stand out: things that are working unusually well or unusually badly.
Once you have an insight in hand, develop a hypothesis about it: Why is that element performing so well (or so badly)? What is the experience of users as they encounter it? If it's good, how might you replicate it elsewhere? If it's bad, how might you improve it? This hypothesis is the starting point of your test.
For example, if you notice that your mobile conversion rate is lower than your desktop conversion rate, you might run tests to help you improve the mobile shopping or checkout experience. The team at The Motley Fool found that email campaigns were successfully driving visitors to the newsletter order page, but those visits weren't converting. That led them to experiment with ways to streamline the user experience.
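A first pass at spotting that kind of outlier can be as simple as comparing each segment's conversion rate to the site-wide average. Here's a minimal sketch in Python; the segment names and counts are hypothetical stand-ins for numbers you'd pull from your analytics reports:

```python
# Hypothetical session and conversion counts by device segment; in practice
# these would come from your analytics reports.
sessions = {"desktop": 120_000, "mobile": 95_000, "tablet": 18_000}
conversions = {"desktop": 3_600, "mobile": 1_140, "tablet": 450}

rates = {seg: conversions[seg] / sessions[seg] for seg in sessions}
site_rate = sum(conversions.values()) / sum(sessions.values())

# Flag segments that deviate from the site average by more than 25% --
# candidates for a hypothesis and a test.
for seg, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    flag = "  <-- stands out" if abs(rate - site_rate) / site_rate > 0.25 else ""
    print(f"{seg:8s} {rate:.2%} vs. site average {site_rate:.2%}{flag}")
```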
Come up with a lot of small ideas.
Think about all the ways you could test your hypothesis. Be small-c creative: You don't have to re-invent the call-to-action button, for instance, but you should be willing to test some new ideas that are bold or unusual. Switching your call-to-action text from "Sign up now" to "Sign up today" may be worth testing, but experimenting with "Give us a try" may give you a broader perspective.
When in doubt, keep it simple. It's better to start with lots of small incremental tests than a few massive changes. You'll be surprised how much difference one small tweak can make. (Get inspiration for your experiments here.)
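When you do run one of those small tests, you also need a consistent way to call the result. A tool like Optimize handles the statistics for you, but the idea underneath is simple. Here's a minimal sketch of a two-proportion z-test, with hypothetical counts for the two CTA variants:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal distribution, via erfc.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_a, p_b, z, p_value

# Hypothetical results: "Sign up now" (control) vs. "Give us a try" (variant).
p_a, p_b, z, p = two_proportion_z(conv_a=210, n_a=5000, conv_b=255, n_b=5000)
print(f"control {p_a:.2%}, variant {p_b:.2%}, z={z:.2f}, p={p:.3f}")
```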
Go for simple and powerful.
You can't test every idea at once, so start with the hypotheses that are easy to test and have the biggest potential impact. Testing one CTA button may take little time and few resources while still showing incremental improvement in conversion rates; testing a whole new page design takes more time but could move the needle further.
It may help to plot your ideas on a speed-versus-impact grid: slow, low-impact ideas are quiet turtles, while fast, high-impact ideas are noisy rabbits. You don't want quiet turtles; the items you're looking for are those potential noisy rabbits.
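One way to make that grid concrete is to score each idea for speed and impact and rank the backlog. A minimal sketch, with hypothetical ideas and scores:

```python
# Hypothetical backlog: each idea scored 1-5 for speed (how quickly you can
# run the test) and impact (how much it could move conversions).
ideas = [
    {"name": "CTA copy change",      "speed": 5, "impact": 3},
    {"name": "New checkout flow",    "speed": 2, "impact": 5},
    {"name": "Footer link color",    "speed": 5, "impact": 1},
    {"name": "Mobile form redesign", "speed": 3, "impact": 4},
]

# Noisy rabbits are fast AND high-impact, so rank by the product of the two
# scores; quiet turtles sink to the bottom of the list.
for idea in sorted(ideas, key=lambda i: i["speed"] * i["impact"], reverse=True):
    print(f'{idea["name"]:22s} speed={idea["speed"]} impact={idea["impact"]} '
          f'score={idea["speed"] * idea["impact"]}')
```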
The best place to begin a rabbit hunt is close to the end of your user flow. "Start testing near the conversion point if you can," says Jesse Nichols, Head of Growth at Nest. "The further you go from the conversion point, the harder it gets to have a test that really rocks — where the ripple effect can carry all the way through to impact the conversion rate."
Stick with it.
A final key: Test in a regular and repeatable way. Establish an approach and use it every time, so you can make apples-to-apples comparisons of results and learn as you go.
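In practice, "regular and repeatable" can be as simple as logging every experiment in the same shape. A minimal sketch of what such a record might hold; the fields and the example entry are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    """One entry in a shared experiment log, recorded the same way every time."""
    name: str
    hypothesis: str
    start: date
    end: date
    control_rate: float   # conversion rate of the original
    variant_rate: float   # conversion rate of the challenger
    learning: str         # what you take forward, win or lose

log = [
    Experiment(
        name="CTA copy test",
        hypothesis="'Give us a try' converts better than 'Sign up now'",
        start=date(2017, 3, 1), end=date(2017, 3, 14),
        control_rate=0.042, variant_rate=0.051,
        learning="Softer CTA language lifted signups; try it on other pages next",
    ),
]
```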
A clear and sturdy framework like this will go a long way toward making your team comfortable with testing — and keeping them on the right track as they do.
Posted by Tiffany Siu, Product Marketing Manager, Google Optimize 360