A/B testing

Put experiments at the heart of product

How do big companies seem to get their products and user experience spot on? 
Why do Amazon, Airbnb, Google and Facebook seem to work so seamlessly?


They test, test, and test again in every manner possible; they try new things, and they are not scared to experiment or to fail.

One big element of testing and experimenting is being able to report back on product changes and their impact. You also need to be able to exclude external influences from experiment metrics. The scientific community, especially the pharmaceutical industry, has been doing this for years.

Three things underpin this kind of testing:

1. A good hypothesis

2. A control group

3. Statistical significance

The first thing I do when thinking about a new feature is to write out my hypothesis. A good hypothesis is a simple, clearly constructed statement that can be proven true or false, for example: “Adding customer reviews to the product page will increase the add-to-basket rate.”

I then make sure I have a control group or cohort: something that I want to test the change against. This becomes version A of my A/B test. I always try to use something we already have within the product; this isn’t always possible, especially with new features, but there should still be a way to evaluate the new feature’s impact.
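
To make that concrete, here is a minimal sketch of one common way to split users into a control (A) and a variant (B) cohort. The hashing approach and the "new_checkout" experiment name are illustrative assumptions, not something from this post.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically bucket a user into control ('A') or variant ('B').

    Hashing the experiment name together with the user id keeps the
    assignment stable across visits and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "A" if bucket < split else "B"

# Hypothetical usage: a 50/50 split for a "new_checkout" experiment
print(assign_variant("user-1234", "new_checkout"))
```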

I said earlier that statistical significance underpins this; that point can be, and has been, debated heavily. For me it depends on the size of the dataset you have: it’s impossible to reach statistical significance quickly when you are selling just 50 items a week. I do one of two things here: I look for a part of the funnel that has a large set of data, or I use the experiment to prove the change has not had a negative impact on the product, or in this case on the 50 sales.
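
As a rough illustration of the significance check, the sketch below runs a chi-squared test on conversion counts. The numbers are made up, and the p < 0.05 cut-off is just the common convention, which is precisely the part that gets debated.

```python
from scipy.stats import chi2_contingency

# Hypothetical conversion counts for control (A) and variant (B)
converted = [120, 145]
not_converted = [880, 855]

chi2, p_value, dof, expected = chi2_contingency([converted, not_converted])
print(f"p-value: {p_value:.3f}")

# Common (and debated) convention: treat p < 0.05 as significant.
# With small volumes (e.g. ~50 sales a week) this can take a long time,
# which is why a higher-traffic part of the funnel is often easier to test.
if p_value < 0.05:
    print("The difference between A and B is unlikely to be noise.")
else:
    print("Not enough evidence yet; keep the experiment running.")
```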

It’s too easy to blame a dip in conversions on a change you’ve made to the site, or attribute a gain to it, when in truth there could be at least 10 major external factors behind the change (weather, seasonality, holidays, pay cycles, marketing, etc.). Using the control group gives you proof that the experiment has had a positive or negative impact under the same external influences.

Use your experiments wisely: let them run for as long as you need, but once you have good metrics coming back, end the experiment so that you can try something new. Also, try not to have too many experiments running in the same area at the same time; they can easily cancel each other out.

Running experiments can be fun, fruitful and empowering. If you’re not running experiments, give them a go!
