Experimentation Best Practices

Explore our essential insights for running experiments and A/B tests in your product

Written by Lauren Cumming

Top Tips for Experimentation

Check out our top tips for running practical A/B tests (with or without Candu 😉) and supercharge your experimentation strategy!

What to keep in mind before creating an experiment

Before we dive in, two key things to remember about Experimentation are:

  • Often, your results will differ from what you expected.

  • The more you experiment, the more individual experiments will fail - that's normal! As you learn which of your ideas do and don't work, your rate of learning and overall velocity will increase.

A framework for running an effective A/B test

Define your objectives and goals

Effective A/B tests can significantly improve your product's activation and conversion rates. Start by setting clear and specific goals for your experiments.

  1. What do you want to achieve?

  2. What specific metric or key performance indicator (KPI) are you trying to improve?

  3. How much do you hope to move the needle with your key metric?

Whether it's increasing user activation rates, improving conversion rates, or boosting feature adoption, ensure your experiment aligns with your overall growth strategy.

πŸ’‘ In the above, our Goal is to increase conversion rates to our Pro plan by 5%, and we list supporting conversion metrics we will track.

Before starting to design your experiment, it can be helpful to note down all the criteria of your experiment. See a suggested template below πŸ‘‡

An example of a plan for a single experiment.

Hypothesize and Prioritize

Write clear hypotheses based on your goals and insights from user feedback. What changes do you believe will lead to your desired outcomes? What have you heard from your users in qualitative and quantitative feedback?

This process will help you prioritize your experiments based on potential impact and feasibility, making it easier to focus on the highest-impact areas first.

πŸ’‘ In our above example, our Hypothesis is that for Advanced Users, embedded upgrade prompts throughout the product will be more effective than slideout UX at driving upgrades.

A/B Testing and Control Groups

Establish a habit of using A/B testing to make data-driven decisions. With Candu, you can randomly divide your users into two or more groups: your Control group (no changes) and your experimental group, Version A (where you implement the changes). This allows you to compare the performance of different variations without biases.
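Outside of any particular tool, the split itself is simple to sketch. Here's a minimal Python example (the function, experiment, and variant names are illustrative assumptions, not Candu's API) that deterministically buckets each user, so the same user always sees the same variant:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "version_a")) -> str:
    """Deterministically assign a user to a variant.

    Hashing the user ID together with the experiment name gives a
    stable, roughly even split: the same user always gets the same
    variant, and different experiments get independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The assignment is stable across calls for the same user.
variant = assign_variant("user-42", "upgrade-prompt")
```

Hash-based assignment (rather than a coin flip at render time) matters because a user who flips between variants on each visit would contaminate both groups.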

A few A/B testing best practices:

  • Isolate Variables: Change only one variable at a time, so you can attribute any impact on user behavior to that specific change. Avoid making multiple changes simultaneously (e.g., changing the copy, imagery, and medium), as this makes it challenging to determine the cause of any observed differences.

  • Test Duration and Seasonality: Plan the test duration carefully to account for any potential time-based effects or seasonality that might influence user behavior. A longer test duration helps capture more comprehensive user patterns and reduces the impact of short-term fluctuations. Avoid running tests during significant events or holidays that could skew results.

  • Monitor Results and Avoid Biases: During the test, monitor the results regularly but avoid making hasty decisions based on early data. Be aware of confirmation bias, which may lead you to favor one variant over the other prematurely. Allow the test to reach your desired sample size for accurate insights. For more on this, see our article on analyzing the results of your experiment.

  • Localization: If your product supports multiple languages, it is best to isolate an experiment to one particular language so you can ensure the results are not skewed or biased. We recommend testing on your largest audience base first!

Analyze results and iterate

Once the experiment is complete, analyze your data. Look for statistically significant differences between the control and experimental groups.

  • If the results show that one version is significantly better, promote the winning version and make it live for all users.

  • If the results are similar, the new experience probably isn't worth the risk of rolling out.

  • If the results are identical, there is no point in making any lasting changes.
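One common way to check whether the control and experimental groups genuinely differ is a two-proportion z-test. A minimal sketch (the function name and the conversion figures are illustrative assumptions):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int,
                           conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control: 100/1000 converted; Version A: 150/1000 converted.
p = two_proportion_p_value(conv_a=100, n_a=1000, conv_b=150, n_b=1000)
significant = p < 0.05  # conventional 5% significance threshold
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be chance; a large one means the groups look similar.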

It is also important to use qualitative methods, such as user research interviews or surveys, to understand and validate why an experiment has or has not worked. Listening to your users and getting their direct feedback is still important when running experiments - you can't rely on the math alone! 😉

After your quantitative and qualitative review, you should be able to make data-driven decisions based on the results. If an experiment is deemed successful, you can implement the changes permanently. If it doesn't produce a statistically significant difference, consider it an opportunity to learn and iterate on your approach for future experiments. 🚀
