Experimentation Best Practices

Explore our essential insights for running experiments and A/B tests in your product

Written by Lauren Cumming

Check out our top tips for running effective A/B tests (with or without Candu πŸ˜‰) and supercharge your experimentation strategy!

Before we dive in, two key things to remember about Experimentation are:

  • Often, your results will be different than you expected.

  • The more you experiment, the more of your experiments will fail - that's just how it goes! However, your rate of learning and overall velocity will increase as you discover which of your ideas do and don't work.

Clearly define your objectives and goals

Running effective A/B tests can significantly impact your product's activation and conversion rates. Start by establishing clear and specific goals for your experiments.

  1. What do you want to achieve?

  2. What specific metric or key performance indicator (KPI) are you trying to improve?

  3. By how much do you hope to move the needle with your key metric?

Whether it's increasing user activation rates, improving conversion rates, or boosting feature adoption, ensure that your experiment aligns with your overall growth strategy.

πŸ’‘ In our example below, our Goal is to increase conversion rates to our Pro plan by 5%, and we list the supporting conversion metrics we will track too.

Before starting to design your experiment, it can be helpful to note down all the criteria of your experiment. See a suggested template below πŸ‘‡
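
If it helps, here is that kind of experiment brief sketched in Python as a dataclass, using the example from this article. Every field name is illustrative - it is not part of Candu or any particular tool.

    # Hypothetical experiment brief as a dataclass - field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class ExperimentBrief:
        name: str
        goal: str                    # what you want to achieve
        primary_metric: str          # the KPI you are trying to improve
        target_lift: float           # how much you hope to move it (0.05 = +5%)
        hypothesis: str
        supporting_metrics: list[str] = field(default_factory=list)

    brief = ExperimentBrief(
        name="pro-plan-upgrade-prompts",
        goal="Increase conversion rates to our Pro plan",
        primary_metric="pro_plan_conversion_rate",
        target_lift=0.05,
        hypothesis="Embedded upgrade prompts will drive more upgrades "
                   "than a slideout UX for Advanced Users.",
        supporting_metrics=["upgrade_page_views", "checkout_starts"],
    )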

Hypothesize and Prioritize

Formulate clear hypotheses based on your goals and insights from user feedback. What changes do you believe will lead to your desired outcomes? What have you heard from your users from qualitative and quantitative feedback?

This process will help you to prioritize your experiments based on potential impact and feasibility, and make it easier to focus on the highest-impact areas first.

πŸ’‘ In the example above, our Hypothesis is that for Advanced Users, embedded upgrade prompts throughout the product will be more effective at driving upgrades than a slideout UX.

A/B Testing and Control Groups

Establish a habit of using A/B testing to make data-driven decisions. With Candu, you can randomly divide your users into two or more groups: your Control group (no changes) and your experimental group, Version A (where you implement the changes). This allows you to compare the performance of different variations without bias.
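
Candu handles this random split for you, but if you were wiring it up yourself, a common approach is deterministic hash-based bucketing. Here's a minimal sketch (the function and experiment names are hypothetical, not Candu's implementation):

    # Minimal sketch of hash-based A/B bucketing. Hashing the user ID together
    # with the experiment name gives a stable, roughly even split, so each user
    # sees the same variant on every visit.
    import hashlib

    def assign_variant(user_id: str, experiment: str,
                       variants=("control", "version_a")) -> str:
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    assign_variant("user-123", "pro-upgrade-prompts")  # e.g. "control"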

A few A/B testing best practices:

  • Isolate Variables: Change only one variable at a time so you can isolate the impact of a specific change on user behavior. Avoid making multiple changes simultaneously (e.g., changing the copy, imagery, and medium), as this makes it hard to determine the cause of any observed differences.

  • Test Duration and Seasonality: Plan the test duration carefully to account for any potential time-based effects or seasonality that might influence user behavior. A longer test duration helps capture more comprehensive user patterns and reduces the impact of short-term fluctuations. Avoid running tests during significant events or holidays that could skew results.

  • Monitor Results and Avoid Biases: During the test, monitor the results regularly, but avoid making hasty decisions based on early data. Be aware of confirmation bias, which may lead you to favor one variant prematurely. Allow the test to reach your desired sample size for accurate insights (see the sample-size sketch after this list). For more on this, see our article on analyzing the results of your experiment.

  • Localization: If your product supports multiple languages, it is best to keep an experiment isolated to one particular language so you can ensure the results are not skewed or biased in any way. We recommend testing on your largest audience base first!
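
To sanity-check how long a test needs to run, you can estimate the required sample size per group up front. Here's a back-of-the-envelope sketch for comparing two conversion rates, assuming a 5% significance level and 80% power (the z-values are hardcoded for those settings):

    # Rough per-group sample size for detecting a lift between two conversion
    # rates, at 5% significance (z = 1.96) and 80% power (z = 0.84).
    import math

    def sample_size_per_group(baseline: float, expected: float,
                              z_alpha: float = 1.96, z_power: float = 0.84) -> int:
        variance = baseline * (1 - baseline) + expected * (1 - expected)
        return math.ceil((z_alpha + z_power) ** 2 * variance
                         / (expected - baseline) ** 2)

    # Moving a 10% conversion rate to 10.5% (a 5% relative lift) needs
    # roughly 58,000 users per group:
    sample_size_per_group(0.10, 0.105)  # 57695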

Analyze results and iterate

Once the experiment is complete, analyze your data. Look for statistically significant differences between the control and experimental groups (a sketch of one common significance test follows this list).

  • If the results show one version is likely better, promote the winning version and make it live for all users.

  • If the results are similar, rolling out your new experience or iteration probably carries more risk than it's worth.

  • If the results are exactly the same, there is no point in making any lasting changes.
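
To make "statistically significant" concrete, here is a minimal sketch of a two-proportion z-test in pure Python. The conversion numbers are made up for illustration:

    # Two-sided two-proportion z-test using the normal approximation.
    import math

    def two_proportion_z_test(conversions_a, users_a, conversions_b, users_b):
        p_a, p_b = conversions_a / users_a, conversions_b / users_b
        pooled = (conversions_a + conversions_b) / (users_a + users_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Control: 480/5000 converted; Version A: 545/5000 converted.
    z, p = two_proportion_z_test(480, 5000, 545, 5000)
    # z β‰ˆ 2.14, p β‰ˆ 0.032 - below 0.05, so Version A's lift is likely real.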

It is also important to use qualitative methods, such as user research interviews or surveys, to understand and validate why an experiment has or has not worked. Listening to your users and getting their direct feedback is still an important part of running experiments - you can't rely purely on the math! πŸ˜‰

After your quantitative and qualitative review, you should be able to make data-driven decisions based on the results. If an experiment is deemed successful, you can implement the changes permanently. If it fails to produce a statistically significant difference, consider it an opportunity to learn and iterate on your approach for future experiments. πŸš€
