How to set up Experiments in Candu

This guide will take you through creating your variants, choosing your A/B test settings, placing variants, and launching your experiment πŸ§ͺ

Written by Flora Sanders
Updated over a week ago

So, you have an idea you want to put to the test - let's make it happen! If you need a recap on what A/B testing is, check out our overview guide. πŸ€“


For our guide, we'll create a feature announcement A/B test, with one variant being an in-line banner and the other an overlay to promote our new Resource Hub!

1. Creating your variants

First, we'll create our A/B Experiment from the Experiments section in the left sidebar and choose which content type we want each of our variants to be. You can compare in-line experiences and overlays.

After hitting Create A/B Test, we'll land in the Editor. This Editor works the same as our regular Editor, allowing you to create content as you normally would. Learn more about creating content here.

Feel free to drag-and-drop components to build your own design from scratch or simply use our templates:

You can easily add more versions in the Editor by clicking + Version in the top left menu. There's no limit to how many versions you can create. However, each new version will require some traffic, so additional versions may delay your final results.

2. Understanding your A/B Settings

Under the A/B Settings, you can select a user segment, decide on a rollout plan and set custom metrics. To get started, head to your A/B Settings tab in the top nav bar. Once there, you'll be able to:

Pick the audience for your experiment

  • Select your segments

    • Choose which segments will be part of your experiment audience. Please note that Segments cannot be edited once an experiment is launched.

  • Control Group

    • If you choose to compare your content with a Control Group, a portion of the chosen audience will not see your content at all. Your variant(s) will then be compared against this control.

  • Progressive Rollout

    • To minimize risk when testing changes, progressive A/B rollout allows you to slowly let more users try the new versions while keeping an eye on how they react.

    • Use the toggle to switch on Progressive rollout, and use the slider to choose what percentage of your selected segments should see the content:

    • You can also increase or decrease your rollout percentage while the experiment is live. A lower percentage will 'soft launch' your experiment.
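Candu handles rollout bucketing for you, but it can help to picture what a percentage rollout means. The sketch below is a conceptual illustration only (not Candu's actual implementation): each user is hashed to a stable bucket from 0 to 99 and included if their bucket falls under the rollout percentage. Because the bucket is stable, raising the percentage later only adds users; it never flips users who were already seeing the content.

```javascript
// Conceptual sketch of percentage rollout (NOT Candu internals):
// hash the user id to a stable bucket (0-99) and include the user
// if the bucket falls under the rollout percentage.
function bucketFor(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple stable string hash
  }
  return hash % 100;
}

function inRollout(userId, rolloutPercent) {
  return bucketFor(userId) < rolloutPercent;
}
```

Under this model, a 100% rollout includes everyone, 0% includes no one, and any user included at 30% is still included at 60%.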

Distribute the audience

  • By default, Candu will distribute your audience evenly across your variants and Control Group (if you have one).

  • You can amend the weightings to override this, and click Redistribute Audience Evenly to return to the default split.
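As a worked example of the default split, the Control Group counts as one more "slot" alongside your variants, so three variants plus a control each receive 25% of the audience. The helper below is purely illustrative; the rounding behaviour for uneven splits is an assumption, not Candu's exact algorithm.

```javascript
// Illustrative only: the even audience split across variants, with the
// Control Group (if any) counting as one extra slot.
function evenWeights(variantCount, hasControlGroup) {
  const slots = variantCount + (hasControlGroup ? 1 : 0);
  const weight = 100 / slots;
  return Array.from({ length: slots }, () => weight);
}
```

For example, `evenWeights(3, true)` yields four groups at 25% each, and `evenWeights(2, false)` yields two groups at 50% each.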

Set custom metrics for the experiment

You can set up multiple metrics from the User Events you send into Candu and from native Candu Events, such as a button click.

  • Count

    • Count metrics sum the total number of times that an event occurs for each user. For example, if you send Candu an external 'ticket created' event each time a ticket is created, this metric will sum the total number of tickets per user.

    • For Count metrics, you'll need to select the Event you wish to count, specify whether it's an Interaction Label (a native Candu Event) or a Custom User Event (external User Event you are passing into Candu), and define a timeframe.

      • Note: If you want to track the count on a CTA in your experiment, you'll need to copy and paste the Interaction Label into the "Select an Event to track" box.

  • Conversion

    • Conversion metrics allow you to track what percentage of users have completed an event at least once. For example, a Conversion metric can tell you what percentage of users who saw a piece of Candu content clicked its primary CTA. πŸ™Œ

    • To set up your Conversion metric, select the Event and specify whether you want an Interaction Label (a native Candu Event) or a Custom Event (an external User Event you're passing into Candu). Then you can specify a timeframe during which the user completes that event.

      • Note: If you wish to track the conversion on a CTA from your experiment, you'll need to copy and paste the Interaction Label into the "Select an Event to track" box!

  • Revenue

    • Revenue metrics allow you to sum an Event property, such as "price." This metric is useful if you wish to track the total amount of all the purchases from a specific piece of content, such as an upgrade overlay.

    • To track Revenue, you'll need to send Candu an external Event via eventing, where one of the event properties includes a number. Here's an example of an external Event with a value:

      eventing.track('upgrade.click', { amountPaidInUSD: 30 })

      You can find our full guide on calling eventing here.

    • To set up a Revenue metric, select your external Event from the drop-down, then type in your Event Property Name and define your timeframe, like so:
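The three metric types above can be illustrated with a small sketch. Candu computes these metrics for you; the functions and the event shape below are hypothetical and exist only to show the semantics: Count sums occurrences per user, Conversion is the share of the audience that completed the event at least once, and Revenue sums a numeric event property.

```javascript
// Illustrative sketches of the three metric types (NOT Candu internals).
// The event shape and sample data are assumptions for the example.
const sampleEvents = [
  { userId: 'u1', name: 'ticket.created', properties: {} },
  { userId: 'u1', name: 'ticket.created', properties: {} },
  { userId: 'u2', name: 'ticket.created', properties: {} },
  { userId: 'u1', name: 'upgrade.click', properties: { amountPaidInUSD: 30 } },
  { userId: 'u3', name: 'upgrade.click', properties: { amountPaidInUSD: 50 } },
];

// Count: total occurrences of an event, per user.
function countMetric(events, eventName) {
  const perUser = {};
  for (const e of events) {
    if (e.name !== eventName) continue;
    perUser[e.userId] = (perUser[e.userId] || 0) + 1;
  }
  return perUser;
}

// Conversion: percentage of the audience that completed the event at least once.
function conversionRate(audience, events, eventName) {
  const converted = new Set(
    events.filter(e => e.name === eventName).map(e => e.userId),
  );
  return (converted.size / audience.length) * 100;
}

// Revenue: sum of a numeric event property across matching events.
function revenueMetric(events, eventName, propertyName) {
  return events
    .filter(e => e.name === eventName)
    .reduce((total, e) => total + (e.properties[propertyName] || 0), 0);
}
```

For the sample data above, `countMetric(sampleEvents, 'ticket.created')` gives `{ u1: 2, u2: 1 }`; with a four-user audience, `conversionRate(['u1', 'u2', 'u3', 'u4'], sampleEvents, 'upgrade.click')` gives 50; and `revenueMetric(sampleEvents, 'upgrade.click', 'amountPaidInUSD')` gives 80.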

3. Placing your variants & launching your A/B test

Once you've set up your Experiment's settings, you're ready to place your variants. You can choose to do this via the Placement tab or our Chrome extension:

Via the Placement tab:

To place your content via the Placement tab, first add the URL for where you want your variant to be displayed. Then select the div to specify where on the page you want the content to show:

Via the Chrome extension:

To place your content via Candu's Chrome extension, hit the turquoise Place Versions button. Next, you'll add the URL where you want to add your content and hit Launch URL & extension πŸš€

Once the target page is open, select the relevant div and position for in-line content, specify the URL rules, and/or set how long the content will be displayed, then hit Place Content!

ℹ️ Note: If you are comparing an in-line version with an overlay version, be aware that if other overlays target the page where you're placing your experiment, you might see fewer impressions, as users will only see your variant once they dismiss the other overlays.

If you want the A/B Experiment to gather results quickly, we recommend removing any existing overlays from that page (and not targeting it with other overlays) during the test period!

4. Launching your Experiment

Once you've set up your experiment's settings and placed your version(s), you are ready to hit the Launch Experiment button πŸš€

Review the overview overlay and confirm that you want to launch your experiment. Once launched, a turquoise banner will appear, signifying that your experiment is live. πŸ™Œ

Additional Notes:

  • If you try to Launch your Experiment before completing your settings, you'll be directed to the A/B Settings tab:

  • If you try to Launch your Experiment before placing your versions, you'll be directed to place your version(s):

Now that we've launched our A/B test, let's analyze our results!

Post-Launch > Editing a live experiment:

Content edits:

ℹ️ We recommend keeping edits limited to minor changes, such as typos, as editing content midway through an experiment may impact your results. If you need to make more meaningful changes, we recommend restarting the experiment.

To edit the content during a live experiment, head to the version you want to change and click Edit Version [A/B/...]. Make any edits, then click Update Version [A/B/...]:

Progressive Rollout:

In the A/B Settings tab, you can add and/or update a progressive rollout to minimize risk by slowly letting more users in your chosen segment see the new versions:

Placements

ℹ️ We recommend updating placements only when needed, e.g., the div has changed/no longer exists, as moving the variants midway through an experiment could impact your results. If you want to make bigger changes to the location, such as moving your in-line variant from a sidebar to a banner, we recommend restarting the experiment to avoid skewing the results.

To update the placement of a live experiment, head to the Placements tab in the Editor, select the version you want to edit the placement of, click on the pencil icon to make your changes, and hit Save!

5. Ending your Experiment

Once you're ready to end your experiment, you have a couple of options:

No clear winner: End the Experiment and iterate πŸ”„

  • If your Version A does not perform better than your Control Group, or there's no significant difference between Version A and Version B, you can end your experiment.

    • If the next iteration is clear in your mind, you can Duplicate your A/B Experiment, iterate based on this feedback, and re-launch it.

    • If the next iteration is unclear, this is a good opportunity for additional user research. When in doubt, keep duplicating and iterating until you get the desired results. πŸ’ͺ

A clear winner: End the Experiment and make the winning version live πŸ†

  • If Version A performs better than your control and/or other versions, you can end the experiment, then move Version A to your Content list, retarget to your chosen segment, and set the winner live. πŸš€

  • Once completed, Version A will be accessible from the regular Content page, where you can make changes as you would to any other piece of Candu content.
