So, you have an idea you want to put to the test - let's make it happen! Check out our overview guide if you need a recap of A/B testing.
This guide will take you through creating your variants, choosing your A/B Settings, placing your variants, and launching your test.
For our guide, we'll create a feature announcement A/B test, with one variant being an in-line banner and another an overlay to promote our new Resource Hub!
1. Creating your variants
First, we'll create our A/B Experiment from the Experiments section in the left sidebar and choose which content type we want each of our variants to be. You can compare in-line experiences and overlays.
After hitting Create A/B Test, we'll land in the Editor. This Editor works like our regular Editor, allowing you to create content as usual. Learn more about creating content.
Feel free to drag-and-drop components to build your design from scratch or simply use our templates:
You can easily add more versions in the Editor by clicking + Version in the top left menu. There's no limit to how many versions you can create; however, each new version needs its own share of traffic, so additional versions may delay your final results.
2. Understanding your A/B Settings
Under the A/B Settings, you can select a user segment, decide on a rollout plan, and set custom metrics. To get started, you can head to your A/B Settings tab in the top nav bar. Once there, you'll be able to:
Pick the audience for your experiment
Select your segments
Choose which segments will be part of your experiment audience.
Segments cannot be edited once an experiment is launched.
Control Group
If you compare your content with a Control Group, a portion of the chosen audience will not see your content at all. Their behavior becomes the baseline your variant(s) are compared against.
Progressive Rollout
Progressive A/B rollout minimizes risk when testing changes. It allows you to slowly let more users try the new versions while monitoring their reactions.
Use the toggle to switch on Progressive rollout, and use the slider to choose what percentage of your selected segments should see the content:
You can also increase or decrease your rollout percentage while the experiment is live. A lower rate will 'soft launch' your experiment.
Distribute the audience
By default, Candu will distribute your audience evenly across your variants and Control Group (if you have one).
You can amend the weightings to override this, and revert at any time by clicking Redistribute Audience Evenly.
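Candu handles all of this for you in the UI, but if you're curious about the mechanics, here's a rough JavaScript illustration (not Candu's actual implementation) of how a deterministic percentage rollout combined with weighted variants typically works:

// Illustrative sketch only - not how Candu implements it internally.
function hashToUnit(str) {
  // Cheap deterministic string hash mapped into [0, 1)
  let h = 0;
  for (const ch of str) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h / 2 ** 32;
}

function assignVariant(userId, rolloutPercent, weights) {
  // Gate: only users under the rollout percentage see the experiment.
  if (hashToUnit(userId + ':rollout') >= rolloutPercent / 100) return null;

  // Split: map a second stable hash onto the variant weightings.
  const total = Object.values(weights).reduce((sum, w) => sum + w, 0);
  let cursor = hashToUnit(userId + ':variant') * total;
  for (const [variant, weight] of Object.entries(weights)) {
    if (cursor < weight) return variant;
    cursor -= weight;
  }
  return null; // unreachable when all weights are positive
}

// Example: 20% rollout, split evenly between Control and Version A
assignVariant('user-123', 20, { Control: 50, 'Version A': 50 });

Because the hashes are deterministic, a given user always lands in the same variant, and raising the rollout percentage mid-experiment only lets new users in without reshuffling existing ones.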
Set custom metrics for the experiment
You can set up multiple metrics in just a few clicks, built from the User Events you send into Candu or from native Candu Events.
Count
Count metrics sum the number of times an event occurs for each user. For example, if you send Candu an external event each time a ticket is created, this metric will sum the total number of tickets created per user.
For Count metrics, you'll need to select the Event you wish to count, specify whether it's an Interaction Label (a native Candu Event) or a Custom User Event (external User Event you are passing into Candu), and define a timeframe.
Note: If you want to track the count on a CTA in your experiment, you'll need to copy and paste the Interaction Label into the "Select an Event to track" box.
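For our ticket example, the Custom User Event side might look like this in your app. This is a minimal sketch: onTicketCreated and the 'ticket.created' event name are hypothetical, and it assumes the Candu eventing client is already initialized (see our eventing guide):

// Hypothetical names throughout; assumes Candu's eventing client is loaded.
function onTicketCreated(ticket) {
  // Fire a Custom User Event on every ticket, so a Count metric
  // on 'ticket.created' sums the total tickets per user.
  eventing.track('ticket.created');
}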
Conversion
Conversion metrics allow you to track what percentage of users have completed an event at least once. For example, they let you measure the percentage of users who saw a piece of Candu content and then clicked its primary CTA.
To set up your Conversion metric, select the Event and specify whether you want an Interaction Label (a native Candu Event) or a Custom Event (an external User Event you're passing into Candu). Then, you can specify a timeframe during which the user completes that event.
Note: If you wish to track the conversion on a CTA from your experiment, you'll need to copy and paste the Interaction Label into the "Select an Event to track" box!
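For our Resource Hub example, clicks on CTAs inside your variants are already covered by Interaction Labels, but you could also measure whether users actually reach the hub by sending a Custom Event from your app. A minimal sketch, with a hypothetical route and event name, assuming the eventing client is initialized:

// Hypothetical route and event name; assumes Candu's eventing client is loaded.
if (window.location.pathname === '/resource-hub') {
  // Conversion metrics count a user as converted once this event
  // fires at least once within your chosen timeframe.
  eventing.track('resource_hub.visited');
}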
Revenue
Revenue metrics allow you to sum an Event property, such as "price." This metric is helpful if you wish to track the total amount of all the purchases from a specific piece of content, such as an upgrade overlay.
To track Revenue, you'll need to send Candu an external Event via eventing, where one of the event properties includes a number. Here's an example of an external Event with a value:
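// 'amountPaidInUSD' is the numeric Event Property Candu will sum for this metric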
eventing.track('upgrade.click', { amountPaidInUSD: 30 })
You can find our complete guide on calling eventing here.
To set up a Revenue metric, select your external Event from the drop-down, then type in your Event Property Name and define your timeframe like so:
3. Placing your variants & launching your A/B test
Once you've set up your Experiment's settings, you're ready to place your variants. You can choose to do this via the Placement tab or our Chrome extension:
Via the Placement tab
To place your content via the Placement tab, first add the URL where you want your variant to be displayed. Then select the div to specify where on the page you want the content to show:
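If you're unsure which div to select, you can sanity-check it in your browser's DevTools console before placing the content. The selector below is purely an example; ideally it matches exactly one stable element:

// Run in the browser console on your target page.
// '#dashboard-banner' is a hypothetical selector - use your own.
const matches = document.querySelectorAll('#dashboard-banner');
console.log(`Found ${matches.length} matching element(s)`); // ideally exactly 1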
Via the Chrome extension
Hit the turquoise Place Versions button to place your content via Candu's Chrome extension. Next, you'll add the URL where you want to add your content and hit Launch URL & extension.
Once the target page is open, select the relevant div for in-line content and the content's position, specify the URL rules, and/or select how long the content will be displayed before hitting Place Content!
If you're comparing an in-line version with an overlay version, be aware that other overlays targeting the same page can reduce your impressions: users will only see your experiment's overlay once they've dismissed the others. If you want your A/B Experiment to gather results quickly, we recommend removing any current overlays from that page and not targeting it with other overlays during the test period!
4. Launching your Experiment
Once you've set up your experiment's settings and placed your version(s), you are ready to hit the Launch Experiment button.
Please review the overview overlay and confirm that you want to launch your experiment. Once launched, a turquoise banner will signify your experiment is live.
Additional Notes on Setting Up an Experiment
If you try to Launch your Experiment before completing your settings, you'll be directed to the A/B Settings tab:
If you try to Launch your Experiment before placing your versions, you'll be directed to the Placements tab to place your version(s):
Now that we've launched our A/B test, let's analyze our results!
Post-Launch: Editing a Live Experiment
Editing the Content During an Experiment
We recommend limiting edits to minor changes, such as typos, as editing content midway through an experiment may impact your results. If you need to make more meaningful changes, we recommend restarting the experiment.
To edit the content during a live experiment, head to the version you want to change and click Edit Version [A/B/...]. Make any edits, then click Update Version [A/B/...]:
Set up a Progressive Rollout
In the A/B Settings tab, you can add and update a progressive rollout to minimize the risk by slowly letting more users in your chosen segment see the new versions:
Updating a Placement
We recommend updating placements only when needed, e.g., when the div has changed or no longer exists, as moving the variants midway through an experiment could impact your results. If you want to make a more significant change to the location, such as moving your in-line variant from a sidebar to a banner, we recommend restarting the experiment to avoid skewing the results.
To update the placement of a live experiment, go to the Placements tab in the Editor, select the version you want to edit, click on the pencil icon to make your changes, and hit Save!
Ending your Experiment
Once you're ready to end your experiment, you have a couple of options:
No clear winner: End the Experiment and iterate
If Version A does not perform better than your Control Group, or there's no significant difference between Version A and Version B, you can end your experiment.
If the next iteration is straightforward, you can Duplicate your A/B Experiment, iterate based on this feedback, and re-launch it.
If the next iteration is unclear, this is a good opportunity for additional user research. When in doubt, keep duplicating and iterating until you get the desired results.
A clear winner: End the Experiment and make the winning version live
If Version A performs better than your control and other versions, you can end the experiment, then move Version A to your Content list, retarget it to your chosen segment, and set the winner live.
Once completed, Version A will be accessible from the regular Content page, where you can make changes as you would to any other piece of Candu content.