Create & Manage A/B Experiments

Test what works best for your users with A/B experiments

Written by Flora Sanders
Updated today

A/B testing (also called split testing) allows you to compare different versions of your content to see which performs better. By showing different variants to different groups of users, you can make data-driven decisions about what content works best for your audience.

Create an A/B Experiment

  1. Go to your Candu dashboard

  2. Click " Experiments" in the left sidebar

  3. Click "Create A/B Experiment"

  4. Enter a descriptive name (Note: Use names that will make sense to you later when reviewing results)

Choose Your Content Type for Version A

Select either:

  • Inline: Embeds content directly in your existing product pages

  • Overlay: Shows content as a popup or modal over your product

Set Up Version B or Control Group

You have two options:

  • Add Another Version: Create Version B with the same or different content type

  • Set Up Control Group: Users won't see any content (useful for measuring impact)

Add Content Variants

You can test up to 10 variants per experiment. For every version you'd like to test, you can use Candu's editor to create new content from scratch or embed existing Candu content pieces.

Important: The Candu embed component only works for inline content, not overlay content.

Variant settings

  • Double-click each version tab to rename it (it will be pre-filled with "Version A", "Version B", etc.)

  • Drag and drop variant tabs to reorder them (sort order affects how variants are displayed in analytics reports)

  • Variant content types cannot be changed after creation (create a new variant instead)

  • Click the three-dot menu (⋯) button on any variant tab to access options:

    • "Duplicate": Creates an exact copy of the current variant

    • "Archive": Removes the variant from the experiment (only available if you have multiple variants)

Configure Experiment Settings

After creating your experiment, click A/B settings in the top bar. In this section, you can:

  • Pick your audience

  • Toggle the option to include a control group

  • Configure audience distribution

  • Add success metrics

  • Exclude participants from other experiments

Choose Segments

Select which user segments will participate in your experiment. You can pick multiple segments, but choose ones that make sense for your test.

Important: You can't change segments after launching.

Audience Distribution

By default, traffic is split evenly between all variants. After selecting an audience, you can adjust percentages for each version.

Note: Your total distribution must add up to 100%.

Progressive Rollout

Progressive rollout lets you gradually expose the experiment to more users. Toggle it on and use the slider to set the percentage of users included in the experiment.

Note: You can increase or decrease rollout while the Experiment is live.

Set Up Custom Metrics

Custom metrics allow you to track specific user behaviors and measure the impact of your experiments. Candu supports three types of custom metrics:

  • Conversion Metrics: Track whether users complete a specific action

  • Count Metrics: Count how many times users perform an action

  • Revenue Metrics: Track monetary value from user actions

Before setting up custom metrics, ensure any custom events are being tracked in Candu. Learn more about sending events to Candu.
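
If you're sending custom events from your own code, the sketch below shows the general shape of an event you could later select as a metric. The trackEvent helper, event names, and properties are hypothetical placeholders, not Candu's actual SDK API; check Candu's documentation on sending events for the exact call to use.

```typescript
// Hypothetical wrapper -- replace the body with the real tracking call your
// Candu installation (or the analytics pipeline that forwards to Candu) uses.
type EventProperties = Record<string, string | number | boolean>;

function trackEvent(eventName: string, properties: EventProperties = {}): void {
  console.log(`[candu-event] ${eventName}`, properties); // stand-in for the real call
}

// An event you could later select for a Count or Conversion metric.
trackEvent('report_exported');

// An event carrying a monetary property, usable for a Revenue metric.
trackEvent('plan_upgraded', { amount: 29.99, currency: 'USD' });
```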

To add your first metric, select a metric type from the "What do you want to track?" dropdown. You can add multiple metrics to measure different aspects of your experiment.

Note: Metrics cannot be deleted from launched experiments.

Metric Types

Count metrics

Count metrics are best for measuring the frequency of actions; they report the average number of events per user. To configure one:

  1. Select "Count" from the metric type dropdown

  2. Choose the event you want to count (you can enter custom events or Candu interaction labels)

  3. Event source: Choose a custom event or Interaction Label

  4. Timeframe (Optional): Set the measurement window

Conversion metrics

Conversion metrics measure the conversion rate, i.e. the percentage of users who complete the chosen event. To configure one:

  1. Select "Conversion" from the metric type dropdown

  2. Choose or type the event name you want to track conversions for

  3. Event source: Choose a custom event or Interaction Label

  4. Timeframe (Optional): Set how long after seeing a variant users can convert

Revenue metrics

Revenue metrics measure the average revenue per user. To configure one:

  1. Select "Revenue" from the metric type dropdown

  2. Choose an event that includes revenue data

  3. Event Property Name: Enter the exact property name that contains the revenue value (Example: If your event has {amount: 29.99}, enter "amount"; see the sketch after this list)

  4. Timeframe (Optional): Set the attribution window
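
As a purely illustrative sketch (the event and property names below are your own choices, not values prescribed by Candu), this is how a revenue-bearing event payload maps to the Event Property Name field:

```typescript
// Illustrative only: the metric's "Event Property Name" must exactly match
// the key that holds the monetary value in your event's properties.
const revenueEvent = {
  name: 'subscription_purchased',
  properties: {
    amount: 29.99, // -> enter "amount" as the Event Property Name
    plan: 'pro',   // other properties are ignored by the Revenue metric
  },
};

// Send the event through whichever tracking call your Candu setup uses,
// e.g. the hypothetical trackEvent() helper sketched earlier in this article.
console.log(revenueEvent);
```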

Note: Once saved, metrics cannot be edited directly. To change a metric, delete the existing one and create a new one.

Place Variants

Variant placement determines where and how your experiment variants will appear on your website. Each variant needs to be placed before you can launch your experiment.

Go to Placement and, for each version, configure your placement settings under "Add a new placement":

Launch A/B Experiments

Before you can launch an experiment, ensure you have:

  • Configured success metrics

  • Placed all variants on your site

  • Defined target audience

When you're ready, click Launch Experiment in the top right. If the launch button is grayed out, click it to see which requirements are still missing.

Managing A/B Experiments

We recommend limiting edits to minor changes while your A/B experiments are live, as editing may impact your results. To make changes:

  1. Edit any version

  2. Click Launch Experiment from the top right

  3. Select "Update [Variant Name]" to publish changes

Ending A/B Experiments

To implement a winner:

  1. Click the dropdown arrow next to "Experiment is Live"

  2. Select End A/B Test

  3. Select "Move [Variant Name] to Content" for your chosen winner

If Version A does not perform better than your Control Group, or there is no significant difference between Version A and Version B, you can still end your experiment.

  • If the next iteration is straightforward, you can Duplicate the A/B Experiment, iterate based on this feedback, and re-launch it.

  • If the next iteration is unclear, this is a good opportunity for additional user research. When in doubt, keep duplicating and iterating until you get the desired results.
