A/B Testing

Why perform A/B testing?

For a regular campaign, the same batch of contacts can only receive one subject line, text preview, sender name, sender address, and email content. However, different subject lines or content can have very different effects on open rates, click-through rates, and sales. Continuous testing lets you validate, with data, which subject lines and content your customers actually prefer.

A/B testing in SmartPush

SmartPush's A/B testing feature lets you test your marketing campaigns in several ways:

  • The email subject, text preview, sender name, sender address, and email content can all be tested.
  • You can test each element individually or combine different variations in one test.
  • An A/B testing task supports up to 5 versions (that is, up to 5 test groups can be set up; the control group always receives just one version).
  • Testing group: these emails are sent first, to a subset of recipients, within the testing timeframe.
  • Winning group: not a separate group, but the test group that achieves the highest value on the test indicator; hence the name.
  • Control group: the remaining recipients, who receive the winning group's content at the end of the testing period.

Other important points to note:

  • The A/B testing feature is not available on the free plan.
  • An A/B testing task requires an audience of at least 50 valid (subscribed) contacts, but we recommend selecting groups of over 1,000 contacts to ensure a sample size large enough for reliable results.
  • The A/B testing task only supports sending to contacts who have subscribed to your emails.
  • Currently, A/B testing is only supported for email campaigns.

The sections below explain how A/B testing works and how to set up and activate a test.

How does A/B testing work?

1. Define the testing objective

Set a testing objective: select one of open rate, click-through rate, or number of orders.

Within the testing time range, the system determines the winning group as the test group with the highest value on the indicator you chose.

  1. If the chosen indicator is tied, the tie is broken in the order open rate > click-through rate > number of orders; if the groups are still tied, Version A is declared the winner (see the sketch below).
  2. If all test indicators are 0, Version A is declared the winner.
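
To make these rules concrete, here is a minimal Python sketch of the selection logic described above. It is illustrative only, not SmartPush's implementation; the function name pick_winner and the metric keys are hypothetical:

    # Illustrative sketch of the winner-selection rules above; not
    # SmartPush's actual code. Function and field names are hypothetical.
    def pick_winner(versions, objective):
        # versions: list of dicts such as
        #   {"name": "A", "open_rate": 0.21, "click_rate": 0.05, "orders": 3}
        # objective: "open_rate", "click_rate", or "orders"
        # Compare on the chosen objective first; on ties, fall back to
        # open rate > click-through rate > number of orders.
        order = [objective] + [m for m in ("open_rate", "click_rate", "orders")
                               if m != objective]
        # If every metric ties (including the all-zero case), max() keeps
        # the first maximal element, so Version A wins by list position.
        return max(versions, key=lambda v: tuple(v[m] for m in order))["name"]

    versions = [
        {"name": "A", "open_rate": 0.20, "click_rate": 0.04, "orders": 2},
        {"name": "B", "open_rate": 0.20, "click_rate": 0.06, "orders": 2},
    ]
    print(pick_winner(versions, "open_rate"))  # "B": opens tie, B has more clicks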

 

2. Confirm the testing content

The email subject, text preview, sender name, sender address, and email content can all be used as your test content, either individually or in combination with other elements.

3. Define the test audience and sample size

Set the allocation ratio between the test groups and the control group; this determines how many recipients each test group receives. The test allocation can range from a minimum of 10% to a maximum of 100%. When there are multiple test groups, each group receives the same share. If you set the test allocation to 100%, there is no control group, and all recipients are sent their version together at the designated sending time (see the worked example below).
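
As a worked illustration of the arithmetic (the numbers are invented, and treating the percentage as the total share split evenly across the test groups is an assumption, not confirmed product behavior):

    # Hypothetical allocation example; all numbers are invented and the
    # exact split is an assumption, not confirmed SmartPush behavior.
    audience = 1000       # valid subscribed contacts selected for the task
    test_share = 0.30     # 30% of the audience goes to testing in total
    num_versions = 3      # test groups A, B, and C

    per_group = int(audience * test_share / num_versions)  # 100 recipients each
    control = audience - per_group * num_versions          # 700 await the winner
    print(per_group, control)                              # prints: 100 700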

4. Define the testing duration

Since opens, clicks, and orders are not generated immediately after the test group is sent, the data needs time to accumulate. We recommend setting your testing duration to at least 24 hours (maximum 10 days / 240 hours).

How to set up an A/B testing task?

1. Create an A/B testing task: go to [Campaign] > [Campaign List] > [Create Marketing Campaign], where you will find the A/B testing entry.

2. For each A/B testing task, you need to create at least 2 versions before clicking Next. After filling out Version A, click Create New Version to either duplicate Version A or create an entirely new version.

3. Email information configuration follows the same logic as regular marketing campaigns.

4. The task statuses of an A/B testing task are:

  1. Draft: the task has been saved but not yet created.
  2. Pending: the test group is configured for delayed (scheduled) sending.
  3. Sending: the test group is currently being sent. This is a temporary, intermediate state; the more recipients the test group has, the longer sending takes.
  4. Testing: the test group has finished sending and the testing phase has started.
  • When you manually select the winning group, the remaining recipients are sent immediately. If there are many remaining recipients, this takes some time; the status stays "Testing" until all of them have been sent, and is then updated.
  5. Paused:
  • Manually paused: via the pause operation in the data report.
  • Automatically paused: if the test group fails to send, or only partially succeeds, it cannot accumulate test data properly, so the system pauses the A/B testing automatically.
  • A paused task can be manually resumed before the end of the testing period, but not after it has expired. The testing time range is calculated as test group sending time + testing duration; for example, with a sending time of 2023-06-30 10:00:00 and a 24-hour testing duration, the testing end time is 2023-07-01 10:00:00 (see the sketch after this list).
  6. Sent: either the test group or the control group failed to send, so the task was only partially delivered.
  7. Successfully sent: both the test group and the control group were sent successfully.
  8. Failed: both the test group and the control group failed to send.
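
The testing time range mentioned under "Paused" is straightforward to compute. A minimal sketch, reproducing the example dates from the list above:

    # Testing end time = test group sending time + testing duration,
    # reproducing the 2023-06-30 example above.
    from datetime import datetime, timedelta

    send_time = datetime(2023, 6, 30, 10, 0, 0)  # test group sending time
    duration = timedelta(hours=24)               # allowed range: 24 to 240 hours

    print(send_time + duration)                  # 2023-07-01 10:00:00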

 

Interpreting the data report of an A/B testing task

Data overview

All metrics for the task, such as the number of sends, open rate, and click-through rate, are summarized in real time.

You can view URL clicks and heatmaps, performance in the first 24 hours, and domain data for each test group. Once the test is complete, if a control group exists, these views show the control group's data instead.

A/B testing

This section focuses on the A/B testing data itself. If no control group is set, each test group's data is displayed directly.

If a control group is set, the data for the testing period and for the entire duration are displayed separately.

The data for the testing period represents the data generated within the testing time range.

Because the open and click-through rates of emails keep changing after they are sent, the data for the entire duration accumulates real-time data from all time points.

User-specific data

Data for each group is accumulated in real time across all time points.

Exported data follows the current filter: if a specific group is selected, only that group's data is exported.

Best practices for A/B testing

1. Ensure regular and iterative testing

A/B testing is an ongoing process and should not stop after a single test. There will always be new elements, ideas, and information to test, so don't shy away from using A/B testing to experiment.

If you have just tested the subject line, you can move on to testing the text preview, followed by the email content, then the call-to-action (CTA), and so on. Once you have completed all of these, try sending different messages to different audience segments.

2. Test only one variable at a time

Clarify your testing objective and make sure only one variable differs between your A/B versions. Otherwise, you will not be able to tell which variable influenced the results.

If your testing objective is the open rate, please choose one of the following variables for comparison: subject line, text preview, sender name, or sender address.

If your testing objective is the click-through rate, focus on the position, quantity, and size of buttons within the email content and how they impact clicks. 

If your testing objective is the number of orders, vary the quantity or type of products inserted in the email content. For example, compare a section featuring best-selling products against manually added products.

3. Ensure an adequate sample size

As noted earlier, an A/B testing task requires at least 50 valid (subscribed) contacts, but an audience of over 1,000 contacts is recommended: the larger the sample, the less likely any difference between versions is due to chance.
