A/B split testing is a method of comparing two versions of a single email campaign to determine which version performs better. It is a great way to figure out what’s working (and what’s not) in your email marketing campaigns, and it can also show you how changes to your campaigns affect mailbox providers’ spam filters and your deliverability.
It can be used to test almost any element of your email campaign: subject line, offer, copy length, imagery, layout, and much more. When it’s done well, the advantages of A/B testing are more than enough to offset the additional effort.
Plan your A/B split test
- Be sure you have a plan in place that defines the test’s scope and duration. Return Path’s All About A/B Testing (ebook) is a great resource to help you plan and execute your test.
- Once you have decided which elements you wish to test, execute your test plan and track the results using Return Path Platform or Inbox Monitor.
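The planning steps above can be captured in a simple structure before the test starts, so the scope, duration, and success metrics are fixed up front. This is an illustrative sketch only; the field names are hypothetical and are not part of Return Path Platform or Inbox Monitor.

```python
# Hypothetical test-plan record: fix the scope and duration before sending.
test_plan = {
    "element_under_test": "subject line",  # change one element at a time
    "version_a": "standard subject line (control)",
    "version_b": "shorter subject line (variation)",
    "duration_sends": 4,  # number of campaigns to run the test over
    "success_metrics": ["inbox placement rate", "read rate", "conversions"],
}

print(f"Testing '{test_plan['element_under_test']}' "
      f"over {test_plan['duration_sends']} sends")
```

Changing only one element per test keeps any difference in results attributable to that element.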
Use the seed list for your A/B split test
- Set up the entire CoreSeeds list, including reference seeds, for version A (the control version, which is your standard version of the email).
- Set up the entire CoreSeeds list, including reference seeds, for version B (the test version, which has the variation you wish to test).
- Ensure matching IDs are set up correctly for each version, so your tests display correctly and accurately in Return Path Platform or Inbox Monitor.
- Deploy both versions A and B to Return Path Platform or Inbox Monitor along with your next campaign.
- Note the results of each version. What are the inbox placement, spam, and missing rates for each version?
- Check your other data sources for each version to help measure success. Did one version lead to more reads, more conversions, or a higher average order value?
- Repeat the test according to your plan until you see conclusive results. If you don’t see a difference in performance, reassess and try a different test.
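One way to judge whether results are conclusive is a standard two-proportion z-test on the inbox placement rates of the two seed-list deployments. This is a minimal sketch under hypothetical seed counts; the function name and the example numbers are illustrative, not from Return Path Platform or Inbox Monitor.

```python
from math import sqrt, erf

def inbox_rate_z_test(inbox_a, seeds_a, inbox_b, seeds_b):
    """Return (rate_a, rate_b, two-sided p-value) for the difference
    in inbox placement rates between versions A and B."""
    p_a = inbox_a / seeds_a
    p_b = inbox_b / seeds_b
    # Pooled rate under the null hypothesis that A and B perform the same.
    pooled = (inbox_a + inbox_b) / (seeds_a + seeds_b)
    se = sqrt(pooled * (1 - pooled) * (1 / seeds_a + 1 / seeds_b))
    z = (p_a - p_b) / se if se else 0.0
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical example: version A landed 172 of 200 seeds in the inbox,
# version B landed 188 of 200.
rate_a, rate_b, p = inbox_rate_z_test(172, 200, 188, 200)
print(f"A: {rate_a:.0%}  B: {rate_b:.0%}  p-value: {p:.3f}")
```

A small p-value (commonly below 0.05) suggests the difference is real rather than noise; otherwise, keep repeating the test per your plan before drawing conclusions.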