A snippet from an article by someone named Cameron Chapman:

----8------------------------------------------------------------------------

A Beginner's Guide to A/B Testing: Email Campaigns That Convert
By Cameron Chapman on Feb 10, 2011

Email campaigns and newsletters can be a great way to get repeat business, as well as new customers. You're already working with a somewhat pre-qualified base: these people have said they want to receive information from you, and a lot of them have likely already done business with you. And we all know it's easier and cheaper to retain customers than it is to get new ones.

This is why it's vital to run A/B tests when trying out new techniques or formats for your email campaigns. Improving conversion rates here can make a bigger difference in your bottom line than many other marketing efforts, especially those of similar cost.

Test Your Whole List, or Just Part?

In the vast majority of cases, you'll want to test your entire list. You want to get an accurate picture of how your email opt-in list responds to your new email campaign, and the best way to do that is to test all of them.

-------------------------------------------------------------

User Comments:

Dan
Feb 12, 2011 at 1:50 pm

I'm a little surprised by the advice to A/B test on your whole list. My business has received advice to hold back the bulk of our email send until after the results of an initial A/B test sample have revealed a clear winner. In practical terms, we send version A to ~15% of our list, version B to another ~15%, and hold on to the remaining 70%. When your email list is large enough (upwards of 10,000 addresses), a 30% test send should be enough to hit the point of statistical significance – that is, you could keep testing, but the trends have established themselves and any further testing is unlikely to affect the results in any meaningful way.
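Dan's 15/15/70 approach hinges on being able to call a winner from the two test cells once the difference is statistically significant. A minimal sketch of that decision using a two-proportion z-test, with entirely hypothetical click counts (the list size, cell sizes, and numbers below are made up for illustration, not taken from the article):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)  # rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results from two 15% test cells of a 10,000-address list.
n_a = n_b = 1500
clicks_a, clicks_b = 60, 90   # assumed click counts, not real data

z = two_proportion_z(clicks_a, n_a, clicks_b, n_b)
if z > 1.96:                  # |z| > 1.96 ~ 95% confidence
    decision = "send version B to the remaining 70%"
elif z < -1.96:
    decision = "send version A to the remaining 70%"
else:
    decision = "keep testing; no significant difference yet"
print(f"z = {z:.2f}: {decision}")
```

With these made-up numbers the test clears the 95% threshold, so the winning version would go to the 70% holdout – which is exactly the extra lift over a straight 50/50 split that Dan describes.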
By doing this kind of split send, we get the chance to send the most effective subject/artwork/call to action/etc. to approximately 85% of our email database – not just the half reached by the winner in a straight 50/50 A/B test.

-------------------------------------------------------------

Good points, Dan. It seems silly to me to potentially saturate your list, or lose potential sales, with poorly crafted ad copy.

-------------------------------------------------------------

Another comment:

Ash
Feb 12, 2011 at 2:37 pm

I think email-based A/B testing is hard, as your open rates (unless you're testing subject lines) vary from segment to segment. One segment might have a much higher open rate, which biases the content test. Helpful guide, though!

-------------------------------------------------------------

So Ash is saying it's possible that the first 50% of your database are losers, and the other 50% are going to be your bread and butter. Is it me, or is this theory full of crap?
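Ash's concern – and the "first 50% are losers" worry – usually comes down to how the cells are cut. If you split the list in database order, one cell can end up loaded with a single high-open-rate segment (say, your oldest, most engaged subscribers). Randomizing the list before splitting gives each cell roughly the same mix of segments. A minimal sketch, with made-up addresses and the 15/15/70 split from the earlier comment:

```python
import random

# Hypothetical list: addresses stored in signup order, so older
# (often more engaged) subscribers cluster at the front.
subscribers = [f"user{i}@example.com" for i in range(10_000)]

# Shuffle first so neither test cell is dominated by one segment.
rng = random.Random(42)       # fixed seed so the split is repeatable
shuffled = subscribers[:]
rng.shuffle(shuffled)

cell_size = int(len(shuffled) * 0.15)
version_a = shuffled[:cell_size]
version_b = shuffled[cell_size:2 * cell_size]
holdout = shuffled[2 * cell_size:]   # later receives the winning version

print(len(version_a), len(version_b), len(holdout))  # 1500 1500 7000
```

This doesn't make segments identical, but with cells this size a random draw keeps any one segment from dominating either version's results.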
From the perspective of bulk GI mail, the mere existence of multiple versions of a newsletter seems to help total delivery. However, from an ESP perspective, or where you know you're hitting the inbox with both messages on a TLD, a small split test seems pretty sufficient.
The problem with split-testing email is that if you only send to a small portion of your list, you are not really testing. More often than not, I've found that some email creatives perform poorly with the first 25–30% of my list and really pick up after passing the 30% mark. If you have multiple creatives for one offer and it's a solid offer, just send both. The real key is not using canned creatives that have been mailed millions of times before by other marketers. It also helps a lot if the content of the email matches the landing page you're directing clicks to. Simple things.