Very good article here, some of you might have been doing it the wrong way (slightly guilty as charged!) ...

Optimizing Email Marketing: Percentage of Previous

One mathematical concept that email marketers struggle with is percent of previous. In email marketing, most metrics dashboards present a series of steps in a funnel. The problem is that, most of the time, these metrics are presented as absolute percentages of the entire campaign, which makes it difficult to assess how well individual pieces of your email marketing are working and where things are breaking.

Let's take a look at an example. Say you send 10,000 messages out to a list, 1,000 people open them, 100 people click on calls to action in them, and 10 people convert to the buying behavior you're looking for. If you view these actions as absolute numbers, your reporting looks like this:

    Step         Count    % of list
    Sent        10,000     100.0%
    Opened       1,000      10.0%
    Clicked        100       1.0%
    Converted       10       0.1%

You might logically assume that you have a real conversion problem, since only 0.1% of your list converted, but in this case the absolute data would be misleading you. If you charged ahead trying to fix conversion all over the place, there's a good chance you wouldn't get the needle-moving results you're looking for. If you create a column that details percent of previous, things look different:

    Step         Count    % of list    % of previous
    Sent        10,000     100.0%          -
    Opened       1,000      10.0%         10%
    Clicked        100       1.0%         10%
    Converted       10       0.1%         10%

Suddenly we see that each step in our funnel is converting at roughly the same rate: 10%. Our efforts might be better spent on improving open rate in this case, all other things being equal.

Let's look at a real-life example using a newsletter I shipped in late April. Wow, it looks like I have a serious conversion problem! Or do I? Actually, it turns out conversion is one of my strongest metrics when viewed as a percentage of previous. The real problem is open rate. If I can get more people to open my emails, I'll do better, which means logically I should be investigating things like subject line testing. Once I get open rate fixed, my next destination for optimization is clickthrough rate.
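The percent-of-previous calculation the article describes is simple enough to sketch in a few lines of Python. This is a minimal illustration using the example funnel above (the function and variable names are my own, not from the article):

```python
# Funnel counts from the example above:
# 10,000 sent, 1,000 opens, 100 clicks, 10 conversions.
funnel = [("Sent", 10_000), ("Opened", 1_000), ("Clicked", 100), ("Converted", 10)]

def percent_of_previous(steps):
    """Return (step, count, % of list, % of previous) for each funnel step.

    "% of list" divides by the first step's count (the absolute view);
    "% of previous" divides by the immediately preceding step's count.
    """
    total = steps[0][1]
    rows = []
    prev = total
    for name, count in steps:
        rows.append((name, count, 100 * count / total, 100 * count / prev))
        prev = count
    return rows

for name, count, pct_total, pct_prev in percent_of_previous(funnel):
    print(f"{name:>10}: {count:>6}  {pct_total:6.1f}% of list  {pct_prev:6.1f}% of previous")
```

Run against the example funnel, every step comes out at 10% of previous, even though the absolute conversion figure is only 0.1% of the list.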
Conversion I can safely leave alone for quite some time.

Let's look at one more example. Which of these two campaigns I sent out did better? Looking at absolute percentages, you'd have to say campaign 2 performed better, so I should focus on replicating it. Any manager giving a quick glance at these numbers would likely agree with you. They would, of course, be wrong. Let's add percentage of previous in. Suddenly, a deeper truth is revealed: the content of campaign 1 converts MUCH better than the content of campaign 2. I clearly need to adjust the open rate, and perhaps the design, of campaign 1 to achieve results similar to campaign 2, but I'd be foolish to throw away campaign 1 in its entirety, as I'd be losing 16% of my conversion rate. If you have a large-scale email marketing program, a 16% difference in conversion rate is likely the difference between a promotion to the corner office and getting fired.

Percentage of previous isn't the be-all, end-all measurement of email marketing, but it's one few people use, and it can more clearly illuminate how the various pieces of your email marketing program are working (or not working) and help you diagnose where to spend your time and focus. Try it out with your most recent email campaign results and see if you find some new insights to work on!

Source: http://blog.blueskyfactory.com/
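The diagnostic step the article keeps repeating — find the funnel stage with the worst percent-of-previous and fix that first — can also be sketched in code. This is an illustrative helper with made-up funnel numbers (the article's actual campaign figures are not reproduced here):

```python
def weakest_step(steps):
    """Return (step_name, rate) for the funnel transition with the lowest
    percent-of-previous — the first candidate for optimization effort."""
    worst = None
    for (prev_name, prev_count), (name, count) in zip(steps, steps[1:]):
        rate = count / prev_count
        if worst is None or rate < worst[1]:
            worst = (name, rate)
    return worst

# Hypothetical campaign: open rate (8%) lags click-through (15%)
# and conversion (~11.7%), so "Opened" is the step to work on first.
campaign = [("Sent", 10_000), ("Opened", 800), ("Clicked", 120), ("Converted", 14)]
print(weakest_step(campaign))
```

This mirrors the article's conclusion pattern: a scary-looking absolute conversion number (0.14% of list here) can hide the fact that the real bottleneck is upstream, at the open.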
This reminds me of multi-step chemical synthesis, where at every step you calculate "yields" and then multiply them to get totals. Good stuff!
I'm all for A/B testing; it's the best way to test out new offers or tweak existing ones. However, just be careful about how you split up your lists, since some parts may be more active than others. For example, if you added new data two days ago and then send out an A/B test, then depending on how the new data gets added, it's possible that an increase in opens/clicks is due to the new data and not the tweaked offer.

I had one experience with an ESP that would send to sublists based on the last time they were updated (i.e., when new data was added), which makes no sense to me, since the last thing you want to do is send to your newest data first: the likelihood of complaints and bounces is much higher than with data that is already being mailed. I was getting killed on my AOL whitelisted IPs for this very reason; it was sending to all the newly added data and I was getting hammered with complaints.

Therefore, a good question to ask your ESP is: how is new data handled when it's added to a sublist, and if you pick multiple sublists when deploying, how does each list get deployed? This way you can get a better read on your split testing and on moving data around.
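The fix for the bias this commenter describes is to randomize the full list before splitting, so recently added addresses are spread evenly across test cells instead of clustering in one. A minimal sketch of that idea (the function name and fixed seed are my own choices, not any ESP's API):

```python
import random

def split_test_groups(recipients, n_groups=2, seed=42):
    """Shuffle the whole list, then deal it round-robin into n_groups.

    Shuffling first means any recency bias (e.g. addresses added two
    days ago) is distributed evenly across the A/B cells, so a lift in
    opens/clicks reflects the offer, not the list segment.
    """
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)  # seeded for reproducibility
    return [shuffled[i::n_groups] for i in range(n_groups)]
```

Round-robin dealing after the shuffle also keeps group sizes within one of each other, so the cells stay comparable.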