In a previous post, I said that email testing didn’t have to be a monumental task for smaller lists. While that is true, it shouldn’t be taken to mean testing is easy. Detailed analysis is necessary to get a true picture of how your campaigns are performing, and an integrated set of reports that accounts for all of your online initiatives is critical to making sound decisions about improving your metrics.
As a general rule, a complete understanding of your online campaigns hinges on knowing how the numbers affect the bottom line. Here is a real-life example.
Company X was running an email campaign and was fairly diligent about reviewing the results. Over the course of a few months, they modified their emails and found that their open rate improved by 10% and their click rate by 2%. Thrilled with the results, they made the changes permanent.
For about a year after making the changes, they saw conversions decline. Fretting over the trend, they decided to conduct a full campaign analysis.
I won’t describe the specific situation, but here is a genericized comparison. They sent an email with a revised subject line offering $100 for filling out a simple form (a great offer), and the copy was tweaked to make filling out the form the singular focus. The email generated recipient interest, and open and click rates skyrocketed. Recipients were then directed to a form that said, “Only available to 10-year-olds from Peru” (in other words, the offer applied to only a small subset of the list). The conversion rate plummeted: they were getting clicks, but the clicks were coming from poorly suited prospects.
The in-depth analysis revealed that while the email numbers improved, the landing page conversion rate had plummeted by 50%. Knowing that their average lead was worth about $4,000, they estimated that their “improvement” had cost almost $100,000.
The big picture is critical when testing online campaigns. Making decisions based on one segment of the data might improve that area but cost you dearly overall.
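The funnel math behind this kind of trap can be sketched in a few lines. The list size and rates below are hypothetical, chosen only for illustration; the only figures taken from the example above are the roughly $4,000 lead value and the halved landing page conversion rate:

```python
# Back-of-the-envelope funnel math. All inputs are hypothetical except
# the ~$4,000 lead value and the halved conversion rate from the example.
LEAD_VALUE = 4000  # approximate revenue per lead

def leads(sends, open_rate, click_rate, conv_rate):
    """Expected leads from one send: sends -> opens -> clicks -> conversions."""
    return sends * open_rate * click_rate * conv_rate

# Baseline campaign vs. the "improved" email whose targeting mismatch
# halved the landing page conversion rate.
before = leads(50_000, 0.20, 0.10, 0.05)
after = leads(50_000, 0.30, 0.12, 0.025)

print(f"Leads before: {before:.0f}, after: {after:.0f}")
print(f"Net revenue impact: {(after - before) * LEAD_VALUE:,.0f} dollars")
```

Even with better open and click rates, the end-to-end numbers come out negative, which is exactly why a segment-level win can still be a bottom-line loss.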
Site owners often tell me things like, “Users are going to love this feature” or “This tool is perfect for what our visitors should be doing.” My response is usually, “Is that what testing has shown?” I ask because many site owners make decisions on gut feel. After making the gut call, many of them will lament, “Users are really missing the boat with this; here are all the great things they could be doing . . .” Your users are not you, so don’t presume they feel the way you do. Do some testing to ensure that a feature or tool you are developing is something users actually want.
Doing a short reality check on how well your presumptions match user needs is worth the effort. In a recent conversation, a site owner complained about an event matrix tool he had launched so users could track events of interest related to his site’s content. He was sure every user would want it, but after spending significant time and energy he discovered very few were interested. A reality check before investing in the tool could have saved him that time, or led to a tool users actually wanted.
Testing doesn’t have to be a giant undertaking, though for large sites or in-depth campaigns it needs to be thoroughly planned. For smaller sites it can be much lighter weight: ask a sample of your visitors for feedback on how they use the site and what they’d like to see, or run a user test session in which you observe a person actually using the site. The latter is often more valuable, since actions speak louder than words.
Here are the primary things to look for in those tests when deciding whether the feature you feel is great actually cuts the mustard with users:
- Navigation – A great tool is worthless if people can’t find it.
- Usability – Users have to be able to easily use the feature or tool. Make sure it is intuitive so that users will stick with it and get the maximum benefit.
- Functionality – The feature or tool had better do what you claim it will. Setting expectations that aren’t met will breed resentment.
- Communication – You won’t have a lot of time to highlight your feature or tool using online communications. Spend some time boiling it down to its most basic benefits so you can concisely generate interest.