Email Marketing: What to Do About a Mistake

A social media post or a webpage can be updated or removed immediately if it contains a mistake. Email, however, presents a unique challenge when an error goes out. If an email platform has a retract feature, the results are typically spotty at best. This means that once an email is sent, there is really no going back. When a mistake happens, it is usually a scramble to rectify the situation. A quick analysis and a clear action plan are critical for an appropriate, timely response.

The first step in dealing with an email marketing error is to stay calm. Analyze the error so you can assign an appropriate response. Boiled down, there are essentially three options when an error occurs:

  1. Ignore it – This is a viable option. Just don’t make it the default option. If the error is minor, like a misspelled word or a grammatical slip, it’s probably best to let it go. Yes, a few sticklers on your email list might respond, but by and large it’s not going to have an impact on your ongoing campaigns. If it’s a small mistake that’s likely to go unnoticed, it isn’t worth hitting your list’s inboxes again.
  2. Targeted resend – If an error affects only a subset of your list, then a correction needs to go only to the affected recipients. This is often the case if a link is broken or points to the wrong page. It can also be the case if your list is split into versions (such as an HTML layout and a text layout). If you can isolate a group or list, do so. There’s no sense sending an update to everyone about a broken link; instead, send the correction to the recipients who clicked the link (a rough segmentation sketch follows this list).
  3. Resend – This is the final and most drastic action. If there is a major problem, or the wrong content goes to the wrong audience, then a full resend is necessary. Basically, if the body of the email contains a significant problem, everyone who received it will need a replacement. The resend should go out quickly and include a note in the body or the subject line explaining that this is a corrected version of the flawed email.
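
To make the targeted-resend option concrete, here is a minimal Python sketch of isolating the recipients who clicked a broken link. The file name, column names, and CSV layout are assumptions for illustration only; every email platform exports click data in its own format.

```python
import csv

# Hypothetical export of click events from an email platform.
# Assumed columns: recipient_email, clicked_url
CLICK_EXPORT = "campaign_click_events.csv"
BROKEN_URL = "https://example.com/old-landing-page"  # the link that was wrong


def recipients_who_clicked(export_path: str, broken_url: str) -> set[str]:
    """Return the unique set of addresses that clicked the broken link."""
    affected = set()
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["clicked_url"] == broken_url:
                affected.add(row["recipient_email"].strip().lower())
    return affected


if __name__ == "__main__":
    segment = recipients_who_clicked(CLICK_EXPORT, BROKEN_URL)
    print(f"Send the correction to {len(segment)} recipients, not the whole list.")
```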

Mistakes happen. Of course, reviewing for errors beforehand is the best course of action, but everyone running an email marketing campaign will have an error go out sooner or later. Whether it has a lasting effect or ends up a barely noticeable issue depends on what the error is and how efficiently it is corrected.

Define Success: Email Marketing

Using generic metrics to gauge success is very common in email marketing. Specifically, the most common gauges are:

  • Open rate
  • Click rate

Don’t misunderstand: these metrics are important, but for most email campaigns they should not be the defining factor in measuring success. Success depends on the intent of the email.

Here’s an example: an email campaign that consists of one communication promoting an event and a second that offers informative tips. The open rate matters in both cases, as it is an indicator of subject line clarity and recipient loyalty.

In fact, for the informative tips, the open rate is a good immediate indicator. However, the rate at which the email is forwarded might be a better gauge, because forwarding clearly indicates that recipients valued the content. Even a fairly small percentage of forwards is a major victory, because this particular metric typically records only a fraction of actual forwarding activity.

For the event promotion, opens are a good initial indicator and click-through rates are important. However, drilling down into what was clicked is typically more important. Were people drawn to a video link, an image, a headline, or a particular hyperlink? This information is a better gauge of success because it can be tied to who registered for the event and can inform future communications about what draws the target audience.

Of course, this is only one example, and other metrics will be more critical to success in other cases. The point is that open and click rates are usually just a starting point for evaluating a campaign. They are rarely a stand-alone gauge of success.
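
As a rough illustration of going beyond opens and clicks, the sketch below computes open, click, and forward rates plus a per-element click breakdown from raw event counts. All of the numbers and element names are invented for illustration; real platforms report these events in their own formats.

```python
# Illustrative numbers only; substitute your platform's actual event counts.
delivered = 5_000
opens = 1_200
forwards = 60            # usually undercounted, as noted above
clicks_by_element = {    # hypothetical breakdown of what was clicked
    "video_link": 180,
    "headline": 40,
    "register_button": 95,
}

total_clicks = sum(clicks_by_element.values())

print(f"Open rate:    {opens / delivered:.1%}")
print(f"Click rate:   {total_clicks / delivered:.1%}")
print(f"Forward rate: {forwards / opens:.1%} of opens")

# Drill-down: share of total clicks each element attracted.
for element, count in sorted(clicks_by_element.items(), key=lambda kv: -kv[1]):
    print(f"  {element}: {count / total_clicks:.0%} of clicks")
```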

Email and Internet Testing Needs Some Planning

In a previous post, I said that email testing didn’t have to be a monumental task for smaller lists. While that is true, the statement shouldn’t be taken to mean it is easy. Detailed analysis is necessary to get a true picture of how your campaigns are running. An integrated set of reports that takes all of your online initiatives into account is critical for making sound decisions about how to improve your metrics.

As a general rule, a complete understanding of your online campaigns hinges on knowing how the numbers affect the bottom line. Here is a real-life example.

Company X was running an email campaign and was fairly diligent about reviewing the results. Over the course of a few months, they modified their emails and found that their open rate improved by 10% and their click rate improved by 2%. They were thrilled with the results and made the changes permanent.

For about a year after making the changes, they saw conversions decline. Fretting over the trend, they decided to go through a full campaign analysis.

I won’t describe the specific situation, but here is a genericized comparison. They sent an email with a revised subject line that said, in effect, fill out a simple form and get $100 (a great offer). The copy was tweaked to make filling out the form the singular focus. The email generated recipient interest, and open and click rates skyrocketed. Then recipients were directed to a form that said, “Only available to 10-year-olds from Peru” (it applied only to a small subset of their list). The conversion rate plummeted because the clicks were coming from poorly suited prospects.

The in-depth analysis revealed that while the email numbers improved, the landing page conversion rate had plummeted by 50%. After working out that their average lead was worth about $4,000, they estimated that their “improvement” had cost almost $100,000.
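
As a back-of-the-envelope reconstruction of that estimate (the actual lead volume wasn’t disclosed, so it is assumed here), the arithmetic looks roughly like this:

```python
# Only the ~$4,000 lead value and ~50% conversion drop come from the example;
# the lead volume is a made-up assumption to show the shape of the math.
value_per_lead = 4_000        # approximate worth of one lead, in dollars
leads_before_change = 50      # hypothetical leads the campaign used to deliver
conversion_drop = 0.50        # landing page conversions fell by half

leads_lost = leads_before_change * conversion_drop
estimated_cost = leads_lost * value_per_lead
print(f"~{leads_lost:.0f} lost leads x ${value_per_lead:,} = ${estimated_cost:,.0f}")
# With these assumptions, the "improved" email cost on the order of $100,000.
```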

The big picture is critical when testing online campaigns. Making decisions based on one segment of the data might improve that area but could cost a lot overall.