Subjective Digital Marketing Analytics
Metrics are only as good as the analysis. This seems obvious, but many trainers', consultants', and professional coaches' marketing campaigns revolve around putting the best face on the results. Rather than spending time "putting lipstick on a pig," really look at the data to see which channels and efforts are generating results . . . and which aren't. If you find a particular digital marketing element is lacking, even if it's one you personally love, implement changes that are likely to improve it rather than tampering with the data to make it appear better than it is.
Years ago, we discussed with a client whether an all-inclusive quarterly newsletter was the best email strategy, or whether breaking the content into smaller chunks and communicating more frequently would be more advantageous. We proposed the less-content, more-frequent model because opens and clicks on the newsletter were trending downward, and we wanted to test whether shorter communications would be more engaging.
Using the email downtrend reports as evidence, we proposed the less-content, more-frequent model. The marketing coordinator (who loved gathering and editing the articles into a publication) rejected the idea flat out. "People love our quarterly newsletter; changing it is a bad idea," she claimed. In reviewing the numbers, she insisted that the data from the email report was incorrect and pulled up her own report, which showed numbers 60% higher than the email platform's. After going through the reports, we realized that her report came from the website rather than the email marketing platform and showed a second jump in hits, plus several small increases missing from the email report.
As we discussed the differences in the reports, the marketing coordinator revealed that she posted each article to social media individually (which produced the series of smaller hits) and that the newsletter was redistributed by an association (which produced the second jump in hits).
So the real problem we faced was compiling the reports into a single campaign report and analyzing which channels were performing best. The expanded data, especially the social media data, further suggested that breaking the newsletter into smaller, more focused communications and distributing them more frequently would be advantageous.
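In practice, a roll-up like this can be as simple as summing the per-channel exports into one set of totals before comparing channels. A minimal sketch (all channel names and hit counts here are hypothetical, not the client's actual data):

```python
from collections import defaultdict

# Each record: (channel, article_id, hits) -- as if exported from the
# email platform, the website analytics, and the social media reports.
records = [
    ("email",   "article-1", 120),
    ("email",   "article-2",  95),
    ("website", "article-1",  60),  # e.g. association redistribution traffic
    ("social",  "article-1",  40),  # individual social media posts
    ("social",  "article-2",  35),
]

def campaign_report(records):
    """Roll per-channel records up into total hits per channel."""
    totals = defaultdict(int)
    for channel, _article, hits in records:
        totals[channel] += hits
    return dict(totals)

# Print channels from strongest to weakest.
for channel, hits in sorted(campaign_report(records).items(),
                            key=lambda kv: -kv[1]):
    print(f"{channel:8s} {hits}")
```

With every channel in one report, the comparison is made on the full picture rather than on whichever platform's dashboard happens to be open.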
This is where subjectivity and her love of the quarterly newsletter tempted the marketing coordinator to dress up the data rather than improve the marketing campaign. Her suggestion for combining the reports was to use the email platform's tracking links in the social media posts and in the association redistribution so that all results flowed into the email platform's report. "After all," she said, "it doesn't really matter where they came from, just so long as they are reading the articles."
Her proposal would artificially inflate the apparent effectiveness of the email newsletter rather than test ideas that might actually increase it. And the reason for it was completely subjective: "Our subscribers want a newsletter with meat. It should be like a magazine, not just a single article. It won't feel special if we publish it more often."
Thankfully, the marketing coordinator did agree to test more frequent, focused communications, and the revamped publishing schedule improved the email marketing metrics along with those of the other channels. Let the metrics tell their story, don't skew them to match your preferences, and then objectively make a plan from what they show.