Here’s a common example we might run into.
An industry-standard metric, like search engine hits to a website, is used as a benchmark to gauge the effectiveness of an SEO campaign. Typically the comparison window spans at least twelve months, so that seasonal or industry shifts are absorbed into the overall average. An initial comparison can be run over that same twelve-month period to show directly where the SEO campaign falls against standard results. So there is immediate gratification in setting the benchmark.
But then the process slows considerably: changes need to be made, the data needs to be compiled, and only then can progress be measured. Typically that will take at least a few months. And while shorter windows can be used to infer progress, overall improvement can't be gauged until the average has had time to rise.
This is where impatience can produce false comparisons. The twelve-month average will only improve gradually, because the newest months can't immediately offset the older ones. In an effort to prove the effectiveness of the current campaign, a request might be made to compare the current month to the twelve-month average. While this helps show progress, it doesn't account for seasonal or industry-based fluctuations.
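To make the arithmetic concrete, here's a minimal sketch with made-up numbers (the traffic figures and campaign timing are hypothetical, not from any real account): even after three strong months, the trailing twelve-month average barely moves, so comparing the latest month against it overstates the lift.

```python
# Hypothetical monthly search-hit counts: a campaign starts in month 13
# and lifts traffic from ~1,000 to ~1,500 hits per month.
baseline = [1000] * 12   # twelve pre-campaign months
campaign = [1500] * 3    # three improved months so far

months = baseline + campaign

def trailing_12_month_avg(series):
    """Average of the most recent twelve months of a series."""
    return sum(series[-12:]) / 12

print(trailing_12_month_avg(baseline))  # 1000.0 — the original benchmark
print(trailing_12_month_avg(months))    # 1125.0 — nine old months still drag it down
```

The current month (1,500) looks like a 33% jump over the rolling average (1,125), but most of that gap is simply the average lagging behind, not new progress.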
As an example, many business-to-business companies see a slowdown over the summer and in late December. Business-to-consumer companies might see the inverse. Certain industries, like health insurance, have a quoting and enrollment period that sees substantially more activity than other times of year.
False comparisons often disregard these fluctuations, which can lead to overly positive or negative conclusions. For instance, if a consultant works with school districts that book staff engagements only in August, every other month is going to look lackluster compared to that single month. Even a single anomalous period can skew an average up or down significantly, which undermines the accuracy of any comparison that doesn't account for that trend.
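A quick sketch of the school-district scenario shows how one spike distorts the average (the engagement counts here are invented for illustration):

```python
# Hypothetical: eleven quiet months at 2 engagements each,
# plus one August spike at 30 engagements.
monthly_engagements = [2] * 11 + [30]

average = sum(monthly_engagements) / len(monthly_engagements)
print(round(average, 2))  # 4.33 — more than double a typical month
```

Against that average, every normal month (2 engagements) looks like underperformance and August looks like a triumph, even though both are exactly on trend for this business.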
Make sure your trending and data analysis compares apples to apples, not apples to oranges. Drawing conclusions from incompatible data defeats the entire purpose of data-driven decision making in digital marketing. Have the patience to track gradual improvement rather than rush to infer larger shifts that may or may not be real.