The need to understand customers and their requirements before investing in the development of products and services is a well-established marketing principle.
It’s about "need first, solution second", and the logic of this requires little justification.
Before going live with anything, the product or service is tested and scrutinised - marketers do their homework, commission research and launch to a test market - all to ensure it meets the need it claims to address.
And yet, in circumstances not related to product and service development, marketers may be failing to apply this principle where it equally applies.
Let’s think metrics. Metrics are the basis on which the success or failure of marketing initiatives is determined.
So it’s critical they are a comprehensive and accurate reflection of the real performance of such initiatives.
In the absence of rigorous metrics, successful programmes are shelved, while investment continues to be ploughed into those of limited value.
With metrics playing such a pivotal role, strict disciplines should be followed when they are conceived and formulated, and later tracked and interpreted.
But does this always happen? Or do our metrics depend more on whatever data comes most readily to hand than on what genuinely reflects success?
Are we in fact forgetting ‘need first’ and lazily creating metrics by forcing meanings onto the most easily captured data?
The more data an organisation generates through its operational processes, the greater the likelihood of falling into this trap.
There’s a temptation towards views like, "We have too much data already, why go to the trouble of getting more?" or "With so much data, surely the answer is in there somewhere?"
Maybe, but maybe not. Yes, internal sources should be mined for all meaningful and usable customer intelligence, but with the acceptance that this is probably not the whole answer.
Good metrics depend on the relevance, not quantity, of data.
Of all the metrics bandied about, ROI is the one marketers come back to time and again.
A typical approach to ROI is to measure the investment going in at the start, then measure the benefit achieved at the end.
Fair enough perhaps, but this can be overly simplistic. Most marketing programmes are continuous, so ‘the start’ and ‘the end’ become nebulous concepts.
One solution is to build metrics specific to performance at key pre-defined stages, which later aggregate to represent the programme as a whole.
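As a rough illustration of the stage-based approach, the sketch below computes ROI for each pre-defined stage and then aggregates the underlying costs and returns into a programme-level figure. The stage names and amounts are entirely hypothetical; the point is only that the programme ROI comes from aggregating stage totals, not from averaging stage ROIs.

```python
# Hypothetical sketch: per-stage ROI that aggregates to a programme-level figure.
# Stage names, costs and returns are invented for illustration only.
stages = [
    {"name": "awareness", "cost": 40_000, "return": 55_000},
    {"name": "acquisition", "cost": 60_000, "return": 90_000},
    {"name": "retention", "cost": 25_000, "return": 45_000},
]

def roi(cost, benefit):
    """ROI expressed as (benefit - cost) / cost."""
    return (benefit - cost) / cost

for s in stages:
    print(f"{s['name']}: ROI = {roi(s['cost'], s['return']):.0%}")

# The programme-level figure aggregates stage totals, not stage ROIs.
total_cost = sum(s["cost"] for s in stages)
total_return = sum(s["return"] for s in stages)
print(f"programme: ROI = {roi(total_cost, total_return):.0%}")
```

Note that averaging the three stage ROIs would give a different (and misleading) answer, since stages with larger budgets should carry more weight in the overall figure.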
Then of course programmes are implemented across multiple channels, so how is return assessed by channel, when we have probably argued in advance that the channels will operate synergistically?
How is the benefit of a ‘greater whole’ attributed to each of the parts to drive a metric for each channel in relation to its cost?
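To make the attribution problem concrete, here is a deliberately naive sketch: splitting a programme-level return across channels in proportion to each channel's spend. The channel names and figures are invented, and this is only one of many possible conventions, not a recommendation.

```python
# Hypothetical sketch: splitting a programme-level return across channels
# in proportion to spend. Names and figures are invented for illustration;
# channels claimed to work synergistically may need a richer model.
channel_spend = {"email": 20_000, "social": 30_000, "search": 50_000}
programme_return = 150_000

total_spend = sum(channel_spend.values())
for channel, spend in channel_spend.items():
    attributed = programme_return * spend / total_spend
    channel_roi = (attributed - spend) / spend
    print(f"{channel}: attributed = {attributed:,.0f}, ROI = {channel_roi:.0%}")
```

The catch is visible immediately: a spend-proportional split hands every channel an identical ROI, so it cannot tell a strong channel from a weak one - which is precisely why attributing a 'greater whole' to its parts demands more thought than grabbing the most convenient formula.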
Also, return may not be measurable by internal means - an uplift in customer satisfaction is often the objective, in which case metrics should be geared towards direct feedback, such as customer experience surveys or social media monitoring.
Metrics have to be calculated at the right time, and the right time may be ‘not yet’. Web analytics has given rise to a view that returns can be measured in real time or something approaching it.
There’s a danger that technological advances are distracting marketers from their real purpose of generating positive long-term brand value.
This is more difficult to measure and more difficult to set targets against, but actually of greater importance.
Getting metrics right is critical to making the right marketing decisions - decisions that in turn lead to success or failure.
As marketers we should go about defining and using them with the same foresight, planning, and rigour as we would when planning new products or services.
If metrics are built on what’s easy to measure, not what’s important to know, then our marketing programmes will be fatally flawed.
Simon Steel, insight and digital director, Eclipse Marketing
This article was first published on brandrepublic.com