Many in the communications field have written about how corporate communicators should make more - and better - use of data in their day-to-day work. And the breadth of monitoring and measurement capabilities available to deliver this data has perhaps never been greater.
So why do so many teams still not rely heavily on data? After all, even teams making only cursory use of media measurement data will often admit that good data could help them in many areas. For example:
- Learning which aspects of their strategy are working
- Capturing mindshare on key topics from competitors
- Presenting results to management in a persuasive way
The benefits of good data are clear. And yet many teams - including a few making very significant investments in monitoring and measurement - still seem cautious about basing their strategy on it, using it for decision making, or featuring it prominently in their reports to senior executives.
In our meetings with communications executives, the reason many give privately for not using data more aggressively is that no one really trusts their data. Teams commonly share anecdotes: the time they discovered errors in the numbers, the trends in metrics that didn't match what the team was observing on the ground, or the meeting where executives asked probing questions and the analysis fell apart.
In one typical story, a CCO told us that shortly after buying an expensive measurement tool, the team stopped paying attention to the spiking indicators and alerts the system was generating, because each time they dug in to understand the warnings, they turned out to be false alarms.
Why are communicators experiencing this shortfall in data quality despite the rapid advance of technology? Software advances are enabling teams to comb through ever larger volumes of content and to visualize it so they can quickly explore trends and patterns. But purely automated analysis has key shortcomings.
Here are two critical ways media measurement technology can fail:
1) Noise drowns out the signal. Technology still struggles to identify which content is relevant. If the system you use lets a significant proportion of erroneous or irrelevant hits slip through, this noise will often dilute relevant content so much that metrics become misleading or even random.
How significant is this noise factor? In our client work, we’ve found that after filtering with keywords and Boolean logic:
- 75-90% of remaining traditional media stories can be noise (erroneous, irrelevant, duplicative, etc.)
- 95% or more of remaining social media posts can be noise
Yet the true magnitude of the noise problem is usually hidden within large datasets that are time-consuming to review, so teams often don't notice until the numbers start to seem "off." The short sketch below shows how quickly noise can swamp a metric.
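To make the dilution concrete, here is a rough, hypothetical simulation (our own illustration, not drawn from any client data or vendor tool): it blends a "true" positive-sentiment rate of 70% with coin-flip noise and shows what a simple share-of-positive-mentions metric would report at different levels of relevance. The rates, sample sizes, and function names are illustrative assumptions only.

```python
import random

random.seed(7)

def observed_positive_share(relevant_share, true_positive_rate, n=10_000):
    """Blend genuinely relevant items with irrelevant 'noise' items and
    return the share of positives a dashboard would report."""
    positives = 0
    for _ in range(n):
        if random.random() < relevant_share:
            # A relevant mention: reflects the brand's real sentiment.
            positives += random.random() < true_positive_rate
        else:
            # A noise item: its "sentiment" is essentially a coin flip.
            positives += random.random() < 0.5
    return positives / n

# Suppose the true positive share among genuinely relevant coverage is 70%.
for share in (1.0, 0.25, 0.10, 0.05):
    print(f"relevant share {share:>4.0%} -> dashboard reports "
          f"{observed_positive_share(share, 0.70):.0%} positive")
```

With 90-95% noise, the reported number hovers near whatever the noise averages out to (here, roughly 50%), so even a real shift in sentiment barely moves the dashboard.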
2) Subjects in the content aren’t accurately mapped to things the team cares about. Once the noise is removed and the relevant content is identified, that content must be analyzed in a way that ties to your objectives. Simplistic keyword searches and vocabulary analyzers often produce little insight into the actual subjects and sentiments in the content.
One real-world example: A leading waste services company was perplexed by rising negativity in social posts analyzed by a widely used automated monitoring tool. When we examined the data, we found that tweets praising the company’s service were being marked as negative simply because the word "trash" appeared in them.
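That failure mode is easy to reproduce with a toy lexicon scorer. The sketch below is a hypothetical illustration of the general keyword-counting approach, not the actual tool involved; the word lists and sample tweet are made up for the example.

```python
# A deliberately simplistic lexicon scorer, similar in spirit to the kind of
# vocabulary matching that can misfire. Word lists and the sample tweet are
# illustrative assumptions, not taken from any real tool or dataset.
POSITIVE_WORDS = {"great", "friendly", "fast", "thanks", "love"}
NEGATIVE_WORDS = {"trash", "garbage", "late", "missed", "spill"}

def naive_sentiment(text: str) -> str:
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweet = "My trash and garbage were picked up right on schedule this morning."
print(naive_sentiment(tweet))
# -> "negative": the scorer counts "trash" and "garbage" as complaints,
#    even though the tweet is praising the pickup service.
```

A scorer like this has no way of knowing that, for a waste services company, "trash" is the product rather than a complaint.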
Before you can feel comfortable relying on data in your decision making and reporting to top management, you must first trust it. And to trust it, you must have confidence that it’s being thoroughly cleaned (removing noise) and analyzed properly (to connect with your goals). Would you stake your reputation on the quality of the data you have today?