PRWeek: We've made progress in social media measurement and monitoring, but there's still a long way to go. What's the next step?
Jeff Catlin: The next major step is the improved handling and interpretation of short content, like that seen on Twitter. While it's undeniable that the lack of grammar and creative spelling on forums like Twitter represents a major hurdle for content-processing systems, the brevity of the content presents some unique opportunities as well. For example, because a Twitter post is so short, the topic of the post is generally easier to discern. Additionally, if a post carries tone and you understand the lingo, that tone is easy to extract: it tends to be very concrete, without the politeness of traditional media.
PRWeek: Once you've gathered information about social media, what should organizations be doing with it?
Catlin: Brand managers can no longer tightly control the message around their products, and can no longer perform an interpretive dance on the meaning of the feedback they receive. Companies are now forced to take an honest look at what people are saying and make sure that it's the crowd speaking and not a few squeaky wheels; only then can the company effectively engage communities to explain and address their concerns. Basically, the next step is to “unlearn” the lessons of traditional PR measurement and simply listen to what people are telling you and act accordingly.
PRWeek: Last week on the Lexalytics blog, there was an entry about a panel discussion on accuracy in text and sentiment analysis software. According to the post, one panelist, Chris Bowman, former superintendent at the Lafourche Parish School Board, said, "If someone gave you an 85% chance you'd hit the lottery tonight, you'd take it." What is an acceptable level of accuracy for text and sentiment analysis?
Catlin: The required accuracy in sentiment varies tremendously with the application. For example, the financial services industry is concerned with the aggregate tone for an equity or financial sector rather than the tone of a specific article. In that scenario, if you're 85% accurate at the article level, you'll be close to 100% accurate on the aggregate trend across a block of stories.
However, brand managers tend to focus on individual stories, so an 85% accuracy level is often viewed as insufficient. Every now and then a story is scored incorrectly, and because success isn't measured on the aggregate for a product or brand, those mistakes are viewed as a serious flaw. My belief is that over time the PR industry will begin to look at the aggregate statistics, and when it does, automated sentiment will be a perfect fit because it's consistent and accurate across large blocks of content.
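[As a rough illustration of why article-level errors wash out in the aggregate, here is a minimal simulation. This is not Lexalytics code, and it assumes a simplified model: each story has a true tone of +1 or -1, the mix of stories is skewed positive, and a hypothetical classifier scores each story correctly 85% of the time, with errors independent across stories.]

```python
import random

random.seed(0)

def aggregate_tone(n_stories=10_000, pos_share=0.7, article_accuracy=0.85):
    """Compare true vs. scored aggregate tone under independent errors."""
    # True tone of each story: +1 (positive) or -1 (negative)
    true_tones = [1 if random.random() < pos_share else -1
                  for _ in range(n_stories)]
    # The classifier flips the sign with probability 1 - article_accuracy
    scored = [t if random.random() < article_accuracy else -t
              for t in true_tones]
    true_avg = sum(true_tones) / n_stories
    scored_avg = sum(scored) / n_stories
    return true_avg, scored_avg

true_avg, scored_avg = aggregate_tone()
print(f"true aggregate tone:   {true_avg:+.3f}")
print(f"scored aggregate tone: {scored_avg:+.3f}")
```

[Even though 15% of individual stories are misscored, the scored aggregate reliably points in the same direction as the true aggregate; independent errors attenuate the magnitude of the trend but rarely reverse it, which is the sense in which article-level accuracy compounds toward near-certain aggregate trends.]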