Fake documentaries like Plandemic and groups such as QAnon have made it clear to Americans that conspiracy theories are widespread. But the problem isn't limited to public health or governance. Misinformation is a burden for consumer brands, too.
Or, as Weber Shandwick chief innovation officer Chris Perry puts it, disinformation and misinformation are "as old as media itself, but in our current landscape, they can evolve and become amplified more readily than ever before."
Nate Jaffee, SVP of integrated strategy at Praytell, agrees, noting that disinformation is especially dangerous as the public loses trust in organizations.
"Disinformation has been a major concern in 2020," he says. "Trust in institutions was already on the decline before the pandemic, so in the face of a crisis, brands are more and more stepping in to fill the void of accurate information and guidance."
A major part of the challenge for brands is just how widespread dis- and misinformation and fake news are.
"People consume and share it more than the truth," says Yannis Kotziagkiaouridis, global chief data and analytics officer at Edelman. "Systems that host this information benefit from high traffic and shares because they can then generate more ad revenue. And the cycle repeats."
In other words, brands can become the sacrificial lamb of the attention-traffic-ad-revenue cycle. Yet fake content is not inherently harmful. When we think of "fake" content, deepfakes or the impersonation of an individual or brand come to mind. But it's not always about sowing discord.
"'Fake' content is everywhere, and it can be used for entertainment or to sow negative sentiment," Perry explains. "Any time you use a lens on Snapchat or Instagram to change your voice or face, technically that is fake content."
Nevertheless, PR pros have to stay abreast of any and all misleading or false content that is being shared about or on behalf of their brands. But the good news is that doing this requires many of the same monitoring and measuring skills they already have.
Edelman uses a portfolio of tools to identify whether information in itself is fake, but also to evaluate content, the person or entity that generated it and the narrative or agenda of the entity. From there, Kotziagkiaouridis notes that the firm uses the same tools to understand reach, awareness and impact of this false content as it would any other type.
"The primary difference is that we know through our ability to look at benchmarking that if what happens is polarizing or controversial, it moves at different speeds," he adds. "We may use the same tools, but the speed and impact are different, and we must bring that understanding into the mix. Particularly during the pandemic, we've seen that disinformation moves 10 times faster than the truth."
Accounting for this speed requires vigilance. "Brands need to constantly monitor so they can have the opportunity to strategically address misinformation early, before it becomes repeated so often that it is seen as true," Perry says.
Weber also uses a range of media forensics tools and processes to better understand the type of disinformation, the speed at which it's being shared and the motive behind it. The technology at its disposal enables the firm to understand conversations and sentiment in real time. The Interpublic Group agency has also strengthened its ability to analyze network dynamics, which can amplify false content in "record time," says Perry.
Jaffee tracks trends over time at Praytell to have a better sense of the landscape, and to be better able to detect anything out of the ordinary.
"We've been able to isolate misinformation based on certain patterns, keywords or clues found in accounts, and reach can be estimated based on audience size and engagements," he says.
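The pattern-based flagging Jaffee describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not Praytell's actual tooling: the posts, keyword patterns and the reach heuristic (author audience plus engagements) are all assumptions made for the example.

```python
import re

# Hypothetical post records; real data would come from a social listening API.
posts = [
    {"text": "BREAKING: secret cure they don't want you to know",
     "followers": 12000, "engagements": 850},
    {"text": "Our new product launches Friday!",
     "followers": 3000, "engagements": 40},
    {"text": "Leaked docs PROVE the brand poisons its drinks",
     "followers": 45000, "engagements": 2100},
]

# Illustrative phrases often seen in misleading posts (an assumption for this sketch).
FLAG_PATTERNS = [r"\bsecret cure\b", r"\bleaked docs\b",
                 r"\bthey don't want you to know\b"]

def flag_posts(posts):
    """Return posts whose text matches any flagged pattern."""
    return [p for p in posts
            if any(re.search(pat, p["text"], re.IGNORECASE)
                   for pat in FLAG_PATTERNS)]

def estimated_reach(flagged):
    # Crude proxy: author audience size plus engagements, since each
    # share or like can surface the post to new feeds.
    return sum(p["followers"] + p["engagements"] for p in flagged)

flagged = flag_posts(posts)
print(len(flagged), "flagged posts; estimated reach:", estimated_reach(flagged))
```

In practice the keyword list would be refined over time as new narratives surface, which is the trend-tracking Jaffee refers to.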
Measuring impact is a bit more challenging, as it requires tracking how misinformation influences consumer sentiment and action over time. An additional hurdle is the incongruity of data from different sources.
"Twitter offers the most complete view, but how many private Instagram accounts or Facebook groups are sharing misinformation?" asks Jaffee. "In some cases, we may not know how bad the misinformation is until it has spread to a critical mass."
John Gillooly, SVP of data and analytics at Hill+Knowlton Strategies, has a similar approach.
"We are constantly collecting data on a large set of topics to understand what information is floating around out there and then working closely with account teams with domain expertise to understand how it might collide with our clients," he says.
Once misinformation has been identified, the next step is to determine whether people are actually buying it. At H+K, staffers test across datasets to better understand this.
"If some information is being heavily circulated, is it also being searched for more? Survey data can be another opportunity to cross-check what is a genuine belief," Gillooly adds.
So you've identified mis- or disinformation targeting your brand or client. Then what? In some cases, nothing.
"For example, if the false narrative is promulgated by a non-influential source and isn't gaining traction, no action may be the right action," Perry says. "In fact, responding might amplify the disinformation and drive credibility and reach that wasn't initially present."
Kotziagkiaouridis agrees that it is important to identify who has been exposed and respond within those sub-groups, while ensuring that people who may not have heard of the issue aren't introduced to it by the brand's response.
He also advocates proactively identifying groups that may be more susceptible to disinformation.
"If you give them the facts before they've been exposed, they're less likely to believe the false narratives," he says. "Look at these vulnerable populations and proactively create a narrative—a narrative of truth."
Another proactive approach is to develop trust between a brand and its stakeholders.
"If your brand is trusted by the community of people that interacts with it, from investors and consumers to policymakers, if you've built trust as a matter of everyday practice, in a way you have an armor that protects you when misinformation arrives," Kotziagkiaouridis says.
The other component is to lean on brand advocates, arming them with the truth and sending them out with correct information to communicate directly with the targeted consumer, including those who may be at the highest risk of exposure to misinformation.
"From there, follow the trajectory of the misinformation after all of these corrective actions and track the volume, share, amplification, who is and isn't talking about it and consumer sentiment using qualitative work," Kotziagkiaouridis continues.
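Tracking the trajectory Kotziagkiaouridis describes comes down to comparing volume before and after the corrective action. A minimal sketch with invented daily mention counts (the dates, figures and the before/after-average comparison are assumptions for this example):

```python
from datetime import date

# Hypothetical daily mention volume of the false narrative.
daily_volume = {
    date(2020, 9, 1): 1400,
    date(2020, 9, 2): 1900,
    date(2020, 9, 3): 2300,   # corrective campaign launches this day
    date(2020, 9, 4): 1600,
    date(2020, 9, 5): 900,
    date(2020, 9, 6): 400,
}

correction_date = date(2020, 9, 3)

pre  = [v for d, v in daily_volume.items() if d < correction_date]
post = [v for d, v in daily_volume.items() if d > correction_date]

pre_avg, post_avg = sum(pre) / len(pre), sum(post) / len(post)
change = (post_avg - pre_avg) / pre_avg * 100
print(f"volume change after correction: {change:.0f}%")
```

A real measurement program would layer share counts, amplifier accounts and qualitative sentiment work on top of this raw volume trend, as the quote describes.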
The media can play a similar role to brand advocates. "Trusted media sources remain the most effective antidote to share accurate information with broad reach," argues Jaffee. "We also invest a lot of time on social, correcting misinformation and often building full-scale campaigns to spread the correct message."
Ultimately, however, potential crises and disinformation need to be monitored and managed daily. In the current landscape, it's not a question of if but when your brand may be targeted.
And if you can see it, it's probably too late, Perry says: "Like a weed, it's hard to root out once it's already grown roots."