Few events, if any, draw more eyeballs than the Super Bowl. So when this year’s big game aired on February 3, the 100 million-plus U.S. viewers could not avoid noticing:
1. A sixth championship for the New England Patriots.
2. The way-more-than-six tattoos adorning the body of a certain halftime-performing pop star.
3. Numerous commercials playing into the public’s anxiety about technology and AI.
This third observation provided a perfect jumping-off point as industry leaders (representing brands such as IBM, Uber, Pitney Bowes, and more) joined Omnicom Public Relations Group (OPRG) and PRWeek for a wide-ranging roundtable discussion about AI and the numerous comms challenges associated with it.
To set up that conversation, the event kicked off with a presentation focused on the findings of a recent study conducted by OPRG’s AI Impact Group, the inaugural AI Risk Index. Various stakeholder groups were surveyed to pinpoint pain points, while industry sectors and brands within those were studied to gauge their AI preparedness and vulnerability.
CAUSES OF ANXIETY
When asked about potential risks associated with AI, 59% of consumers admitted concern. While job loss was cited most often, data privacy and security, safety of AI-powered machines, and lack of human customer support/service all scored highly.
"Consumers are more fearful than excited about AI," explains Andrew Koneschusky, partner, CLS Strategies and one of the presenters of the Index’s results. "And those concerns extend well beyond job loss."
Consumers want more government regulation, but even that presents challenges.
"Policymakers don’t always understand the things they regulate," he warns. "Take the Senate hearings with Mark Zuckerberg. (Former) Sen. Orrin Hatch asked, ‘How do you sustain a business model in which users don’t pay for your service?’ Zuckerberg replied, ‘Senator, we run ads.’ If you can’t understand the business model behind Facebook, try wrapping your head around something like AI. That’s why there’s concern when we start talking about regulation."
Job loss and the lack of human customer support/service concerned employees more than they did consumers. And while both groups seemed equally nervous about safety issues, Koneschusky notes a particular comms challenge.
"There’s less tolerance for risk when machines are in charge, as opposed to humans," he explains. "That’s not necessarily fair, but this is the context all communicators must work through."
In addition to consumers and employees, policymakers, think tanks and advocacy groups, and analysts were also part of the study. When their opinions were surveyed, matters such as national security concerns, potential ethical biases in algorithms, and liability were all prominently mentioned.
"Analysts also feel there’s too much hype," reports Koneschusky. "There’s a need to demystify AI and communicate about it in a way people can actually grasp."
The AI Risk Index also studied 26 large companies in three industry sectors (eight each in transportation and retail, 10 in manufacturing), assessing them against four pillars to determine each brand’s vulnerability and preparedness for AI. These pillars are: traditional and social media buzz; consumer perception; employee perception; and company positioning. Each pillar was scored from 0 to 100, and each company’s score was then adjusted by a multiplier based on its industry.
It is important to note that the last of the four pillars reflects proactive brand activity, while the first three reflect external audiences’ perceptions of those brands’ AI prowess.
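To make the methodology concrete, here is a minimal sketch of how such a composite score might be computed. The exact aggregation formula is not disclosed in the Index; this sketch assumes the four pillar scores are simply averaged and then scaled by the industry multiplier, and all names and values are illustrative, not the study’s actual data.

```python
# Hypothetical sketch of the AI Risk Index scoring described above.
# Assumption: the four 0-100 pillar scores are averaged, then the
# result is scaled by an industry multiplier. Names and values are
# illustrative only.

PILLARS = ("media_buzz", "consumer_perception",
           "employee_perception", "company_positioning")

def company_score(pillar_scores: dict, industry_multiplier: float) -> float:
    """Average the four 0-100 pillar scores, then apply the multiplier."""
    base = sum(pillar_scores[p] for p in PILLARS) / len(PILLARS)
    return round(base * industry_multiplier, 1)

# Example: a brand strong on its own positioning but weaker on
# external recognition -- the gap the study repeatedly flags.
example = {"media_buzz": 50, "consumer_perception": 48,
           "employee_perception": 52, "company_positioning": 90}
print(company_score(example, industry_multiplier=0.9))  # 54.0
```

Note how a high company-positioning score gets pulled down by weaker external-perception pillars, mirroring the "recognition gap" the presenters describe below.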
As sectors, transportation scored 55.3, manufacturing 49.3, and retail 44.
To put the above scores in perspective, Mary Elizabeth Germaine, partner/MD, global research and analytics at Ketchum, introduced two unidentified brands as aspirational benchmarks. These two entities, which operate across the three sectors surveyed, had overall scores of 71.5 and 70.7, respectively.
"Even high-scoring companies are well behind those aspirational benchmarks," she notes. "But even with those two brands, there’s a definite gap between the recognition they get from various audiences and what they are actually saying."
To illustrate that point, one of the "aspirational" brands scored 94 in company positioning, but only between 54 and 58.8 in the other three pillars. The second scored 100 in company positioning and 50.3 to 57 in the other three.
"Even with the best brands," explains Germaine, "there are still things to be learned." And if those companies have so much to learn, clearly other brands do.
In the retail sector, the best-performing brand scored 52.6. It rated highest on company positioning, but not nearly as well on external recognition.
"The challenges companies face become so apparent," says Germaine. "How do you corral all this data and figure out the right communications strategy so you’re telling your own story, controlling your own message, and getting credit for the things you’re doing? [Even the index’s best-scoring brands] are not."
Germaine concludes by highlighting a key lesson to be learned from the lowest-scoring retail brand (27.8), which scored 5.5 in company positioning.
"Your agenda might be ‘We’re not doing anything,’" she concludes. "You still need to talk about why, from a business-operations perspective, you made that decision. Otherwise, you will be irrelevant."
BEYOND THE NUMBERS
While the Index focused on certain sectors and companies, there are lessons to be learned for all in terms of what would make any brand particularly vulnerable to risks associated with AI.
"The reputational risks from gender and racial bias, in particular, can be catastrophic to brands in the #MeToo era and current political climate," warns Koneschusky. "Racism or sexism in AI could erode consumer trust, undermine employee confidence, and tank a stock price, especially for brands that champion diversity and inclusion. If this isn’t keeping you up at night, it should."
The study results, however, do place the retail sector as the most vulnerable to and least prepared for AI-related reputational risks. Koneschusky offers an explanation.
"Specific retailers doing very little in the AI space are driving the overall retail industry score lower," he adds. "In addition, there’s more stakeholder recognition of the impact of AI in retail versus other industries. Transportation companies, on the other hand, tend to have higher scores due to better company positioning around AI, which includes not only sharing more AI-related news, but also investing more in AI technologies, hiring employees with AI experience, and acquiring AI companies."
Even within the retail sector, some brands scored lower than others, which equates to greater risk exposure.
"The lowest scoring brands let others own their AI narrative," offers Koneschusky. "Some of the higher-scoring companies are pushing a narrative, but external audiences aren’t actually giving them the recognition. What they are currently saying isn’t necessarily resonating. Their message may be falling on deaf ears."
The bottom line, as captured by Koneschusky: "No industry is fully prepared."
INDUSTRY LEADERS: INITIAL IMPRESSIONS
–Sheryl Battles, VP of comms and diversity strategy, Pitney Bowes
–Bonin Bough, founder and chief growth officer, Bonin Ventures
–Matt Caruso, director, corporate comms, corporate reputations, and digital engagement, KPMG
–Saswato Das, VP, corporate comms, IBM
–Mary Elizabeth Germaine, partner/MD, global research and analytics, Ketchum – AI Impact Group
–Carey Hennigar, global VP, risk and reputation, Storyful
–Andrew Koneschusky, partner, CLS Strategies – AI Impact Group
–Laura Nelson, CCO, Nielsen
–Michael Neuwirth, senior director, external comms, Danone North America
–Andrew Taylor, VP of PR, Hudson's Bay Company (HBC)
–Matt Wing, head of comms for advanced technologies, Uber
The AI Risk Index presentation was followed by an 11-person roundtable that delved deeper into concerns flagged by the study and uncovered additional – and sometimes surprising – sources of anxiety. Participants offered counsel on how to craft effective messages to combat those fears.
To solve a problem, you must first recognize the problem. The convened leaders offered insights that proved them adept at both. Below we share some of our panelists’ key takeaways.
•BIG GAME SPOTLIGHTS BIG COMMS CHALLENGES
"The 2019 Super Bowl Ads are a case study in technological dread" (New Yorker headline, February 2). "Robots’ lead role in Super Bowl ads hint at tech anxieties" (Wall Street Journal headline, February 4).
Michael Neuwirth, Danone North America: Such spots only feed into xenophobia. There is a fundamental fear and lack of understanding at play with AI. Anthropomorphizing a robot so that it becomes friendly only accentuates the issue.
Matt Caruso, KPMG: Today’s workers don’t want to have The Jetsons at home and The Flintstones at work. So at some point we have to figure out how we’re going to use – and communicate about – these technologies in a really impactful way to get people comfortable with how they help us do our jobs better. They are certainly making our home lives easier.
Furthermore, a really good tweet came out right before the Super Bowl. It noted all the things that didn’t exist the first time Tom Brady won a championship 17 years ago. All these disruptive technologies and devices we can’t live without now. At some point, AI is going to be that. We need to get over the fear.
Mary Elizabeth Germaine (Ketchum): The messages that come out have to be about simplified benefits of what you as an employee get, what you as a consumer get. Too much messaging around AI is spoken in company speak, not consumer or employee speak. If they can’t understand what you’re saying, they won’t trust anything you say.
Saswato Das, IBM: The general populace is confusing the AI we have now with the AI of science fiction, which portrays robots as a force for evil. What we have today is "narrow AI" – algorithms written by humans to help them make better decisions, which has obvious benefits in business and life. As communicators, we need to better explain this and show how AI frees up humans to do what they do best: creative work.
History also shows us that sometimes these fears are not well founded. Take the ATM. Bank tellers were certain their jobs would go away. Forty years later, there are more bank tellers than before the advent of the ATM. These machines freed up tellers to focus on sales and marketing.
Carey Hennigar, Storyful: You have to be transparent and acknowledge the risks of AI, then talk about how your company is embracing them or what it’s doing to address them. That’s the only way to establish trust.
Companies would also be well served talking more about the people actually programming the AI. Doing so could reassure nervous audiences.
Bonin Bough, Bonin Ventures: What needs to align is not just the communications, but businesses’ consciousness about how they actually roll out AI and technology into the marketplace. AI does not just provide better products, but also potentially more job creation. That is a piece that’s not really focused upon.
But there’s actually a deeper, more fundamental human fear. It’s not that machines are going to take over; it’s whether they’re better than us.
Matt Wing, Uber: Should we care? Cars drive faster than we can run. Should we fear them, hate them because of that? Our challenge is how to bring logic into this equation with humans. We need to help them understand that some of the things they’re scared of are about their own personal anxieties. They aren’t actually related to machines.
Andrew Koneschusky, CLS Strategies: In and of itself, AI is an amorphous concept. Grounding comms in use cases and applications can help bring some tangibility to the intangible. If people don’t understand something, they fear it, and confusing terminology only heightens that fear.
•YOUNGER AUDIENCES POSE UNIQUE OBSTACLES
An argument could be made that younger generations who grew up in the digital age are not nearly as fearful about AI, thus alleviating some of the messaging obstacles. Not so fast…
Mary Elizabeth Germaine (Ketchum): You see younger consumers begin to push back a bit more on the use of technology. Increasingly, you see them doing 30-day challenges where they’re not using their phones. They’re definitely more accepting of technology, but there is a resistance and a sentiment of overreliance on technologies such as AI.
Sheryl Battles, Pitney Bowes: Younger generations really understand how tech, including AI, works. And that familiarity can make them deeply suspicious. In the case of this generation, it’s a very interesting perspective.
•ETHICS, DATA, AND DIVERSITY ENTER THE CONVERSATION
Laura Nelson, Nielsen: For us the biggest thing is getting the best, most representative, most granular data we can so that we can represent the truth to the marketplace. That can be hard. But our focus is really about the data that goes into the machine learning or AI we program. It needs to take into account socioeconomic and racial factors. The data that goes in is the foundation, so it has to be top notch.
Matt Wing, Uber: The truth is, when you think of AI from an ethics perspective, I don’t think consumers or the public will ever be fully satisfied. And that’s fine. I don’t think they should be. That tension forces companies to actually change, not just talk about changing. And as a communications person, you’re often the one fighting to make sure your company does the right thing. You can’t assume the system at large will always intrinsically do it. It won’t.
Michael Neuwirth, Danone North America: It’s not just about the benefits of AI, but how AI impacts the way companies treat employees, especially when AI is implemented in a manner in which people will be working directly with it. Remember, it’s people who make the decisions about when and how to implement AI, not the other way around.
Sheryl Battles, Pitney Bowes: When you talk about ethics and bias concerns, it’s so important to have people behind AI who represent diversity and inclusion. Facial recognition software, for example, is not nearly as effective on darker skin. So if half the population’s perspective and life experiences are not involved in creating AI and programming, we’re going to miss some key things.