The two common reactions to artificial intelligence are polar extremes – it’s either a tesseract of manifold wonder, or a black box of dread.
Nothing illustrated this better than the recent panic over bots in Facebook’s AI lab developing their own language, with reports claiming spooked engineers pulled the plug on the experiment amid dystopian visions of killer AI.
The reality was less fraught than the hyperbole. Bots have developed their own languages for years. The problem was that Facebook wanted them to negotiate in English, a task beyond the capability of these particular machines.
Rather than evolving into an Ex Machina-style doomsday scenario, the bots were simply not intelligent enough, hence the pausing of the test.
However, for the uninitiated, every AI breakthrough whips up enough ecstasy among believers and skepticism among detractors that it’s easy to come away mystified, unsure what AI is and how it will affect you.
But the hype is real and the exponential strides AI is making are proven; it’s the technology’s applications that are distorted.
"The biggest misconception is that it solves all problems," says Aaron Shapiro, CEO of IPG-owned digital firm Huge.
Some tech-heads, such as Shapiro, say AI is the harbinger of the third phase of the digital revolution, after the birth of the web and advent of mobile and social. "[PR pros] have to be ready for AI or they won’t be in business 10 years from now."
That doesn’t mean every problem has an AI solution. "It’s not a magical elixir," Shapiro notes. "AI has become such an overused word. Everyone has a definition for it."
AI — broad field of study about how machines execute tasks that require intelligence
Machine learning — field of study that gives computers ability to learn without being explicitly programmed
Neural network — algorithm loosely modeled on the human brain
Chatbot — computer program that mimics conversation with people using AI
General intelligence (strong AI) — ability of machines to perform most human activities, a reality experts say is still far off
Narrow intelligence (weak AI) — machine intelligence that takes a narrow goal and improves quality of results by accelerated, autonomous learning; current stage of most AI in media
Artificial super intelligence (singularity) — machine intelligence with senses and sentience beyond human capabilities; think HAL from 2001: A Space Odyssey
Turing Test — one way of determining if machines exhibit a form of intelligence, developed by British mathematician and Enigma code-cracker Alan Turing
PRWeek canvassed experts for their definition of AI and the answers fundamentally boiled down to: AI is a broad field of study about how machines can execute tasks that require intelligence.
After AI’s founding in the years following World War II, scientists theorized they could create an "artificial neural network," a type of algorithm loosely modeled on the human brain. This gave birth to the idea of "machine learning": programs that could learn new tasks without being explicitly programmed by humans.
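The article does not include any code, but the idea of "learning without being explicitly programmed" can be made concrete with a toy sketch: a single artificial neuron, the building block of a neural network, trained to reproduce the logical AND function from labeled examples alone, with no hand-coded rule. Everything below is invented for illustration.

```python
# Toy sketch (illustrative only, not from the article): a single
# artificial neuron "learns" the logical AND function from examples,
# rather than being explicitly programmed with the rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge weights toward correct answers on each labeled example."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            predicted = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            error = target - predicted
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Labeled training data: inputs and the desired AND output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

The program is never told the AND rule; it is only shown examples and a way to adjust itself when wrong, which is the essence of machine learning as the article defines it.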
Due to a lack of computing power, that approach was largely abandoned in the 1980s, save for a few pioneering researchers who specialized in deep learning, the field that studies neural networks.
The early 2010s saw the rise of big data, which gave AI the untold thousands of data points it needed to become smarter, as well as the computing power necessary to process that vastness. The conditions were set for an AI explosion.
"Big data is the fuel for machine-learning algorithms, where AI is the engine of intelligence," explains Rowan Benecke, chair of Burson-Marsteller’s global technology practice.
This resulted in a proliferation of breakthroughs to the point where the average American interacts with AI dozens of times every day. A Fortune story cites data from CB Insights claiming VC firms invested $5 billion in 658 AI companies in 2016, a 61% year-over-year increase.
Meanwhile, an arms race is underway across the marcomms industry, with agencies investing in AI assistants, data analytics, and even new cognitive staff positions.
Karim Sanjabi was hired as executive director of cognitive solutions at Crossmedia to bring a data-centric approach to advertising for the media agency’s clients.
AI’s core function for PR pros is its ability to process data at immense scale and speed to better inform strategy and creative work.
"Technology can find patterns in data," Sanjabi says. "It takes a smarter human to say what it means, but the human would never find that pattern because it would require such a massive amount of data calculation."
AI can streamline operations for clients, create new experiences that increase brand affinity, and improve user experiences, explains Gela Fridman, MD of technology at Huge.
AI also adds a new layer of intelligence to customer-website interactions, notes Mike Cearley, global MD of FleishmanHillard’s social and innovation practice. This most often manifests in the form of chat bots because of their adoption rate, ease of use, and pervasiveness, he adds.
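At its simplest, a chatbot is a program that maps an incoming customer message to a reply; the production bots Cearley describes layer machine-learned language understanding on top of that loop. The keywords and replies in this sketch are invented for illustration.

```python
# Toy sketch (invented for illustration): the simplest chatbot maps
# keywords in a customer's message to canned replies. Real bots swap
# this lookup for machine-learned language models.

REPLIES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "price": "Pricing starts at $10/month; see our plans page.",
    "human": "Connecting you to a live agent now.",
}

def respond(message):
    """Return the first canned reply whose keyword appears, else a fallback."""
    text = message.lower()
    for keyword, reply in REPLIES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't catch that. Could you rephrase?"
```

Even this crude version shows why chatbots spread so quickly: the interaction model, message in, answer out, is one customers already know from texting.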
In the near future, agencies could use AI to track fake news, a task currently handled manually by fact-checkers and journalists. During Huge’s quarterly hackathon in January, a team produced an algorithm to gauge bias in news articles, Fridman says.
They fed the algorithm articles and it learned about bias and authenticity. CEO Shapiro says he mulled over releasing it as a public service.
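Huge has not published its hackathon algorithm, so as a generic illustration only, one crude way such a system might score an article is by measuring the density of emotionally loaded language; a real classifier would learn its signals from labeled examples rather than a hand-picked wordlist. The terms below are invented.

```python
# Generic illustration only; Huge's actual algorithm is unpublished.
# A naive bias score: the fraction of an article's words drawn from a
# (hypothetical) list of emotionally loaded terms.

LOADED_TERMS = {"outrageous", "disaster", "shocking", "corrupt", "hoax"}

def bias_score(article):
    """Return the share of words that are loaded terms, from 0.0 to 1.0."""
    words = [w.strip(".,!?").lower() for w in article.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in LOADED_TERMS)
    return hits / len(words)
```

A trained model would replace the fixed wordlist with weights learned from articles already judged biased or neutral, which is what "fed the algorithm articles" describes.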
"There are a lot of opportunities from a PR perspective in terms of understanding where the content is, how real it is, and where it should be integrated," Fridman adds.
Much of the hype around AI centers on its potential to decimate agency workforces. Positions involving number-crunching, repetition, administration, operations, and junior-level tasks could be eliminated, say experts interviewed by PRWeek.
"Repetition-based tasks or data entry that can be done quicker at scale are going to be fulfilled by technology," says Cearley. "Community management as a function in comms and social media can and should be better automated."
Other jobs at risk include account managers, account directors, business directors, data analysts, and unit managers, "but we’re far away from machines being able to do anything revolving around human-to-human interaction – relationship-building, team-building – or tasks that need a human, emotional, or strategic element," adds Cearley.
While agencies seek out AI tech to maintain their competitive edge, another problem of credibility has surfaced. A few years ago, the tech industry endured "cloud-washing": companies eager to cash in on the cloud craze attempted to sell deceptively branded "cloud solutions."
Now, the tech industry is in a period of AI-washing, says Microsoft comms chief Frank Shaw. Vendors are using AI as a marketing buzzword to sell their products.
"If someone says, ‘I’m using a multi-level DNN [deep neural network],’ you should ask why that matters," Shaw advises. "They won’t be able to explain what a multi-level DNN is, and it really doesn’t matter, in the same way you don’t need to know what processor your phone is running on. You just know it works."
Confronting the ethical challenges of AI is a daily part of the job for Eric Horvitz, director of Microsoft Research Labs. As chair of an advisory panel, he helps provide ethical oversight of Microsoft’s forays into the field.
Horvitz ponders what he calls AI’s "uncertainties beyond the design," or what happens when you release technology made in a laboratory into the world.
For brands, AI tools pose thorny moral dilemmas, he adds, blurring the line between engagement and manipulation.
"Let’s say one company has a goal of maximizing how much time you spend on a timeline," he posits. "Is it ethical to not reveal there’s a model trying to make sure you stay with an application they optimized for you?"
At the heart of this transparency issue are social bots, or Sybil accounts, which are "computer algorithms that automatically produce content and interact with humans on social media." Some estimate millions of social bots and fake accounts inhabit social networks.
"The biggest ethical concern [around social bots] is transparency," says Fleishman’s Cearley. "At the same time, there are concerns in terms of the technology and security of information if we were to collect data. It exposes us and our client to a lot of risk."
Microsoft’s Horvitz says he would support regulation for the AI industry, but cautions that AI is a diverse field of computer science made up of myriad disciplines rather than one thing, not "some mysterious blue-green gas that will pour out of the vents."
Ethical and regulatory concerns will continue to be a factor. Ultimately, though, navigating the AI hype comes down to cutting through the noise to unearth functions that can augment human input and add real value to agency and client communications disciplines.