Generative artificial intelligence is reshaping the world around us. It is a form of AI that can generate new and original text, images, videos and music via prompts. Platforms such as ChatGPT and Midjourney have exploded in popularity practically overnight, amassing millions of users in mere months.
The public relations industry stands at an inflection point regarding generative AI. A study by researchers from the University of Pennsylvania and OpenAI – the company behind ChatGPT – found that public relations specialists were among the occupations most exposed to the potential transformative effects of generative AI. In Sandpiper’s own AI in the Communications Industry 2023 study, about one in three respondents expressed worry that AI would replace or reduce their roles. Communications practitioners were also concerned about the potential legal and ethical risks generative AI posed.
Point of (intellectual) authority
The critical risk most survey respondents identified concerned the legal and ethical issues raised by using generative AI. Most of these challenges centred on the accuracy, transparency, accountability, and privacy of AI-generated content. Take intellectual property (IP). Generative AI can trawl the web, draw on any of the billions of pieces of original artwork found online, and generate something that is a patchwork of that original art.
That was the case when stock image platform Getty Images took Stability AI, the developer of AI art generator Stable Diffusion, to court over claims that its algorithm copied over 12 million images from the Getty database. Three artists also filed lawsuits against Stable Diffusion and Midjourney, accusing them of training their algorithms on over 5 billion copyrighted images from the internet. In both cases, the creators of the original images neither gave consent nor received compensation.
The use of generative AI platforms raises questions about liability and accountability. If a company uses such platforms to generate content that harms individuals or violates laws, who is responsible – the company, the platform, or its users? Already, AI-generated songs have reached streaming platforms, cutting into the earnings of the platforms, record labels and musicians.
An Orwellian hue
Another ethical concern is the use of generative AI platforms for mis/disinformation, propaganda, or manipulation. They could be used to create fake social media accounts or generate deepfake content to spread disinformation or influence public opinion. Imagine this in the context of politics, where such platforms could undermine democracy and freedom of speech. Case in point: the image of Donald Trump’s arrest was AI-generated, yet it polarised millions on both sides of the aisle. Imagine how that kind of buzz could affect an aspiring political candidate. Using generative AI to disseminate false or misleading information to customers, stakeholders, and the public is not new – it has already happened in a presidential election.
Generative AI can also monitor and analyse customer conversations, including private messages, raising questions about surveillance, data protection and privacy invasion. These platforms generate content based on the data fed into them, including personal data, so companies must be transparent about their data collection and usage practices and ensure compliance with relevant data protection laws. The Cambridge Analytica and Facebook (now Meta) scandal was powered by AI that identified and targeted voters who could be “persuaded” to switch sides – now imagine such a political campaign augmented by powerful large language models. These fears also emerged in Sandpiper’s survey, with 63% of respondents citing governance risk as a concern.
This is the way
How can our craft learn to navigate this new paradigm whilst maximising the opportunities presented by generative AI?
Corporations and public relations agencies need to establish clear policies and procedures for the use of generative AI. These policies should cover data usage and protection, as well as guidelines for content generation and dissemination. Organisations should establish internal protocols for monitoring and auditing the use of these platforms to ensure that they are used responsibly and ethically. At present, only 11% of global respondents indicated that their companies had policies or guidelines in place for using generative AI tools, and only 35% had plans to introduce them within the next 12 months.
When companies design an AI policy, it should:
- Offer guidance for employees to safely use (generative) AI within established limits.
- Clarify data usage and ownership, and safeguard company, personal, and client data.
- Set clear procedures to avoid publishing incorrect or harmful content.
- Ensure generative AI implementation includes human oversight – ‘human in the loop’.
- Define roles and consequences for noncompliance or unethical AI use.
- Be regularly assessed and refined based on technological advances and societal changes.
- Be leadership-led and part of the board’s agenda to showcase solid, long-term commitment.
Companies must implement the necessary policies and protocols and invest in upskilling staff to thrive in a dynamic workplace. Technology won’t replace talent completely, but talent who know how to leverage technology will. Generative AI could have the same effect the internet initially did: organisations that embrace it will reap productivity gains, cost reductions and a superior quality of output. For the public relations industry, successfully leveraging generative AI will confer a competitive edge in this landscape.
Rob van Alphen is the director of strategy and innovation, Sandpiper. Viewpoints is an article series contributed by members of PRHK, Hong Kong’s PR and communications association.