To Be or Not to Be: Can Generative AI Change PR?

By James Paasche

“With great power comes great responsibility” might be a cheesy superhero slogan masquerading as a general philosophy about the ethics of power, but clichés earn their status because there is a kernel of truth in them. With the recent rise of generative AI tools like ChatGPT and DALL-E, the hype surrounding the power they offer is partially offset by a conversation about responsibility and ethics. Government agencies like the Federal Trade Commission have even begun issuing warnings about AI and its potential ethical implications.

But there’s a common thread throughout all of these AI conversations, regardless of what problem the tech promises to solve: AI tools are supplements to the work we do as humans, not replacements. By working in tandem with generative AI, we can elevate our own performance in both quality and productivity.

As tech PR professionals who make a living dealing with hype, we are pretty familiar with proclamations that a new tool will change the way we work. To be clear, AI has already changed how PR pros do our jobs – in one obvious example, Google search is powered by language processing technology, a key component of ChatGPT and other language-based AI tools. For now, though, we want to focus on the hype around generative AI and what it means for PR.

At its heart, PR is about interpersonal communication. A client tells us all about their new service, the PR pro interprets this information and shares it with a journalist (who in turn shares it with an editor, who then delivers it to an audience) – something akin to the old game of telephone. And we all know how that game goes: the message is transformed many times in the telling.

Generative AI has the potential to change our game of telephone by speeding it up and automating some of the intricate (and, let’s be honest, menial) tasks along the way. Auto-generated email responses could answer a reporter’s question that comes in long after hours. Endless searches for messaging documents or old client meeting notes could give way to automatic analysis of those documents at our fingertips, saving time, money, and frustration. And by drawing on a bank of older pitches, combined with data from a reporter’s articles, generative AI could write pitches tailored to those specific reporters.
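The pitch-tailoring idea above boils down to retrieval: match what we’ve written before against what a reporter actually covers. Here is a toy sketch of that matching step, using only the Python standard library and made-up pitches and headlines (a real tool would layer a language model on top of this kind of ranking):

```python
# Toy sketch: rank stored pitches against a reporter's recent headlines
# by simple word overlap (Jaccard similarity). All data is hypothetical.

def tokenize(text):
    """Lowercase the text and strip basic punctuation, returning a word set."""
    return {w.strip(".,!?").lower() for w in text.split()}

def rank_pitches(pitches, reporter_headlines):
    """Return pitches sorted by word overlap with the reporter's headlines."""
    reporter_words = set().union(*(tokenize(h) for h in reporter_headlines))
    def score(pitch):
        words = tokenize(pitch)
        return len(words & reporter_words) / len(words | reporter_words)
    return sorted(pitches, key=score, reverse=True)

pitches = [
    "Our client launches an AI tool for newsroom fact-checking",
    "Quarterly earnings preview for a retail chain",
]
headlines = [
    "How AI is changing the newsroom",
    "Fact-checking in the age of chatbots",
]
best = rank_pitches(pitches, headlines)[0]
# best is the AI/newsroom pitch, the closest match to this reporter's beat
```

Word overlap is a crude proxy for a reporter’s interests, which is exactly the point: the retrieval is easy, while the judgment about whether the match is actually a good fit still belongs to the human.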

A big question here, and one asked repeatedly amid the ChatGPT discourse, is: should we automate all of these tasks? Generative AI has already answered the “could we” side of the coin. For instance, getting to know a reporter and their style is one of the more difficult, but most important, tasks we face as PR pros. It takes time. Could we use generative AI to speed up this process, give us thorough data on a reporter’s tendencies, and then tailor our pitches based on that data? The answer is a resounding yes. But should we? It’s complicated.

Because of the vast amount of data we could possibly take in, having an AI buddy to sort it for us seems like an easy yes. The problem is that these tools don’t necessarily understand context, and they’re built on large-scale scrapes of whatever data is available on the internet. ChatGPT can tell me what John Smith at TechCrunch has written, but it can’t tell me where he might be going, or what he’s hinted at in his articles or Twitter feed that might show us the path forward. And as tools emerge to judge whether something was written by AI, John Smith and most other reporters will know when they’re receiving a half-baked message – whether it comes from a person or a robot. We still need human discernment.

While breaking down that vast trove of data is incredibly useful, the biases and lack of judgment these generative AI tools possess cannot simply be pushed to the side. Misinformation is a big issue in PR, and AI will pull information from all corners of the internet, not just our trusted sources – another instance where we need human discernment and common sense. AI tools have also historically reproduced biases, relied on misinformation and, often, just made things up. We know that we can’t trust everything we read, but ChatGPT doesn’t.

No matter the industry you work in, widespread adoption of generative AI will require improvements that make these tools more trustworthy. Some PR firms, like Edelman, Stagwell, Weber Shandwick, and Boathouse, have used AI to create their own tools to “measure public sentiment (about brands) or target relevant journalists.” But like any tool, its value ultimately comes down to the user and how they choose to employ it. We can’t rely on AI to do our jobs for us; instead, we must work in tandem with it.

While synthesizing data and speeding up menial processes is no doubt valuable, the judgment needed to make AI a reliable partner is the responsibility of those who want to harness its power.