
With AI and Workforce Automation Entering Brand Communications, How Do We Keep Content Ethical?

By Lauren McMenemy on July 25, 2018

So much of the discussion around AI has focused on us humans: Are the robots here to take our jobs? How will workforce automation help us do our jobs better? Yet it seems no one spared a thought for the poor algorithms.

Bear with me here. Automation is hugely important; the fact that we can now check our grammar, automate our research, or set up a drip campaign and know our sales funnel is being fed has transformed marketing and content creation. Artificial intelligence in marketing helps us focus on other demands of the job: the more strategic, analytical, preparatory pieces that will truly take our careers and companies to the next level. It's pushing content creators to the next level of professional development, helping us hone our skills and build for the future. This advancement in technology is nothing short of revolutionary.

Yet workforce automation doesn't happen without AI, and AI doesn't happen on its own. While it's exciting (and perhaps frightening) to envision fully functional, independent robotic assistants, AI doesn't just sit there and do exactly what we want it to.

If we want the most functional AI possible, we need to train it; it's called machine learning for a reason. In order to train it, we need to make some really big decisions concerning the future of the global economy, reputation management, and damage control. No AI can be unleashed on the world and be expected to remain pure and unaffected by human behavior. Just ask Microsoft's Tay, an innocent "teenage" chat bot that Twitter users turned into a sex-mad, conspiracy-theorist racist within 24 hours.

So what can we do to ensure workforce automation doesn't turn into workforce tantrums? This year, the UK's Chartered Institute of Public Relations (CIPR) set up a panel to look specifically at the impact AI will have on the PR and communications industry.

Former CIPR leader Stephen Waddington, chief engagement officer at communications consultancy Ketchum, found an increasing number of vendors trying to sell tools to the market, but when he spoke with practitioners, the technology wasn't well understood: the industry was split between complete denial on one side and fear and panic on the other. To address these misgivings, Waddington set up and now chairs the CIPR's #AIinPR panel to explore the issue and try to characterize the potential impact of AI on the communication industries.

Relax: We Still Need Humans

First on the agenda was an assessment of the tools that had begun to flood the market: not just robot writers, but tools that could be applied to media relations, influencer relations, social media, community management, and more. The panel found that in most tools the actual application of AI was limited. According to Waddington, "AI is being used like blockchain as a veneer to sell stuff to a market that doesn't understand what it's buying."

Next, Waddington asked Jean Valin, a hugely experienced communicator, to assess a PR competency framework and skill set, benchmarking technology against skills through crowdsourcing intelligence. The result of that research, Humans Still Needed, was released by the CIPR last month, and, thankfully, it's not the nightmare fuel you may expect. The study found that AI is currently involved in about 12 percent of what the PR industry does, but that's likely to rise to around 40 percent in the next five years as technology gets smarter and more ingrained in our daily lives.

"Think about what you do in your day-to-day work and the amount of drudgery and repetitive chores that you do that could be automated," Waddington says, echoing many of the positive perspectives you hear in AI discussions. "So much of the PR business is based around using databases to understand relationships; it's using tools to listen to conversations or to monitor them. Technology is cutting across all of that and, as a result, enabling us to have better conversations, better understanding, better pitches. We're becoming more sophisticated in what we do."

He's the first to admit that "not all people are going to take that opportunity" and that there will still be those on all sides of the communications divide-from PR to marketing-who will bury their heads in the sand and let the world unfold around them. Waddington wants to make sure he's at the forefront of that new world.

EQ vs. IQ: A Battle Against the Machines?

Person in black surrounded by neon

Image attribution: Drew Graham

But, he asks, "Where do we go? There's the whole area of white space around ethics, professional guidance around mentoring, negotiations, soft skills. Empathy, emotional intelligence: it's the EQ-type skills where humans are at the forefront, and the harder skills are where tech is having the greater impact. Technology is much better at using large data sets."

Yet these tools are only as good as the data you feed into them. While conducting his research, Valin decided to test the tech and asked an AI to write his report; he got just two lines back, taken from a two-year-old blog post.

"Some of the data will be subject to the echo chamber," Valin says. "The danger there is it feeds you back only what you want to hear. The danger is bad input-fake news, incorrect information. I don't have a level of confidence AI would give me clean data."

Valin says it's this very theory that led him to his paper's title: "Humans Still Needed." He's not convinced artificial intelligence in marketing should be 100 percent trusted with ethical decisions, raising the example of the Bell Pottinger scandal, in which the agency ran a secret campaign in South Africa to stir up racial tension on behalf of a billionaire client. Valin doubts an ethical AI tool would have helped Bell Pottinger make a different decision or avoid the resulting backlash. Those choices all came down to human judgment, which, as we all know by now, can become corrupt and even immoral.

If AI is using data input by humans, you can't fully trust it to make the right decisions or create the most appropriate content for you. Humans are indeed still needed to make decisions based on larger, less easily definable qualities such as tone, sarcasm, empathy, and EQ, says Valin.

"If you overly rely on machines, you'll jump to conclusions," he says. "Each machine has quirks and quibbles, and if you don't understand how the tool is built, what it analyzes, you'll make mistakes. And those mistakes can be costly to reputation-costly to your own accuracy, so I think that's why I'm saying you just can't be complacent.

"The tool is there to assist; don't be too quick to take them at face value. We need to be more critical about this."

Questions of Ethics, Morality, and Humanity

It's those ethical considerations that are of the most concern to New Zealand-based PR expert Catherine Arrow, who was one of Valin's reviewers for the research. Like Valin, she's a member of the Global Alliance for Public Relations and Communications Management and is intrinsically involved in looking into the ethical impact of AI on the industry.

"I think one of the things that anybody who's creating content, information, stories-something that will connect with others-has got to be aware of is that at some point in the very near future they won't be needed," Arrow says, adding that her ethical concerns are less about job losses and more about the impact on the stories we tell.

"If it's a skewed data set and we don't teach the AI well, then it becomes discriminatory, it cuts people out of whatever area of engagement they're involved with, and that is really detrimental. Microsoft's Tay is a really tragic example of how a very sensible, innocent, naive AI tool could be corrupted within the space of 24 hours. All of the programming, all of the teaching that's done is going to be dependent on the ethical stance and moral stance of those who are teaching it to work."

Arrow digs deeper into AI than that, looking at the Big Brother aspect of some of the AI features we're currently seeing introduced, such as facial recognition bringing in emotional resonance, and the implications of these tools beyond social engineering. Imagine a world where we don't look at metrics to decide the best time to post content, but instead can use facial recognition to tell if people are ready to receive the information. She says, "Society is just not emotionally mature enough to make sure this stuff isn't being manipulated."

We might not be ready for it yet, but Arrow believes we'll have to deal with this level of AI in the next 12 to 18 months, and that we need to have some strong ethical discussions about what's right and wrong-particularly given the furor over the recent Facebook privacy scandal.

"There has to be some really robust discussion about how it's going to be used, how people's emotional states are going to be protected," she says. "It's not just data protection, but a protection of humanity-to make sure it's not manipulated to sell another fizzy drink."

Training AI in How to Be Human

robot playing piano

Image attribution: Franck V.

Yet all three of these futurists believe humans are very much still needed in this new world of communications. We may not be the ones doing the monitoring, the forecasting, or the research as more and more roles become automated across PR and marketing, but humans will still need to be responsible for reviewing each of these tasks and offering an additional level of transparency. We'll need to train the AI and the chat bots in "our" way of speaking, and we'll need to ensure our audience realizes they're talking to a chat bot, not a human.

As Arrow says, "You can't be prescriptive and say this is how we'll deal with this bit of tech, because those technologies will change so rapidly. It has to be, what's our ethical approach to this as a group of people?"

And we have to be ready for our roles to evolve. A tool like Wordsmith can take data and turn it into a narrative that even the greatest writer would struggle to identify as an AI creation. Arrow continues, "AI can take the data from my walk and turn it into a 'well done' message instantly. It can turn that conversation into something appropriate for the person in the setting in some ways far better than we can do. When we're doing it we have organizational parameters, a style latched onto our backs. The algorithm is just given context and deployment; it hasn't got all that organizational baggage of you can't say that. The output will be as creative as we allow it to be."
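At their simplest, data-to-narrative tools like the one Arrow describes map structured data onto templated language. Here's a toy sketch in Python of that idea; the thresholds, phrasing, and function name are invented for illustration, not drawn from any real product:

```python
def walk_message(steps: int, goal: int = 10000) -> str:
    """Toy data-to-text generator: turn step-count data into a short
    encouraging message, the way NLG tools map data onto narrative."""
    progress = steps / goal
    # Pick a tone based on how far along the data says we are.
    if progress >= 1.0:
        tone = "Well done! You smashed your goal"
    elif progress >= 0.5:
        tone = "Good going! You're over halfway to your goal"
    else:
        tone = "A gentle start. Still time to reach your goal"
    return f"{tone}: {steps:,} of {goal:,} steps today."

print(walk_message(12500))
# → Well done! You smashed your goal: 12,500 of 10,000 steps today.
```

Commercial tools layer far more sophisticated language models on top, but the core move is the same: data in, context-appropriate narrative out, with no "organizational baggage" filtering the phrasing.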

She continues, "There may be a role continuing for us to inform the creativity, rather than execute it. I think that it's naive to think we're indispensable in that area. Algorithms make decisions all the time, everything they do is a decision, but if we're removed from the picture entirely then the decision-making becomes based on logic and the teachings of the last person who had their hands on the algorithm. A human might say something else, we would see the other option of choice.

"It's that kind of humanity and mercy kindness thinking that we would still need to bring to this environment. But for perhaps the unethical it might be more cost-effective to have logical decisions made than humanitarian ones, and there is the danger.

"It's the relevance. I worry about it becoming content programming-self-selecting based on choice, written to keywords, image produced to discerned desires rather than norms."

Waddington agrees, saying, "Whether we'll actually see a time where a computer is able to take on human empathy and develop a story that will resonate on an emotional level, I don't know. But that's the challenge. Look at the music industry-there are tons of examples of AI developing music scores for orchestras, for bands, but a computer is yet to have a hit."

It's the relationships and ethical practice that will become a key differentiator for storytellers in the future. Our lives are becoming so much easier as AI allows us to automate those dull, repetitive tasks that are so necessary to our roles. And while some have said it's the creativity that will keep humans involved, that too appears to be disappearing.

But we can't let the AI run rampant. If that happens, we risk it developing its own language and cutting us out of the picture entirely. Instead, your best course of action as a marketer is to focus on how AI can help you perform smarter, and on how you can guide the AI to do that in an ethical and acceptable way.

The robots aren't here to take over, but they need our help so this next phase of the digital revolution works with us rather than against us.

For more stories like this, subscribe to the Content Standard newsletter.


Featured image attribution: Dominik Scythe


Lauren McMenemy

Lauren is a storyteller. A journalist by trade, she has worked in agencies, in-house and in the media over her 20-year career. She's worked as an editorial strategist and content creator for some of the world's biggest brands, setting up processes and guidelines, advising on planning, auditing content, building loyal audiences, leading social campaigns, writing blogs and flyers and presentations - pretty much handling the stuff with words. She was born in Australia, has resided in London for the last decade, and writes fiction on the side. You’ll often find her grinning like a fool at a rock concert.