
The Realities of Using ChatGPT to Write for You – What to Consider When It Comes to Legalities, Reputation, Search and Originality

March 17, 2023

This article was originally published on michellegarrett.com and reshared here with permission from the author.


ChatGPT – who hasn’t been talking about it? The conversations around the implications of this shiny new object have been endless in the marketing realm over the last few months.

What grabbed my attention was a post by someone who had hired a freelance writer to create content – only to find that what the writer had delivered was, in fact, generated by the chatbot.

Alarm bells started going off in my head – how will a client know if a writer is actually writing the content they hired them to write – or using ChatGPT to write for them? Will writing content yourself versus allowing a bot to write it now be a differentiator? What about plagiarism and copyright infringement? Can clients get into legal trouble if they accept content written by a chatbot and publish it as their own? What happens when a consultant feeds proprietary client information into the chatbot?

At the same time, I saw marketers and companies – including some in the C-Suite – all too ready to hop on the bandwagon.

I decided to dig in and do some research. Before you get caught up in the hype, here’s what you should know.

What is ChatGPT?

Before we answer those questions, let’s back up for a second. If you’re not aware of what ChatGPT is, here’s a definition:

“ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and has been fine-tuned using both supervised and reinforcement learning techniques.” ChatGPT is based on narrow AI.

Of course, AI isn’t new. We already use other tools that incorporate it – Gmail, for example, or Grammarly. It isn’t the idea of an AI-based tool that had my head spinning. It was how a tool like ChatGPT could be misused that kept me awake at night.

As a B2B PR consultant, content creator and writer with a Journalism degree, the art of writing – of putting your ideas down on paper – matters to me. It helps me think and get clarity on any number of subjects. As Seth Godin says, “Writing is a symptom of thinking.”

I also write for clients. I like to think the clients that hire me to write believe that I – as a human with experience – bring something valuable to the table. So when I start to read about “writers” who want to rip off clients by lazily having a bot do the work, it disturbs me on multiple levels. And – it should concern other writers and marketers – and the companies who hire them.

In fairness, I have seen others express this same concern. But just as many seem eager to jump on the ChatGPT bandwagon anyway.

To companies that are considering getting rid of real human writers in favor of this new toy, I say – stop. Think. Take a step back to consider the risks:

  1. Legal
  2. Reputational
  3. Search/SEO  
  4. Lack of originality

Let’s walk through each of these.

The Legal Risks of Using ChatGPT to Write for You

Because I’m not a legal expert, I found one to help address those concerns. Ruth Carter (they/them), Evil Genius at Geek Law Firm (GeekLawFirm.com), who frequently speaks about intellectual property and internet law, was happy to answer my questions.

I first asked Carter about plagiarism concerns: As ChatGPT pulls from existing content, what happens if a company or a media outlet publishes something it generates (without fact-checking)? 

“So, there are at least three issues here,” says Carter.

“Issue #1: AI software is often trained by scraping content from the internet. This means the output could be a copy of or a derivative work of someone else’s copyright-protected work. It’s a setup for you and the AI company to be sued for copyright infringement.

“I know of at least two copyright lawsuits against AI companies going on right now – they both have to do with images, but the same could occur with text.

“Issue #2: AI is bad at correctly stating facts. AI is better used for things like suggesting topics and ideas to explore as the scope of an answer but not actually answering questions correctly or revising statements that you’ve already fact-checked or are merely stating an opinion.

“Issue #3: If you’ve read the OpenAI terms (which no one has except me), it says that there’s no guarantee that it won’t produce the same output for two different users.

“Given the limitations of the human brain, if you’re asking ChatGPT for something, you should expect that someone else is putting in the same prompt and getting the same result. This means two sites could be putting out the same content and claiming it as original to them. This isn’t going to help you stand out in the eyes of your audience.”

Further, “ChatGPT doesn’t clearly share its sources when it answers a prompt. Is it using full sentences someone else wrote? Is it paraphrasing someone else’s idea, without attribution? These are all major issues to consider,” said David Ewalt, editor-in-chief of G/O Media’s tech site Gizmodo, when interviewed by Digiday.  

Next, I asked Carter: Are works created by ChatGPT protected by copyright laws? If you use copy generated by the bot, how do you properly credit sources? 

“Even though ChatGPT’s terms say they assign the rights in the output to the user, this is a false statement for one of two reasons:

1. There is no copyright in AI-generated content if a human wasn’t substantially involved.

2. If the input is someone else’s copyright-protected work or someone else’s copyright-protected work was used to create the output, the output is an unauthorized derivative work of that other person’s work, aka copyright infringement.

In terms of how to credit AI-generated content, check the terms of service for the AI software.

And never, ever, ever ask AI to create content ‘in the style of [Artist].’ If you want something written or created in that person’s style, hire them to create it for you.”

Then, I moved on to privacy: What happens if you feed your client’s information into the bot? 

“There is the possibility that employees will share proprietary, confidential, or trade secret information when having ‘conversations’ with ChatGPT,” says this Bloomberg Law piece.

“Why would you ever do that?” asks Carter.

Although ChatGPT can introduce efficiencies in workplace processes, it also presents legal risks for employers. JPMorgan Chase was so concerned about data privacy and ChatGPT that it restricted use of the AI chatbot by its 250,000+ staff members.

“If you are receiving proprietary, confidential, or trade secret information from a client, your contract with them probably includes a provision that says you’ll use ‘reasonable efforts’ or maybe even ‘best efforts’ to keep this information confidential and only share it with those who need it to perform their work tasks,” Carter says. “No one needs ChatGPT to perform their work tasks!”

“Do you know what you just did by sharing confidential client information with ChatGPT? You gave them permission to use it! ChatGPT can use your input and output.

“And if ChatGPT gets sued because of what you did, you have to indemnify them and reimburse their attorneys’ fees.”

Say you sign an NDA (non-disclosure agreement) with your client (as I am often asked to do). You’re then asked to write a press release for that client. So, you feed the client’s proprietary information into ChatGPT. It spits out a mediocre draft that you then need to spend time editing anyway – BUT – perhaps more importantly, you’ve now exposed your client to risk, because any information you give to ChatGPT can be used to train the model and could resurface for EVERYONE ELSE using the chatbot.

If that doesn’t send a chill down your spine, keep reading.

And finally, I asked: I’m sure I’m leaving out other important legal considerations here – please feel free to add anything else you feel communicators should consider when thinking about using ChatGPT. 

“Given that so many AI companies have been sued for copyright infringement, I’m curious to see if any of them will go out of business because they can’t pay their legal bills or because they’ll be forced to shut down or stop using any AI that was trained via copyright infringement, and what impact that will have on companies who fired their marketers in exchange for using AI,” Carter said.

“Likewise, I’m curious to see how many companies will be sued for copyright infringement (or at least get a demand letter) because an AI gave them content that violates a third party’s rights. Also consider your company’s reputational harm if you’re outed as inadvertently stealing from other creators.” (We’ll talk more about reputational harm in a minute).

“ChatGPT and other AI are tools. What matters most is how you use them. Think it all the way through before you decide how to use it in your business, including the worst-case scenarios. And do your homework to see if the AI company is accused of wrongdoing before you use it, and decide how your company will respond if you learn that it is harming other creators.

“And definitely verify all facts before you use them. Depending on what you say, you could be accused of anything from being wrong to committing defamation.”

The Reputational Risks of Using ChatGPT to Write for You

Let’s say you skirt the legal ramifications of using ChatGPT to create content you publish as your own. What happens if someone discovers that the content was simply a reworded version of previously published content that belongs to someone else?

What does that say about your reputation? You may not want to take that risk.

Beyond that, there’s the possibility of spreading disinformation. We’re not adequately prepared for the possible outcomes we may face if we use content generated by a tool that can’t discern the difference between a reliable source and an unreliable one.

With no guardrails in place, users are left to their own devices. We’ve seen how that’s worked out with other technological advances – look at the internet and social media.

“AI systems are nowhere near advanced enough to be able to tell the difference between a reliable source and an unreliable source. They’re just not there yet [and] aren’t going to be for a long time. So they’ll pull information from bad sources and repeat it as fact,” Ewalt said in Digiday.

Also, let’s not disregard that early adopters jumping on this trend include NFT enthusiasts. This article in The Verge paints the not-so-pretty picture.

Even Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has reservations about how the technology will be used, as reported by Business Insider.

“His thoughts on the worst-case scenario, though, were pretty bleak.

‘The bad case — and I think this is important to say — is, like, lights out for all of us. I’m more worried about an accidental misuse case in the short term.

So I think it’s impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening.’”

The SEO Risks of Using ChatGPT to Write for You

What does Google think of all this when it comes to helping or hurting SEO (search engine optimization)?

When Google came out with its helpful content update in Aug. 2022, it said, “We’re launching what we’re calling the ‘helpful content update’ that’s part of a broader effort to ensure people see more original, helpful content written by people, for people, in search results.”

Notice that it says it’s prioritizing more “original, helpful content” written by PEOPLE. Nowhere does it say – “We’re prioritizing content written by a BOT.”

“The biggest limitation is that ChatGPT is unreliable for generating accurate information,” says this Search Engine Journal article that covered what to know before using ChatGPT for SEO. “The reason it’s inaccurate is because the model is only predicting what words should come after the previous word in a sentence in a paragraph on a given topic. It’s not concerned with accuracy.”

Further, this is taken directly from Google Search’s guidance about AI-generated content:

“How will Google Search prevent poor quality AI content from taking over search results?

Poor quality content isn’t a new challenge for Google Search to deal with. We’ve been tackling poor quality content created both by humans and automation for years. We have existing systems to determine the helpfulness of content. Other systems work to elevate original news reporting. Our systems continue to be regularly improved.”

And this from Google’s Spam policies for Google web search: “Spammy automatically generated (or ‘auto-generated’) content is content that’s been generated programmatically without producing anything original or adding sufficient value; instead, it’s been generated for the primary purpose of manipulating search rankings and not helping users. Examples of spammy auto-generated content include:

  • Text translated by an automated tool without human review or curation before publishing
  • Text generated through automated processes without regard for quality or user experience
  • Text generated from scraping feeds or search results”

These all describe what ChatGPT (or the humans using it) does.

While some claim there will be no penalty for using content generated by ChatGPT, time will tell the real story.

The Originality Risks of Using ChatGPT to Write for You

Yet another consideration is the lack of originality that using the tool breeds. Perhaps the slogan should be: “ChatGPT – because we definitely want to sound like every other person out there.”

Those using the tool aren’t bringing any new ideas to the table – the chatbot only draws on existing content, and its training data ends in 2021. And this is a problem, because we know that sharing original ideas DOES help our content stand out.

Andy Crestodina, speaker, author of Content Chemistry: An Illustrated Handbook for Content Marketing, and founder of Orbit Media, is one marketer who espouses the idea that sharing your original opinion helps you stand out from ALL the rest of the brands out there saying the same thing.

“Original content is the highest quality,” says Crestodina. “There are literally millions of posts published per day. But most of them get no traction at all. Most of them add nothing new to the conversation. Originality is quality.”

Originality in marketing and in creating content is already a dying art. I recently heard someone suggest that there are no new ideas under the sun – so why not just use ChatGPT to regurgitate some content that’s already out there?

I don’t see it this way. But – we certainly don’t develop any new ideas when we decide to rely on tools to generate our content for us. Instead, if we’re thinking, talking to thought leaders, doing research – and actually WRITING – then we are absolutely going to generate new ideas. It’s the PROCESS that helps us get there.

On a positive note, those of us who still do our own writing (and research and interviewing) will stand out from the rest. With ChatGPT churning out mediocre content, that leaves plenty of room for content with a fresh take.

“If your B2B brand wants content that summarises other content without getting too specific, AI can do that in a clear, informative way – but it feels a long way from writing content that cuts through,” says David McGuire, creative director of Radix Communications. “And, now that beige, identikit content has just effectively become free to produce, it could be that standout content is about to be more important than ever.”

Some clients may be OK with bot-written content – but many others expect a higher standard, one that won’t expose them to legal risk.

Thinking of Using ChatGPT to Write for You? Exercise Caution First

Before you, too, get caught up in the ChatGPT hype, consider these points.

If helping our businesses and clients succeed is the priority, we should want to do what’s best for them. If creating more crappy content would help them, then we might be able to have a discussion about using a bot to write for them. But – the facts point clearly to the conclusion that this is NOT what they need. If it could:

  • Put them at risk legally
  • Jeopardize their reputation
  • Hurt the searchability of their content
  • Curtail their ability to be seen as a thought leader because they’re sharing no unique opinions

Then what are we even doing?

My advice to those hiring freelancers, consultants or agencies to do any writing, marketing or PR-related work is to ask whether they’re using AI-based tools in their process – and if so, how they’re using them.

Be sure they’re not feeding your proprietary information into ChatGPT, as your inputs can be used to train the model and could resurface for other users of the chatbot, exposing you to risk.

Also, look into a plagiarism checker – Microsoft Word has one, and so does Grammarly [Disclaimer: Author is a Grammarly ambassador]. Several reporters also recommended Copyscape.

If you’re a company using ChatGPT, be sure to fully understand the ramifications legally and reputationally. Consult your legal team so you can be properly prepared. If you jump on this trend without any guidelines in place, it may be a costly mistake.

And I’d advise my fellow freelancers and consultants (and agencies) to be sure to let clients know if, in fact, you’re still doing the writing yourselves. It can help you stand out from the crowd and serve as a differentiator. Because – and I know this is true – there are digital marketing freelancers and agencies out there who are utilizing this technology to create content. Buyer beware.

None of this is to say we shouldn’t understand the role that AI can play in our work. It’s not going away. Depending on your role, AI-based tools can certainly be helpful.

To understand which tools truly leverage AI (versus just using it as a buzzword to bamboozle potential customers), follow Christopher Penn of Trust Insights (@cspenn). To understand the legalities, follow Ruth B. Carter (@rbcarter). Stephen Waddington (@wadds) is also doing important work to help ensure the ethical use of AI in PR. To understand search, follow Jennifer Slegg, who tweeted this tip for spotting sites that publish ChatGPT-generated content without any editing or fact-checking – it works because “Regenerate response” is a button label in the ChatGPT interface that careless copy-and-pasting carries into the published text:

Jennifer Slegg @jenstar Mar 3

“I shared this tip at Pubcon this week. Want to see who is using ChatGPT for content & doing no fact checking? Do a search for:

“Regenerate response” -chatgpt <keyword>

For added fun, take a sentence from any of those pages and search, then see how many others are using it too.”

ChatGPT Was NOT Used In the Writing of This Post

And, no, this post was NOT written by a bot.

I’d love to hear your thoughts on this debate.

Author

Michelle Garrett is a PR consultant, writer, and speaker who helps B2B businesses create content, earn media coverage, and position themselves as thought leaders in their industry. Michelle’s articles have been featured in Entrepreneur, Muck Rack and Ragan’s PR Daily, among others. She’s the founder and host of #FreelanceChat on Twitter, a co-host of #PRLunchHour on Twitter Spaces, and a frequent speaker on public relations. Michelle was named among the top ten most influential PR professionals in 2021 and 2022.
