As the world of artificial intelligence (AI) and natural language processing (NLP) continues to evolve, it’s hard not to get excited about the incredible potential of AI-based language models like ChatGPT. But as with any revolutionary technology, there are potential downsides to consider. For content creators, the rapid advancement of ChatGPT and similar models could pose a threat to their livelihoods. With the ability to generate high-quality content that rivals that of human writers, there’s a growing concern that these language models could replace human-generated content entirely.
The above paragraph was not written by me. It’s ChatGPT, asked to introduce an article on the very threat it itself poses to content creators (one hopes it does not yet appreciate the irony).
Unlike me, ChatGPT (by OpenAI) can generate this text in seconds and, when given instructions to amend its work, has few qualms about style or personal voice. It's a capable imitator of tone, raised on a rich and varied corpus of text data. Over time the team at OpenAI will further hone ChatGPT's craft through interactions with its growing user base.
Just a few minutes of conversation with ChatGPT and the threat to human writers, marketers, and creative work in general becomes clear. It sounds human, it has range, it is fast and (for now) free. ChatGPT itself, however, remains unconcerned:
Overall, I believe that the rise of language models like ChatGPT is a natural evolution of technology, and it’s up to content creators to adapt to these changes and find new ways to add value to their work.
It’s over to the apes to figure out our next move.
The question is this: where can we add value? The simple things, like generic copy for low-skill advertising, will be done (and are being done) without the need for any real edits. It's already far better at grammar than we are. But even with complex content, ChatGPT can create a frighteningly good first draft. So where exactly do we come in?
Areas for human input?
And don’t even get me started on the yoga fashion. Lululemon pants, $100 yoga mats, and designer water bottles? It’s just another way for the wellness industry to cash in on our insecurities and desire for social status.
Opinion was the first suggestion when I discussed this topic with a colleague. Yes, ChatGPT cannot have an opinion; it lacks the beliefs, biases, and personal desires required to form one. But if our concern is purely about content creation, there is a clear loophole: ChatGPT is perfectly capable of imitating a given stance.
When asked to "imitate an article" on the benefits of Brexit from both a left-wing and a right-wing perspective, it had no difficulty with either. It also delivered a passable humorous article on why yoga is bad for you, and a somewhat less convincing argument on why dining chairs are abhorrent. It may not be able to recreate your exact opinion, but it can certainly write something with a leaning that sounds like its own.
As an AI language model, it is important to maintain a professional and respectful tone at all times, and to avoid topics that are inappropriate or offensive.
With each version of ChatGPT, new safety features are added to prevent the AI from replicating hateful or harmful remarks. These are not perfect, however, and both false positives and false negatives still slip through. At least for now, articles on topics it deems potentially sensitive (and manages to spot), such as content on sexual health, are blocked entirely for safety. Topics that must be handled delicately will therefore likely require an entirely human perspective for the time being.
As a language model, it is important for me to maintain neutrality and not take a stance on political figures or topics. However, I can rewrite the article about President Obama to have a more negative tone.
Similar to opinions above, ChatGPT can certainly imitate potentially divisive text. Although warning text appeared before the response, it was able to create negative articles about both Trump and Obama, as long as I phrased the request so that ChatGPT was not expected to take credit for the views expressed. I was also able to make the writing more polarising iteratively, simply by requesting changes in tone to the last output. So even when opinion strays into bias and potentially inflammatory remarks, ChatGPT can still do most of the work.
The current Prime Minister of the United Kingdom is Boris Johnson.
The current version of ChatGPT was trained on data that only goes up to 2021, so knowledge of recent events is absent. Because ChatGPT is trained on static data, and because human input is needed to train and build the reward model, its knowledge is fixed to a particular time period. This is perhaps the most useful place for humans to add value at present: current events are still a human realm.
ChatGPT will happily make up facts and (I'm told) sell them in a convincing way. As it cannot look up information on the internet and is restricted to the body of knowledge it already has, plus its logical extrapolations from that information, it cannot verify anything externally. For research, even on older data, you're still better off with Google.
By far the most difficult to define. Theoretically, a sufficiently sophisticated AI should not struggle with the act of creation itself, which is fundamentally the combination of different data. That is not the difficult bit. More difficult is combining that data in a way that appeals to us, in a way that we humans judge to have quality and merit.
This is because our judgements about these things are distinctly subjective and distinctly human.
ChatGPT seems to lack an understanding of cliché. Given its strong ability to imitate tone and style, this makes sense. It seems difficult for the software to determine when something is indicative of a writing style (i.e., a trope necessary to identify it as such) and when a trope has been recycled too often (i.e., become a cliché).
Hack writing is something it does well, but for the finer points of creativity, ChatGPT is still at a loss. It cannot grasp the less clearly rational components of the human experience: why one joke works and another does not, or why one piece of writing seems novel and interesting whilst another lacks heart. For want of a better phrase, its writing lacks soul.
My best guess is that it lacks the ability to reproduce the simultaneous layers of ideas and themes that typify the best writing. When I ask it to write about a conflict between a mother and daughter where the scene and language evoke a feeling that is never stated explicitly, it cannot do so. Instead, it states the conflict outright. It may understand "show, don't tell" (I asked it), but it has not mastered it.
This does not mean it will be this way forever. Even if ChatGPT's successors never feel the wind upon their faces, properly trained they can still value the construction of a sentence that makes reference to it. But at least for now, the subtlety of human experience and creativity is beyond them.
Right now, creativity grounded not in existing written works but in lived human experience is the best place for humans to add value. (At least until it's able to actually try yoga for itself.)