16 November 2022

Possible Effects of AI Writing Systems on the Quality of Online Content

Estimated reading time: 5 minutes

In a previous article I described the problems and progress of AI reading comprehension systems. In the last few years, AI writing systems have also improved significantly thanks to the emergence of an AI neural network called GPT-3. Barely two years have passed since GPT-3 was created, yet the number of use cases keeps growing. It has paved a path for numerous business start-ups, including story writing, blog writing, chatbots, news report writing and even quiz generation, and the list continues to grow as developers become aware of its potential. In this article, I briefly describe the evolution of AI writing systems and GPT-3, and show that the Internet could soon become awash with GPT-3-generated content.

Story writing using AI

Experiments that use AI for story writing have been tried for many years. For example, Sharples and Pérez y Pérez [1] describe automatic novel-writing programs dating back to the 1960s; most were collaborations between human and machine. However, it was a programmer named Scott French who first claimed to have created an entire AI-written novel. The novel, called “Just This Once”, was written to emulate the style of the 1960s author Jacqueline Susann. French did this using symbolic AI, in which the rules are explicitly written by hand. For example, his program used “if… then” rules to determine how a character would react to an event, and further rules to capture how the author would be likely to describe an action in words. The project did not end well for French, who was later sued by the estate of Jacqueline Susann for copying her style. The deeper problem with rule-based AI was that it was very time-consuming: the book took French eight years to complete. Other automatic writing projects encountered similar problems. Symbolic AI lacked the learning capabilities of neural networks, which meant that laborious human programming was the only way such projects could be completed.

Just This Once is a 1993 romance novel written in the style of Jacqueline Susann by a Macintosh IIcx computer named “Hal” in collaboration with its programmer, Scott French. Credit: Wikimedia

However, using machine-learning neural networks, AI can now write novels and many other types of documents, sometimes in a matter of minutes. GPT-3 has revolutionized AI writing and given rise to many start-ups covering a diverse range of applications. GPT-3 is an acronym for Generative Pre-trained Transformer, version 3. It was developed by OpenAI, an organization co-founded by Elon Musk. In many cases, its output is such that readers have difficulty distinguishing it from human-created fiction.

How GPT-3 works

GPT-3 is a machine-learning language model, which means that the user can input an incomplete sequence of words as a prompt and generate text as output from it. Hence, it can be thought of as an autocomplete language model, rather like an email system that interrupts the user to suggest words that complete the sentence and reduce the writing effort. GPT-3 is a neural network with about 175 billion parameters (the weights of its connections), and it was trained on a massive data set of text collected from the Internet, from websites such as Reddit and Wikipedia: far more text than any human will see in a lifetime. Training a generative model works by inputting part of a sentence or text passage and asking the network to predict the word that follows. The output is then compared with the correct word. As with all neural networks, the weights are adjusted according to how well the prediction matches the correct word, and as more examples are input, the weights of the connections are adjusted so that the network predicts the next word ever more closely. Once trained, a user can input a sequence of words and generate a sequence of words as output. The generated text could thus be a sentence, a paragraph, or even more, such as a short story. The user can also view a score alongside the output showing the likelihood of the words being used. GPT-3 offers flexibility beyond simply generating text: it can also perform many other language tasks, such as evaluating textual content, answering questions on every conceivable topic, summarizing text, and even translating languages.

Once trained, a user can input a sequence of words and generate a sequence of words as output.
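To make the next-word-prediction idea concrete, here is a minimal sketch using GPT-2, an openly downloadable predecessor of GPT-3, via the Hugging Face transformers library (GPT-3 itself is too large to download and is only reachable through OpenAI’s API). The prompt text and the choice of the small “gpt2” model are illustrative assumptions, not anything prescribed by GPT-3 itself.

```python
# Minimal sketch of autoregressive next-word prediction with GPT-2,
# a small, openly downloadable relative of GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a time, in a small village by the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The scores at the last position rank every vocabulary item as a candidate
# next word; softmax turns the scores into probabilities, i.e. the
# "likelihood of words being used" mentioned above.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.3f}")
```

Sampling one of these candidates, appending it to the prompt, and repeating is all it takes to grow a sentence into a paragraph or a short story.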

According to Tingiris [2], GPT-3 has been trained on an estimated 57 times the number of words that an average human will write, read, and speak during a lifetime. The model itself is also so large that it cannot be downloaded onto a laptop or an ordinary business computer. Instead, OpenAI makes it available through an Application Programming Interface, or API, which means that anyone with access to the API can use GPT-3. Its uses are many, and include novel writing, blogs, poetry, business reports, and even joke generation.
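As an illustration, here is a rough sketch of what calling GPT-3 through the API looked like at the time of writing, using OpenAI’s Python package. The model name, prompt, and parameter values are assumptions chosen for the example; a real application would need an API key issued by OpenAI.

```python
# Sketch of generating text with GPT-3 via OpenAI's API (openai package,
# 2022-era Completion endpoint). Assumes an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code the key

response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 models exposed by the API
    prompt="Write the opening paragraph of a detective story set in Cardiff.",
    max_tokens=120,            # upper bound on the length of the reply
    temperature=0.7,           # higher values give more varied output
)

print(response.choices[0].text.strip())
```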

Limitations and concerns about GPT-3

For all its strengths, GPT-3 does have some limitations. It has a prompting window of about 1,000 words. This is enough for short stories, but it means that a medium-length story may tend to meander as the amount of content grows. The problem can be largely overcome if the user gives follow-up prompts or the system follows a template. GPT-3 is also quite slow in operation because of its size, and, like other neural networks, it is unable to explain how it arrives at its output. There is also the possibility of bias arising from the data used to train the network. And there are ethical concerns about GPT-3, particularly the generation of fake news stories and the possibility of creating ethnically sensitive content.
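One common workaround for the limited window, hinted at above, is to generate a long piece in chunks, feeding only the tail of the text so far back in as the context for each follow-up prompt. The sketch below illustrates the idea; the character budget, model name, and loop count are illustrative assumptions.

```python
# Sketch of extending a story beyond the prompt window with follow-up
# prompts: only the most recent text is sent back as context each time.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

WINDOW_CHARS = 3000  # rough stand-in for the model's prompt window

def continue_story(story: str) -> str:
    context = story[-WINDOW_CHARS:]  # keep only the tail as context
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=context,
        max_tokens=200,
    )
    return response.choices[0].text

story = "Chapter 1. The rain had not stopped for three days when "
for _ in range(5):  # each pass is one follow-up prompt
    story += continue_story(story)

print(story)
```

Because the model never sees the whole story at once, the text can still drift; a template or outline passed in with each prompt helps keep it on track.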

The consequences for the quality of Internet content

Many researchers predict that the advent of GPT-3 will change a great deal of Internet content for the worse. Some believe that the Internet will become awash with mediocre material, because GPT-3 can deliver copious amounts of content at a fraction of the cost of human writers. A massive splurge of mediocre content could erode trust, since overall quality could be diluted by fake news or false content. The underlying reason is that GPT-3 cannot yet reflect on what it has written: while it may be a good wordsmith, it does not understand its own output. Human content moderation is one way of addressing this problem, but it would become impossible if GPT-3 delivers the huge amounts of extra content that are being predicted.

Human content moderation is one way of solving the problem of GPT-3 not understanding what it has written.

Conclusions

GPT-3 has, without a doubt, ushered in a new wave of AI language applications that will improve our communication with computers. It is also likely to trigger a move towards the automation of Internet documents, such as football match reports. The future of AI language communication has taken a new trajectory as a consequence of GPT-3. But notwithstanding its phenomenal learning capabilities, it still lacks a semantic understanding of language. Despite appearances, it does not yet deliver human-level long-story writing, because the neural network architecture of GPT-3 provides learning capabilities without any comprehension of meaning. In other words, the chasm of understanding between humans and AI has yet to be bridged. This means GPT-3 cannot reflect on what it has written, even though its output may read as competently as a human author’s. Since reflection is an essential part of writing, the current version works well as a tool to help writers generate story content, but it still needs human collaboration. We will be hearing much more about GPT-3 in the future.

Keith Darlington

References

[1] Story Machines: How Computers Have Become Creative Writers, by Mike Sharples and Rafael Pérez y Pérez.

[2] Exploring GPT-3, by Steve Tingiris.
