Recently, a number of AI experts published an open letter advocating a pause in the development of new AI agents. The main reason is the very rapid development of chatbots based on generative networks, e.g., chatGPT and Bard, with a large number of competitors still in the starting blocks. These systems are now also publicly available at a fairly reasonable cost. The essence of the letter is that the current development is too fast for society (and humanity) to cope with. This is of course an important statement, although we already have social media, which, when used in the wrong way, has a serious impact on people in general (such as promoting absurd norms of beauty, or dangerous medical advice spreading in various groups).
The generative AI systems under discussion in the letter will undoubtedly have an impact on society, and we have definitely been taken by surprise in many areas already. Discussions are already underway on how to prevent students from cheating on their examinations by using chatGPT (see my earlier post about this here). The problem in that case is not the cheating, but that we teach in a way that makes it possible to cheat with these new tools. Prohibiting their use is definitely not the right way to go.
The same holds for the dangers pointed to by the signers of the open letter mentioned above. A simple voluntary pause in development will not solve the problem at all. The systems are already here and already being used. We will need other solutions to these dangers, and most important of all, we will need to study what these dangers really are. From my perspective, the dangers have nothing to do with the singularity, or with AI taking over the world, as some researchers claim. No, I can see at least two types of danger: one immediate, and one that may appear within a few years or a decade.
Fact or fiction?
The generative AI systems are based on an advanced (basically statistical) analysis of large amounts of data, either texts (as in chatbots) or images (as in AI art generators such as DALL-E or Midjourney). The output from the systems is generated with this data as its primary (or only) source. This means that the output will not be anything essentially new. Even more problematic, the models at the core of these systems are completely non-transparent. Even if it is possible to detect some patterns in the input and output sequences, it is quite safe to say that no human will understand the models themselves.
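To make the "basically statistical" part concrete, here is a deliberately crude sketch in Python of a bigram (Markov-chain) text generator. This is of course nothing like the transformer networks behind chatGPT; it is only a toy illustration of the underlying principle, namely that everything such a model can ever produce is a recombination of patterns found in its training text, and the model itself is just a record of those patterns, with no notion of truth or meaning.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, which words follow it in the training text."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=10):
    """Produce text by repeatedly sampling a follower of the current word."""
    word = start
    output = [word]
    for _ in range(length):
        if word not in followers:      # dead end: no known continuation
            break
        word = random.choice(followers[word])
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat sat"
```

Scale the training text up by many orders of magnitude and replace the simple table of followers with billions of learned parameters, and the transparency problem becomes apparent: nobody can read such a model the way one can read this little table.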
Furthermore, the actual text collections (or image collections, but I will leave those systems aside for a coming post) on which the systems are based are not available to the public, which causes the first problem. We, as users, do not know what source a certain detail of the result is based on, whether it is a scientific text or a purely fictitious description in a sci-fi novel. Any text generated by a chatbot needs to be thoroughly scanned with a critical mind, in order not to accept things that are inaccurate (or even outright wrong). Even more problematic, these errors are not necessarily the ones that are simple to detect. In the words of chatGPT itself:
GPT distinguishes between real and fictitious facts by relying on the patterns and context it has learned during its training. It uses the knowledge it has acquired from the training data to infer whether a statement is likely to be factual or fictional. However, the model’s ability to differentiate between real and fictitious facts is not perfect and depends on the quality and comprehensiveness of the training data.
chatGPT 3.5
And we know very little about the training data. The solution to this problem is most often phrased as "wait for the next generation". The problem here is that the next generation of models will not be more transparent; rather the opposite.
So, how is the ordinary user, who is not an expert in a field, supposed to know whether the answers they get are correct or incorrect? For example, I had chatGPT produce two different texts: one giving the arguments that would prove God’s existence, and one giving the arguments that would prove that God does not exist. Both versions were very much to the point, but what should we make of that? Today, when many topics, such as the climate crisis or the necessity of vaccinations, are the subject of heated debate, this “objectivity” could be very dangerous if it is not approached with a fair amount of critical thinking.
Recursion into absurdity – or old stuff in new containers?
As mentioned above, the models are based on large amounts of text, so far mostly produced by humans. However, today there is a large pool of productivity enhancers that provide AI support for producing everything from summaries to complete articles or book chapters. It is quite reasonable to assume that more and more people will start using these services for their own private creations, as well as, hopefully with some caution given the first problem above, in the professional sphere. We can assume that once there is a tool, people will start using it.
Now, as more and more generated text appears on the public scene, it will undoubtedly mix with the human-created text mass. Since the training material for the chatbots needs to be updated regularly in order to keep up with developments in the world, the generated texts will also slowly but steadily make their way into that material and in the long run be recycled as new texts adding to the information content. The knowledge produced by the chatbots will be based more and more on generated texts, and my fear is that this will be a rapidly accelerating phenomenon that may greatly affect future results. In the long run, we may not know whether a certain piece of knowledge was created by humans or by chatbots generating new knowledge from the things we already know.
This recursive loop, traversing the human knowledge base mixed with the output of the generative AI systems, may not turn out as badly as feared, but it might also lead to a large amount of absurdity being presented as factually correct knowledge (the toy simulation below illustrates the mechanism). In the best case, most of the generated texts of the future will simply consist of old content repackaged in new containers.
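To give a feeling for why this recursion worries me, here is a toy simulation in Python. It is my own illustrative sketch, not the training pipeline of any real system: the "model" is simply a normal distribution fitted to its training data, and each new generation is trained only on data sampled from the previous generation's model. Sampling error compounds at every step, and nothing pulls the model back towards the original human-made data.

```python
import random
import statistics

# Toy illustration of the recursive loop: a "model" is just a normal
# distribution fitted to its training data. Each generation is trained
# only on data sampled from the previous generation's model.
random.seed(1)

human_data = [random.gauss(0.0, 1.0) for _ in range(200)]  # stand-in for human-written text
mu, sigma = statistics.mean(human_data), statistics.stdev(human_data)
print(f"human data:    mean = {mu:+.3f}, spread = {sigma:.3f}")

for generation in range(1, 21):
    # The next corpus is produced by the current model instead of by humans.
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"generation {generation:2d}: mean = {mu:+.3f}, spread = {sigma:.3f}")

# The fitted statistics drift further and further from the original human
# data, because every generation only "knows" what the previous generation
# happened to generate; sampling errors accumulate and are never corrected
# against the real world.
```

Real systems are trained on a mixture of human and generated material, and their models are vastly more complex, so the drift will not look exactly like this; but the basic point stands: once generated output is fed back in as training material, errors can be amplified rather than corrected.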
Conclusions
So, what are my conclusions from this? Should we freeze the development of these systems, as proposed in the open letter? We could, but I do not think that this would solve any problems. We have opened Pandora’s box, and the genie is already out of the bottle. From my perspective, the issue is rather one of learning how to use this technology so that it works for us in the best way. Prohibitions and legal barriers have never proved effective at stopping people from doing things. The solution is instead the promotion of knowledge, not least among the main sources of education, by which I do not just mean schools and universities, but journalists and writers in general, as well as the people who will be using these systems.
“Fake news” and “fake science” were already big problems with social media, but as long as people regard information from external sources (such as social media, Google searches, Facebook or Reddit groups, and now chatbots) as plain truth and swallow it uncritically, we can pause the development of GPTs as much as we like and the problem will not go away. We started down this path already with the rapid development of social media, and it will not disappear just because we cover our eyes.
So, I urge you as a reader to read this article with a critical mind, and don’t just believe everything that is written here. You know, I just might be completely wrong about this myself.