Tag: Artificial Intelligence

Insights from the FoU Program Conference: Exploring the Impact of Robots, Automation and AI on Work Environments

Last week, we had the privilege of attending the Research and Innovation Program Conference organized by AFA Försäkring. The focus was on understanding how automation, robotics, and artificial intelligence (AI) affect work environments. It was an insightful event where we got to learn from various projects, including our own TARA and AROA initiatives. The blog post photo captures a snapshot from our field visits during the TARA project.

Speakers such as Erik Billing from the University of Skövde, Kristina Palm from Karolinska Institutet, and Eva Lindell from Mälardalen University shared their research findings and insights on how automation is changing the way we work. They discussed topics like how automation impacts job roles, the challenges of integrating new technologies into workplaces, and the importance of considering human well-being in the midst of technological advancements.

The conference emphasized the need to bridge the gap between research and practice. It highlighted the importance of finding practical solutions that benefit both workers and organizations. There was also discussion about the future of work and how we can prepare for the changes brought about by automation and AI.

Overall, the conference provided a valuable opportunity to learn, share ideas, and collaborate with others in the field. We left feeling inspired and motivated to continue our research and contribute to the ongoing conversation about the future of work in an increasingly automated world.

A Supportive Tool Nobody Talks About?

Information technology is now developing at an almost unbelievable pace. This is not always visible in the shape of computers; it also appears as embedded technology in, for example, cars, home assistants and phones. Much of this development is driven either by pure curiosity, or by some perceived (sometimes imagined) user need. Among the former we may count the first versions of the chatbots, where the initial research mostly explored how very large language models could be created. Once people became aware of the possibilities, a demand arose for services built on the results, leading to the new tools that are now being developed.

Among the latter we have the development in the car industry as one example. Safety has long been a key issue for both drivers and car manufacturers. Most, if not all, new cars today are equipped with anti-lock brakes, traction control, and even sensors that help the driver avoid drifting across lane borders involuntarily. This last feature is in fact more complex than it might first appear. Any course correction must be made in such a way that the driver feels that he or she is still in control of the car. The guiding systems have to interact seamlessly with the driver.

But there is another ongoing development of this latter kind, which we can already see will have larger consequences for society and for people in general. It is invariably announced as being of primary importance for the general public (at least for people with some financial means). The product resides at the far end of the ongoing development of car safety systems: I am talking about self-driving cars. The current attempts are still not successful enough for these cars to be let completely loose in normal traffic.

There are, however, some positive examples already, such as autonomous taxis in Dubai, and several car-driving systems almost manage to behave in a safe manner. This is still not enough, as there have been a number of accidents with cars running in self-driving mode. But even when the cars become safe enough, one of the main remaining problems is the issue of responsibility. Who is responsible in the event of an accident? Currently, the driver is in most cases still responsible, since the law says that you cannot cease being aware of what happens in your surroundings. But we are rapidly moving towards a future where self-driving cars may finally be a reality.

Why do we develop self-driving cars?

Enter the question: “Why?”. As a spoilsport, I actually have to ask: why do we develop self-driving cars? In the beginning there was, of course, the curiosity aspect. There were competitions where cars were set to navigate long distances without human intervention. But now it seems to have become more of a competitive factor between car manufacturers. Who will be the first car producer to cater for “the lazy driver who does not want to drive”?

It is, in fact, quite seldom that we hear any longer discussions about the target users of self-driving cars. For whom are they being developed? For the rich, lazy driver? If so, that is in my opinion a very weak motivation. Everybody will of course benefit from a safer driving environment, and when (if) there comes a time when there are only self-driving cars in the streets, it might be good for everybody, including cyclists and pedestrians. Another motivation that has been mentioned is that people who are unable to get a driver’s license would now be able to use a car.

But there is one group (or rather a number of groups) of people who would really benefit from this development as it progresses further. Who are these people? Well, it is not very difficult to see that among those who would benefit the most from self-driving cars are people with severe impairments, not least severe visual impairments. Today, blind people (among many others) are completely dependent on other people for their transport. In a self-driving car, they could instead be free to go anywhere, anytime, just like everyone else today (provided you have a valid driver’s license). This is in one sense the definition of freedom, as an extension of independence.

Despite this, we never hear this as an argument for the development of this fantastic supportive tool (which, in fact, it could be). It is, as mentioned above, mostly presented as an interesting feature for techno-nerdy, rich and lazy drivers who do not want to make the effort of driving themselves. Imagine what would happen if we could motivate this research from the perspective of supportive tools. Apart from raising the hopes of millions of people who cannot drive, there would also be millions of potential, eager new buyers in the category of blind and severely visually impaired people alone. Add to this older people who have to stop driving due to age-related problems, but who could then use the car much longer, to great personal benefit.

The self-driving car is indeed a very important supportive tool, and therefore I strongly support the current development!

This is, however, just one case among many of how we can motivate research also as the development of supportive tools. We just have to see the potential in the research. Artificial Intelligence methods will allow us to “see” things without the help of our eyes, make prostheses that move at will, and support people with dyslexia in reading and writing texts.

All it takes is a little bit of thinking outside the box, some extra creativity, and, of course, good knowledge about impairments and about the current rapid developments within (information) technology.

HTO Coverage: AI for Humanity and Society 2023 and human values

In mid-November in Malmö, WASP-HS held its annual conference on how AI affects our lives as it becomes more and more entwined in our society. The conference consisted of three keynotes and panels on the topics of criticality, norms, and interdisciplinarity. This blog post recaps the conference based on my takeaways regarding how AI affects us and our lives. A single post is too short to capture everything that was said during the conference, but that was never the intention anyway. If you don’t want to read through this whole thing, my main takeaway was that we should not rely on the past, through statistics and data with their biases, to solve the problems of AI. Instead, when facing future technology, we should consider what human values we want to protect and how that technology can be designed to support and empower these values.

The first keynote, on criticality, was given by Shannon Vallor. It discussed metaphors for AI; she argued for the metaphor of the mirror instead of the myth that media might portray AI as. We know a lot about how AI works; it is not a mystery. AI is technology that reflects our values and what we put into it. When we look at it and see it as humane, it is because we are looking for our own reflection. We are looking for ourselves to be embodied in it, and anything it does is built on the distortion of our data. Data that is biased and flawed, mirroring our systematic problems in society and its lack of representation. While this might give off an image of intelligence or empathy, that is just what it is: an image. There is no intelligence or empathy there, only the prediction of what would appear empathetic or intelligent. Vallor likened us to Narcissus, caught in the reflection of ourselves that we so flawedly built into the machine. Any algorithm or machine learning model will be more biased than the data it is built on, as it draws towards the norms of the biases in the data. We should sort out what our human morals are and take biases into account in any data we use. She is apparently releasing a book on the topic of the AI metaphor, and I am at least curious to read it after hearing her keynote. Two of the points Vallor ended on were that people on the internet usually have a lot to say about AI while knowing very little, and that we need new educations which teach what is human-centered so that it does not get lost among the tech that is pushed.

The panel on criticality was held between Airi Lampinen, Amanda Lagerkvist, and Michael Strange. Some of the points that were raised: we shouldn’t rush technology; the reductionist view held by much of the industry will miss the larger societal problems; novelty is a risk; we should worry about what boxes we are put into; and what human values do we want to preserve from technology? Creating new technology just because we can is not the real reason; it is always done for someone. Who would we rather it was for? Society and humanity, perhaps? The panelists argued that without interventions it would be under the control of market forces, and that stupid choices are made because they looked good at the time.

The second keynote, on norms and values, was by Sofia Ranchordas, who discussed the clash between administrative law, which is about protecting individuals from the state, and digitalization and automation, which build on statistics that hide individuals categorised into data and groups, and the need to rehumanize its regulation. Digitalization is designed for the tech-savvy man, not the average citizen. But it is not even the average citizen who needs these functions of society the most; it is the extremes and outliers, and they are even further from being tech-savvy men. We need to account for these extremes through human discretion, empathy, vulnerability, and forgiveness. Decision-making systems can be fallible, but most people won’t have the insight to see it. She ended by stating that we need to make sure that technology doesn’t increase the asymmetries of society.

The panel that followed consisted of Martin Berg, Katja De Vries, Katie Winkle, and Martin Ebers. The participants’ presentations raised topics such as why people think AI is sentient and fall into the trap of anthropomorphism; that statistics cannot solve AI, as they are built on different epistemologies, and those who push it want algorithmic bias, as they are the winners of the digital market; the practical implications of the limits of robots available on the market for use in research; and the issues in how we assess risk in regulation. The following discussion included that law is more important than ethical guidelines for protecting basic rights, and the tension that it is both too early to regulate and yet we don’t want technology to cause actual problems before we can regulate it. A big issue is also the question of whether we are regulating the rule-based systems we have today or the technological future of AI. It is also important to remember that not all research and implementation of AI is problematic, as a lot of research into robotics and automation is for a better future.

The final keynote was by Sarah Cook and concerned the interdisciplinary junction between AI and art. It brought up many examples of projects in this intersection, such as Ben Hamm’s Catflap, ImageNet Roulette, Ian Cheng’s Bad Corgi, and Robots in Distress, to highlight a few. One of the main points in the keynote was shown through Matthew Biederman’s A Generative Adversarial Network: generative AI is ever reliant on human input data, as it implodes when endlessly fed its own data.

The final panel was between Irina Shklovski, Barry Brown, Anne Kaun, and Kivanç Tatar. The discussion raised topics such as questioning the need for disciplinarity and how one deals with conflicting epistemologies, the failures of interdisciplinarity as different disciplines rarely acknowledge each other, how to handle different expectations of contributions and methods when different fields meet, and how interdisciplinary work often ends up being social-scientific. A lot of work, or most work, in HCI tends to end up being interdisciplinary in some regard. As an example from the author, Uppsala University has two different HCI research groups, one at the faculty of natural sciences and one at the faculty of social sciences, while neither fits perfectly in either. The final discussion was on the complexities of dealing with interdisciplinarity as a PhD student. It was interesting and thought-provoking, as a PhD student in an interdisciplinary field, to hear the panelists and audience members bring up their personal experiences of such problems. I might get back to the topic in a couple of years, when my studies draw to a close, to pay the favour forward and tell others about my experiences so that they can learn from them as well.

Overall, it was an interesting conference highlighting the value of not forgetting what we value in humanity and what human values we want to protect in today’s digital and automated transformation.

It’s AI, but Maybe Not What You Think!

Note: This is a long article, written from a very personal take on Artificial Intelligence.

The current hype word seems to be “Artificial Intelligence”, or in its short form “AI”. If one is to believe what people say, AI is now everywhere, threatening everyone from artists to industrial workers. There is even the (in)famous letter, written by some “experts” in the field, calling for an immediate halt to the development of new and better AI systems. But nothing really happened after that, and now the DANGER is apparently hovering over us all. Or is it?

Hint: Yes, it is, but also not in the way we might think!

The term “Artificial Intelligence” has recently been so watered down in media and advertisements that the words hardly mean anything anymore. Despite this, the common ideas seem to be that we should either 1) be very, very afraid, or 2) hurriedly adapt to the new technology (AI) as fast as possible. But why should we be afraid at all, and of what? When asked, people often reply that AI will replace everybody at work, or that evil AI will take over anything from governments to the world as a whole. The latter is of course also a common theme in science fiction books and movies. Still, neither of these is really a good reason to fear the current development. But in order to understand why, we need to go back to the historical roots of Artificial Intelligence.

What do we Mean by AI, then?

Artificial Intelligence started as a discipline in 1956, during a workshop at Dartmouth College, USA. As the discipline developed, a distinction formed between two directions: strong and weak AI. Strong AI aims at replicating a human type of intelligence, whereas weak AI aims at developing computational methods or algorithms that make use of ideas gained from human intelligence (often for specific areas of computation). Neural networks, for example, are representative of the weak AI direction. Today, strong AI is also referred to as AGI (Artificial General Intelligence), meaning a non-specialized artificial agent.

But in the 1950s and 1960s computers were neither as fast nor as large as current computers, which at the time imposed severe limitations on what you were able to do within the field. A large amount of work was theoretical, but there were some interesting implementations, such as Eliza, SHRDLU and not least Conceptual Dependencies (I have chosen these three examples carefully, since each of them exhibits some interesting properties with respect to AI, which I will explain below and then follow up on):

Conceptual Dependencies: Conceptual Dependencies is an example of a very successful implementation of an artificial system with a very interesting take on knowledge representation. The system was written in the programming language LISP, and attempted to extract the essential knowledge hidden in texts. The result was a conceptual dependency network, which could then be used successfully to summarize news articles on certain topics (the examples were taken from natural disasters and airplane hijackings). There were also attempts to make the system produce small (children’s) stories. All in all, the problem was that the computers were too small for it to be practically useful.

SHRDLU: SHRDLU was a virtual system in which a small robot manipulated geometric shapes in a 3D model world. It was able to reason about the different possible or impossible moves, for example that it is not possible to put a cube on top of a pyramid, but OK to do the reverse. The problem with SHRDLU was that there were some bugs in the representation and the reasoning, which led to claims that the examples shown were most likely preselected and did not display any general reasoning capabilities.

Eliza: The early chatbot Eliza is probably best known as the “Computer Psychologist”. It was able to keep up a conversation with a human for some time, pretending to be a psychologist, and it did this well enough to actually convince some people that there was a real psychologist behind the screen. “But hold it!” someone may say here, “Eliza was not a real artificial intelligence! It was a complete fake!” And yes, you would be perfectly right. Eliza was a fraud, not in the sense that it wasn’t a computer program, but in that it faked the “understanding” of what the user wrote. But this is exactly the point of mentioning Eliza here. Intelligence-like behaviour may fool many, even though there is no “intelligent system” under the hood.
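To see just how shallow the trick was, here is a minimal sketch of Eliza-style pattern matching (my own illustrative rules in Python, not Weizenbaum’s original program or rule set). The system simply matches the user’s words against templates and reflects them back; no understanding is involved anywhere:

```python
import re

# A handful of Eliza-style rules: a regex pattern and a response template.
# These rules are invented for illustration, not taken from the real Eliza.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I), "Why do you mention your {0}?"),
]
DEFAULT = "Please go on."

def eliza_reply(utterance: str) -> str:
    """Return a canned reflection of the user's words -- no understanding involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(eliza_reply("I am afraid of elevators"))
# -> Why do you say you are afraid of elevators?
print(eliza_reply("Hello there"))
# -> Please go on.
```

A dozen such rules, slightly more cleverly written, were enough to keep some users convinced for quite a while, which is precisely the lesson Weizenbaum wanted to teach.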

What can we Learn from AI History?

The properties of these three historical systems that I would like to point to in more detail are as follows:

  • Conceptual dependencies: AI needs some kind of knowledge representation. At least some basic knowledge must be stored in some way as a basis for the interpretation of the prompts.
  • SHRDLU: An artificial agent needs to be able to do some reasoning about this knowledge. Knowledge representation is only useful if it is possible to use it for reasoning and, possibly, the generation of new data.
  • Eliza: Not all AI-like systems are to be considered real Artificial Intelligence. In fact, Joseph Weizenbaum created Eliza precisely to prove how easy it was to emulate some “intelligent behaviour”.
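The middle point, reasoning over represented knowledge, can be sketched in a few lines. This is my own toy simplification in the spirit of SHRDLU’s block world, not Winograd’s actual program: the knowledge is a small table of facts about shapes, and the reasoning is a rule applied to those facts.

```python
# Toy block-world knowledge: each shape and the kind of top surface it has.
# A simplification for illustration, not SHRDLU's actual representation.
SHAPES = {"cube": "flat", "block": "flat", "pyramid": "pointed", "ball": "round"}

def can_place(top: str, bottom: str) -> bool:
    """Rule: an object can only rest on something with a flat top surface."""
    return SHAPES[bottom] == "flat"

print(can_place("cube", "pyramid"))  # -> False: a cube cannot rest on a pyramid
print(can_place("pyramid", "cube"))  # -> True: the reverse is fine
```

However trivial, this combination of stored facts and rules that derive new conclusions from them is what separates a reasoning system from a purely pattern-matching one like Eliza.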

To start with, these three examples have one interesting property in common, namely that they are transparent: their theory and implementations have been made public. This transparency is actually a major problem with many of the current generative AI agents, since they are based on large amounts of data, the source listings of which are not publicly available.

The three examples above also point to additional problems with the generative modelling approaches to AI (those that are currently considered so dangerous). To become an AGI (Artificial General Intelligence), a system most likely needs some kind of knowledge base, and an ability to reason about this knowledge. We could in fact regard the large generative AI agents as very advanced versions of Eliza, in some cases enhanced with search abilities in order to give better answers; but as a matter of fact they don’t really produce “new” knowledge, just the phrases that are the most probable continuations of the texts in the prompts. Considering the complexity of languages this is in itself no small feat, but it is definitely not a form of intelligent reasoning.
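The idea of “most probable continuation” can be made concrete with a crude bigram sketch. The real systems use neural networks trained on vastly more data, but the principle of picking a statistically likely next word, with no model of what the words mean, is the same:

```python
from collections import Counter, defaultdict

# A toy corpus; real language models are trained on billions of words.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_probable_next(word: str) -> str:
    """Return the statistically most frequent continuation of `word`."""
    return following[word].most_common(1)[0][0]

print(most_probable_next("the"))  # -> cat ("cat" follows "the" 3 times, "mat" once)
print(most_probable_next("on"))   # -> the
```

The output can look fluent, but nothing here knows what a cat is; the program only reproduces frequencies from its training text.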

The similarity to Eliza is increased by the way the answers are given: in a very friendly form, with the system even apologizing when it is pointed out that an answer it has given is not correct. This conversational style of interaction can easily fool users who are less knowledgeable about computers into believing that there is a genie in the system, one that is very intelligent and (very close to) a know-it-all. More about this problem later in this post.

Capabilities of and Risks with Generative AI?

The main problem that has arisen is that the generative AI systems cannot produce any real novelties, since the answers are based on (in the best case, extrapolations of) existing texts (or pictures). Should they by any chance produce new knowledge, there is no way to know whether this “new knowledge” is correct or not! And here is where, in my opinion, the real danger of generative AI lies. If we ask for information, we either get correct “old” information, or new information that we cannot know whether it is correct or not. And we are only given one single answer per question. In this sense the chatbots could be compared to the first version of Google search, which contained a button marked “I’m Feeling Lucky”, an option that gave you just one single answer, and not, as now, hundreds of pages to look through.

Google’s search page with the “I’m Feeling Lucky” button, which has since been removed.

The chatbots also provide single answers (longer ones, of course), but in Eliza manner, wrapped in a conversational style that is supposed to convince the user that the answer is correct. Fortunately (or not?), the answers are often quite correct, so they will in most cases provide both good and useful information. However, all results still need to be “proof-read” in order to guarantee the validity of the contents. Thus, the users will have to apply critical thinking and reasoning to a high extent when using the results. Paradoxically, the better the systems become, i.e., the more of the results that are correct, the more we need to check the results in detail, especially when critical documents are to be produced where an error may have large consequences.

Impressive systems for sure!

Need we be worried about the AI development, then? No, today there is no real reason to worry about the development as such, but we need to be more concerned about the common usage of the results from AI systems. It is necessary to make sure that users understand that chatbots are not intelligent in the sense we normally think. They are good at (re-)producing text (and images), which most of the time makes them very useful supportive tools for writers and programmers, for example. Using the bots to create text that can serve as a base for writing a document or an article is one interesting example of where these kinds of systems will prove to be very important in the future. It will still be quite some time before they will be able to write an exciting and interesting novel without the input and revision of a human author. Likewise, I would be very hesitant about using a chatbot to write a research article, or even worse, a textbook on any topic. These latter usages will definitely require a significant amount of proof-reading, fact-checking and, not least, rewriting before release.

The AI systems that are under debate now are still very impressive creations, of course, and they represent significant progress in the engineering of software. The development of these systems is remarkable, and they have the potential to become very important in society, but they do not really produce intelligent behaviour. The systems are very good statistical language generators, but with a very, very advanced form of Eliza at the controls.

The future?

Will there be AGI, or strong AI beings, in the future? Yes, undoubtedly, but this will still take a long time (I am prepared to be the laughing stock in five years, if they arrive). These systems will most likely be integrated with the generative AI we have today for the language management. Still, we will most likely not be able to get there as long as we ignore the use of some kind of knowledge network underneath. It might not be in the classic form mentioned above, but it seems that knowledge and reasoning strategies, rather than statistical models, have to form some kind of underlying technology.

How probable is this different development path leading to strong AI, or AGI systems? Personally, I think it is quite probable and it seems to be doable, but I am also very curious about how the development will proceed over time. I would be extremely happy if an AGI could be born in my life time (being 61 at the time of writing).

And hopefully these new beings will take the shape of benevolent, very intelligent agents that can cooperate with humans in a constructive way. I still have that hope. Please feel free to add your thoughts in the comments below.

Welcome Back After Summer: New PhD Students and Upcoming Internal Retreat

Summer has drawn to a close, and as the autumn leaves start to fall, we are back to the academic grind. We hope you had a wonderful and refreshing break, soaking up the sun and spending time with loved ones.

We’re thrilled to announce two new PhD students joining the HTO team after the summer break—Jonathan Källbäcker and Andreas Bergqvist. We warmly welcome both and are eager to follow their PhD journey.

Jonathan and Andreas will be deeply involved in two of our most forward-thinking projects—AROA and TARA. These projects are instrumental in shaping the future of our research domain.

  • TARA Project about work environment and AI, robots and automation for ground staff at airports: You can read the detailed post here for more information on the TARA project.
  • AROA Project about work engagement and AI, robots and automation: Please follow this link to learn more about the AROA project.

This week, we are organizing an internal retreat focused on the TARA and AROA projects, to integrate our new colleagues and get everyone on the same page. The retreat will span two days, filled with intensive planning and collaborative work on the studies involved in these projects. And of course, we will have fika—those beloved coffee breaks that are a cornerstone of Swedish work culture.

Can we use AI-generated pictures for serious purposes?

The current extremely rapid development within Artificial Intelligence is the subject of widespread debate, and in most cases it is discussed in terms of potential dangers to humanity and increased possibilities for students to cheat on examinations. When it comes to Artificial Intelligence based art or image generators (AIAGs), the questions are mostly focused on similar negative issues, such as whether it really is art, or whether it is going to put artists out of business. In this blog post, however, I will reverse the direction of these discussions and take a more positive and, hopefully, more constructive perspective on Artificial Intelligence (*).

A small girl being very afraid of riding the elevator. Her anxiety may become a large problem for her unless treated in an early stage.

Real prompt: a small girl in an elevator, the girl looks very afraid and stands in one corner, the buttons of the elevator are in a vertical row, pencil drawing

The interesting thing is that we don’t focus the discussions more on the possibilities for these tools to be really useful, adding positively to our work. In this blog post I will therefore give an example of where the use of AIAGs as a tool can be very important within the area of health care, and more specifically within child psychiatry. The examples are collected from an information presentation for parents of children who suffer from panic disorder. The person who asked for the illustrations works as a counselor at a psychiatric unit for children and young people (BUP) in Sweden. Using the popular and very powerful AI art generation application MidJourney, I have produced the illustrations for the presentation, some of which are reproduced in this post.

The main captions of the images in this post are taken from the requests made by the counselor, and do not show the actual prompts used, which are in many cases much less comprehensive (shown in smaller type below).

A boy hesitates at the far end of the landing-stage, showing some fear of the evil waves that are trying to catch him.

Real prompt: a boy dressed in swimming trunks::1 is standing at the end of a trampoline::2 , the boy looks anxious and bewildered::1, as if he fears jumping into the water::3, you can see his whole body::1 pencil drawing

It is often difficult to find visual material that is suitable as illustrations in these kinds of situations, where there are high requirements on integrity and data safety. Clip art is often quite boring and may not engage the viewers directly. The high demands on integrity limit the use of stock photos, and copyright issues add further to the problems. Here we can see a very important application area for the Artificial Intelligence Art Generators, since these images are more or less guaranteed not to show any real human beings.

A small girl showing an all but hidden insecurity, being alone in the crowd on a town square.

Real prompt: a small girl is standing in the middle of the town square with lots of people walking by, the girl looks anxious and bewildered, as if she fears being alone, pencil drawing

The images displayed in this post were all produced according to the wishes of the counselor, which I then converted into prompts that produce the desired results. Not all attempts succeeded at once; some images had to have their prompts rewritten several times in order to reach the best result. This, of course, points to the simple fact that the role of the prompt writer will be very important in future illustration work.

Who does not recognize the classic scare of small children: “There is a wolf in my room!” It could of course also be a monster under the bed, or any other kind of scary imagining that prevents the child from sleeping.

Real prompt: a small boy being very anxious when the parent leaves his room for him to sleep, he believes that there is a wolf under his bed, pencil drawing,

In the end, it is also important to point out that a good artist could of course have created all these pictures, in even better versions. The power of the AIAGs is, in this example, that they can enable some people to make more and better illustrations as an integrated part of the production of presentations, information material, etc. The alternative is in many cases to just leave out the illustrations, since “I cannot draw anything at all, it just turns out ugly”.

Even when there are no monsters in the bedroom, just the parent leaving the child alone might be enough to provoke a very strong panic, which is difficult for the child to handle.

Real prompt: a small boy being very anxious when the parent leaves his room for him to sleep, pencil drawing

So, to conclude, this was just one example of how Artificial Intelligence systems can be very helpful and productive, if used properly. We just need to start thinking about all the possible uses we can find for the different systems, which, unfortunately, is less common than we would want, to some extent due to the large number of negative articles and discussions concerning the development of AI systems.


(*) In this post the term AI is used mostly in the classic sense of “weak AI”, namely the use of methods based on models that imitate processes within human thinking, which does not necessarily mean that the system is indeed “intelligent”. In this sense, I do not really consider the systems mentioned in this post to be intelligent, although they may well be advanced enough to emulate an intelligent being.

The real dangers of current “AI”…

Can we believe what we see or read in the future? This polar bear walking in the desert does not exist, but can still affect our thoughts. (Picture: Oestreicher and MidJourney).

Recently there has been an open letter from a number of AI experts advocating a pause in the development of new AI agents. The main reason for this is the very rapid development of chatbots based on generative networks, e.g., chatGPT and Bard, with a large number of competitors still in the starting blocks. These systems are now also publicly available at a fairly reasonable cost. The essence of the letter is that the current development is too fast for society (and humanity) to cope with. This is of course an important statement, although we already have social media, which when used in the wrong way has a serious impact on people in general (such as promoting absurd norms of beauty, or dangerous medical advice spreading in various groups).

The generative AI systems under discussion in the letter will undoubtedly have an impact on society, and we have definitely been taken by surprise in many realms already. Discussions are already under way on how to “prevent students from cheating on their examinations by using chatGPT” (see my earlier post about this here). The problem in that case is not the cheating, but that we teach in a way that makes it possible to cheat with these new tools. Prohibiting their use is definitely not the right way to go.

The same holds for the dangers pointed out by the signers of the open letter mentioned above. A simple voluntary pause in development will not solve the problem at all. The systems are already here and being used. We will need other solutions to these dangers, and most important of all, we will need to study what these dangers really are. From my perspective the dangers have nothing to do with the singularity, or with AI taking over the world, as some researchers claim. No, I can see at least two types of danger: one immediate, and one that may appear within a few years or a decade.

Fact or fiction?

Did this chipmunk really exist? Well, in Narnia, he was a mouse, named Ripipip (Picture: Oestreicher and MidJourney).

The generative AI systems are based on an advanced (basically statistical) analysis of a large amount of data, either texts (as in chatBots) or pictures (as in AI art generators such as DALL-E or MidJourney). The output from the systems has to be generated with this data as a primary (or only) source. This means that the output will not be anything essentially new, but even more problematic, the models that are the kernel of the systems are completely non-transparent. Even if it is possible to detect some patterns in the input and output sequences, it is quite safe to say that no human will understand the models themselves.
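To make the statistical principle concrete, here is a deliberately over-simplified toy sketch of my own (not how the real systems are built): a “model” that only records which words follow which in its training text, and generates new text by sampling from those recorded continuations. Real systems use neural networks trained on vastly larger data, but the core point survives the simplification: every word the toy model can ever emit comes from its training data.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for every word, which words followed it in the training text."""
    words = text.split()
    followers = defaultdict(list)
    for current, following in zip(words, words[1:]):
        followers[current].append(following)
    return followers

def generate(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling one of the recorded follower words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # a plausible-looking recombination of the corpus
```

Even this trivial table is already somewhat opaque to a casual reader; scaling the idea up to billions of learned weights is what makes the real models effectively impossible to inspect.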

Furthermore, the actual text collections (or image bases, but I will leave those systems aside for a coming post) on which the systems are trained are not available to the public, which causes the first problem. We, as users, do not know what a certain detail of the result is based on, whether a scientific text or a purely fictitious description in a sci-fi novel. Any text generated by the chatBot needs to be thoroughly scanned with a critical mind, in order not to accept things that are inaccurate (or even outright wrong). Even more problematic is that these errors may not be the ones that are simple to detect. In the words of chatGPT itself:

GPT distinguishes between real and fictitious facts by relying on the patterns and context it has learned during its training. It uses the knowledge it has acquired from the training data to infer whether a statement is likely to be factual or fictional. However, the model’s ability to differentiate between real and fictitious facts is not perfect and depends on the quality and comprehensiveness of the training data.

chatGPT 3.5

And we know very little about the training data. The solution to this problem is most of the time phrased as “wait for the next generation”. The problem here is that the next generation of models will not be more transparent, rather the opposite.

So, how is the ordinary user, who is not an expert in a field, supposed to know whether the answers they get are correct or incorrect? For example, I had chatGPT produce two different texts: one giving the arguments that would prove God’s existence, and one giving the arguments that would prove that God does not exist. Both versions were very much to the point, but what should we make of that? Today, when many topics are the subject of heated debate, such as the climate crisis or the necessity of vaccinations, this “objectivity” could be very dangerous if it is not paired with a fair amount of critical thinking.

Recursion into absurdity – or old stuff in new containers?

Infinite recursion inspired by M.C. Escher. (Picture: Oestreicher and MidJourney).

As mentioned above, the models are based on large amounts of text, so far mostly produced by humans. However, today there is a large pool of productivity enhancers that provide AI support for producing everything from summaries to complete articles or book chapters. It is quite reasonable to assume that more and more people will start using these services for their own private creations, as well as, hopefully with some caution as per the first problem above, in the professional sphere. We can assume that when there is a tool, people will start using it.

Now, as more and more generated texts appear on the public scene, they will undoubtedly mix with the human-created text masses. Since the material for the chatBots needs to be updated regularly in order to keep up with developments in the world, the generated texts will also slowly but steadily make their way into the training materials and in the long run be recycled as new texts adding to the information content. The knowledge produced by the chatBots will be based more and more on generated texts, and my fear is that this will be a rapidly accelerating phenomenon that may greatly affect the forthcoming results. In the long run, we may not know whether a certain piece of knowledge was created by humans or by chatbots generating new knowledge from the things we already know.

This recursive loop of traversing the human knowledge base mixed with the results from the generative AI systems may not be as bad as one might fear, but it might also lead to a large amount of absurdity being presented as factually correct knowledge. In the best case, we can be sure that most of the generated texts in the future will consist of old stuff repackaged in new containers.
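This recursion risk can be caricatured with a toy simulation of my own (the names and numbers are made up for illustration): start from a varied pool of “human-written ideas”, then let each new generation of training material be sampled from the previous generation’s output. Because each new pool can only contain items that were already present in the previous one, variety can never grow, and sampling noise makes rare ideas disappear over time.

```python
import random

def retrain_on_own_output(vocabulary, generations=10, sample_size=200, seed=1):
    """Repeatedly sample a new 'training corpus' from the previous corpus
    and track how many distinct ideas survive each generation."""
    rng = random.Random(seed)
    corpus = list(vocabulary)            # generation 0: full human-written variety
    diversity = [len(set(corpus))]
    for _ in range(generations):
        # the next generation's material is drawn only from the previous output
        corpus = [rng.choice(corpus) for _ in range(sample_size)]
        diversity.append(len(set(corpus)))
    return diversity

history = retrain_on_own_output([f"idea_{i}" for i in range(50)])
print(history)  # distinct "ideas" remaining after each generation
```

The count of distinct ideas is mathematically non-increasing here; how fast it actually shrinks in real systems is an open question, but the direction of the drift is what worries me.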


What can be seen through the knowledge lens of the chatbots that are emerging (Picture: Oestreicher and MidJourney).

So, what are my conclusions from this? Should we freeze the development of these systems, as proposed in the open letter? We could, but I do not think that this would solve any problems. We have opened Pandora’s box, and the genie is already out of his bottle. In my perspective, the issue is rather one of learning how to use this knowledge so that it works for us in the best way. Prohibitions and legal barriers have never proved effective at stopping people from doing things. The solution is instead the promotion of knowledge, not least among the main sources of education, and I do not just mean the schools and universities, but journalists and writers in general, as well as the people who will be using these systems.

Already with social media, “fake news” and “fake science” have been a big problem, but as long as people regard information from external sources (such as social media, Google searches, Facebook or Reddit groups, and now chatBots) as plain truths and swallow it uncritically, we can pause the development of GPTs as much as we like and the problem will not go away. We started down this path with the fast development of social media, and it will not go away just because we cover our eyes.

So, I urge you as a reader to read this article with a critical mind, and don’t just believe everything written here. You know, I just might be completely wrong about this myself.

AI and How Education Needs to Change

But will the new tools really make it possible to cheat that much? Well, if we maintain the old style of teaching and examining, the answer is undoubtedly “yes”. However, we can also see this as an opportunity to improve, or even revolutionize, both education and examination. This, of course, needs some changes to be implemented. I will explain my thoughts in more detail in the following.

When we look at our teaching obligation, we need to pose the question: “What do we want our students to learn?” Knowledge about the topic at hand, of course. But is that really true? To begin with, what do we define as knowledge? In many cases, the things that appear on exams are questions about details, details that the students will be able to google as soon as they get outside the examination hall. Home exams are slightly better, since the students have to synthesize the answers rather than just look them up. But now you can ask a program like chatGPT to do the synthesis for you. And is that cheating? In our old conception of examination, of course it is. What has the student done to get the piece of text written? Not very much!

Is classical teaching doomed? No, but it needs to adapt to the new conditions. (Source: L. Oestreicher)

However, when we look closer at this, we can change the question a little and see what happens. The new question would be something along the lines of: “How could we change the way of teaching and examination so that this kind of helping tool is not a cheating opportunity (but maybe even a learning tool)?” My answer to this question is to focus on understanding. My favourite saying for teaching is: “You can lead a camel to the water, but you cannot force it to drink”. As teachers in higher education, we will have to focus more on the “how it works” and “why it works” of the topics, rather than the “how can I implement it”. The students’ understanding of the (role of the) acquired knowledge in the applicable context has to be the most important teaching goal.

But don’t we do this already? Some people may, but we still see many exam questions that focus on the student memorizing the content of the course, rather than on understanding: synthesizing answers and transferring that understanding to new domains.

In my own teaching I have changed the examination in my courses (one more theoretical, and two practical programming courses), turning the written examination into an oral “discussion”. That may sound like a lot of work, but in fact it does not take more time than a written exam. After 30 minutes of this “academic conversation” style of examination, I usually have no problem grading the student according to understanding and reasoning, rather than the memorization of details (which are most of the time forgotten fairly quickly after the course). This change was in fact introduced many years ago, well before the arrival of chatGPT and similar systems.

The benefits here also include the new possibility of actually allowing the students to use any kind of supportive tool, including in this case chatGPT, for their projects and learning experiences. The only condition they have to fulfill is that they themselves understand the answers they get from the various tools they use. In the programming courses, that means, e.g., that they have to be able to explain any piece of code that they have not written entirely by themselves. They are also told that any remaining errors that stem from the information source will affect their grades negatively. This applies to both text and code.

With this approach to both teaching and examination we turn the risk of “cheating” into an improved pedagogical view of courses and of the role of the teacher. Of course, it will still require the teacher to be well educated in the topic, in order to both teach and examine the students.

Lars Oestreicher