As many of you know, writing is not always easy, and it can be challenging to find the time or space to focus on it. That’s where a writing retreat comes in handy. The HTO research group has a long history of arranging writing retreats, and this year is no exception. A writing retreat is essentially a dedicated time and place for writing, and last week people from the HTO research group and from our networks came together for two full days with the sole purpose of writing.
The underlying idea of our writing retreats is that everyone decides for themselves what to write, and then we sit together and work on our texts. The key to any writing retreat is that it takes you away from your usual responsibilities and commitments and allows you to focus on your writing. This year, we arranged the retreat in our usual building but booked a room on another floor to help us focus on nothing but writing. To further increase our productivity, we kept to a schedule in which we started by setting writing goals and making plans for the writing we would do. This was followed by writing sprints and a few check-ins throughout the day. Of course, the schedule also left plenty of dedicated time for rest and delicious fika.
Besides time and space to write, a writing retreat offers many benefits. Meeting other writers and talking about writing is rewarding in itself, and the gentle social pressure of hearing others tap away on their keyboards helps you get started with your own writing. The retreat also creates space for reflection on your writing habits, and many of us find that it encourages a commitment to writing that hopefully spills over to when you are not at the retreat. Finally, a writing retreat is really fun and a valuable investment of your time.
The current, extremely rapid development within Artificial Intelligence is the subject of widespread debate, and in most cases it is discussed in terms of potential dangers to humanity or increased possibilities for students to cheat on examinations. When it comes to Artificial Intelligence based art or image generators (AIAG), the questions mostly focus on similarly negative issues, such as whether the output really is art, or whether these tools will put artists out of business. In this blog post, however, I will reverse the direction of these discussions and take a more positive and, hopefully, more constructive perspective on Artificial Intelligence (*).
A small girl who is very afraid of riding the elevator. Her anxiety may become a large problem for her unless treated at an early stage.
Real prompt: a small girl in an elevator, the girl looks very afraid and stands in one corner, the buttons of the elevator are in a vertical row, pencil drawing
The interesting thing is that we do not focus the discussions more on the possibilities for these tools to be really useful and to add positively to our work. In this blog post I will therefore give an example of where the use of AIAGs as a tool can be very important within health care, and more specifically within child psychiatry. The examples are collected from an information presentation for parents of children who suffer from panic disorder. The person who asked for the illustrations works as a counselor at a psychiatric unit for children and young people (BUP) in Sweden. Using the popular and very powerful AI art generation application MidJourney, I then produced the different illustrations for the presentation, some of which are reproduced in this post.
The main captions of the images in this post are taken from the requests made by the counselor; they do not show the actual prompts used, which are in many cases much less comprehensive (shown in smaller type below).
A boy hesitates at the far end of the landing-stage, showing some fear of the evil waves that are trying to catch him.
Real prompt: a boy dressed in swimming trunks::1 is standing at the end of a trampoline::2 , the boy looks anxious and bewildered::1, as if he fears jumping into the water::3, you can see his whole body::1 pencil drawing
It is often difficult to find visual material that is suitable as illustrations in this kind of situation, where there are high requirements on integrity and data safety. Clip art is often quite boring and may fail to engage viewers. The high demands on integrity limit the use of stock photos, and copyright issues add further to the problems. Here we see a very important application area for Artificial Intelligence art generators, since these images are more or less guaranteed not to show any real human beings.
A small girl showing an all but hidden insecurity, being alone in the crowd on a town square.
Real prompt: a small girl is standing in the middle of the town square with lots of people walking by, the girl looks anxious and bewildered, as if she fears being alone, pencil drawing
The images displayed in this post were all produced according to the wishes of the counselor, which I then converted into prompts that produce the desired results. Not all attempts succeeded at once; some images had to have their prompts rewritten several times in order to reach the best result. This, of course, points to the simple fact that the role of the prompt writer will be very important in future illustration creation.
Who does not recognize the classic scare of small children: “There is a wolf in my room!” It could of course also be a monster under the bed, or some other scary imagining that prevents the child from sleeping.
Real prompt: a small boy being very anxious when the parent leaves his room for him to sleep, he believes that there is a wolf under his bed, pencil drawing,
In the end, it is also important to point out that a good artist could of course have created all these pictures, and in even better versions. The power of AIAGs, in this example, is that they enable some people to make more and better illustrations as an integrated part of producing presentations, information material, etc. The alternative is in many cases to just leave out the illustrations, since “I cannot draw anything at all, it just turns ugly”.
Even when there are no monsters in the bedroom, just the parent leaving the child alone might be enough to provoke a very strong panic, which is difficult for the child to handle.
Real prompt: a small boy being very anxious when the parent leaves his room for him to sleep, pencil drawing
So, to conclude, this was just one example of how Artificial Intelligence systems can be very helpful and productive if used properly. We just need to start thinking of all the possible uses we can find for the different systems. Unfortunately, this is less common than we would want, to some extent due to the large number of negative articles and discussions concerning the development of AI systems.
(*) In this post, the term AI is used mostly in the classic sense of “weak AI”: the use of methods based on models that imitate processes within human thinking, which does not necessarily mean that the system is indeed “intelligent”. In this sense, I do not consider the systems mentioned in this post to be truly intelligent, although they may well be advanced enough to emulate an intelligent being.
As I sit down to reflect on my experience with ENTWINE Informal Care, I am filled with gratitude for the opportunities that this Marie Skłodowska-Curie Innovative Training Network (MSCA-ITN), funded by the European Union’s Horizon 2020 programme, has provided me. It has been a journey of growth, learning, and collaboration that has impacted my personal and professional life in ways I couldn’t have imagined. The program began in March 2019, but my journey with the ENTWINE project began in October 2019, when I moved to the beautiful island of Gotland in Sweden. I was thrilled to be a part of this program, as informal caregiving was already of personal interest to me: I had been an informal caregiver for my father for over a year, so I have a personal motivation to work in this area. I am also interested in the field of designing IT systems, and I was delighted to find that these two interests aligned so well in my ENTWINE project.
One of the most exciting aspects of ENTWINE was the opportunity to work with other PhD students hosted across five different countries in Europe. You may read more about ENTWINE and the research done here. The cohort was diverse, and we all brought our unique experiences and perspectives to the table. The training courses offered through ENTWINE were invaluable in helping us develop the skills and knowledge needed to conduct high-quality research in the field of caregiving. We received training in areas such as caregiving, persuasive design, positive technology research methods, entrepreneurship, and many more. The courses were rigorous and challenging, but they were also fun and engaging. It was clear that the program coordinators had put a lot of effort into designing a curriculum that would equip us with the skills and knowledge we needed to contribute positively to our respective fields. Another highlight of the program was the opportunity to work with multiple industry and academic partners. We had the chance to discuss our work and learn from them through dedicated research secondments. My secondments were at the University of Twente and the University of Oulu, where I spent three months each. These secondments helped shape my PhD and helped form good collaborations.
AnhörigCare: An eCoaching Application for Informal Caregivers in Sweden
My PhD work focuses on designing a persuasive eCoaching application, AnhörigCare, for informal caregivers in Sweden. Informal caregivers are individuals who take care of sick family members or friends suffering from a long-term illness. Caregiving can be difficult and can affect the well-being of the caregiver, who often experiences stress and anxiety. My research therefore focuses on designing AnhörigCare to support caregivers in Sweden in their caregiving activities and to assist them in self-care.
‘AnhörigCare’ means caring for ‘anhöriga’, the Swedish word for relatives.
Persuasive System Design
AnhörigCare was designed using the Persuasive System Design (PSD) model. The PSD model is a comprehensive framework for designing systems that can influence users’ behavior. It offers designers a methodical approach to creating persuasive IT applications tailored to the specific needs of their users, in this case caregivers, making the applications more effective in helping users achieve their objectives. The model proposes 28 design principles grouped into four dimensions: primary task support, dialogue support, credibility support, and social support. The first dimension, primary task support, aims to assist users in accomplishing their intended behavior. The second, dialogue support, employs design principles that encourage users through feedback and interaction with the application. The third, credibility support, employs techniques that enhance the application’s perceived credibility and trustworthiness. The fourth and final dimension, social support, uses methods that leverage social influence (illustrated in the figure below).
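One way to picture the model’s structure is as a simple checklist. The sketch below is our own illustration, not part of AnhörigCare itself; the principle names follow the published PSD literature (Oinas-Kukkonen & Harjumaa), and the `coverage` helper is an invented convenience for checking which dimensions a design draws on.

```python
# Illustrative sketch of the PSD model's 28 design principles,
# grouped into its four dimensions. Principle names follow the PSD
# literature; the coverage() helper is our own addition.
PSD_MODEL = {
    "primary_task_support": [
        "reduction", "tunneling", "tailoring", "personalization",
        "self-monitoring", "simulation", "rehearsal",
    ],
    "dialogue_support": [
        "praise", "rewards", "reminders", "suggestion",
        "similarity", "liking", "social role",
    ],
    "credibility_support": [
        "trustworthiness", "expertise", "surface credibility",
        "real-world feel", "authority", "third-party endorsements",
        "verifiability",
    ],
    "social_support": [
        "social learning", "social comparison", "normative influence",
        "social facilitation", "cooperation", "competition", "recognition",
    ],
}

def coverage(selected: set) -> dict:
    """Fraction of each dimension's principles used by a given design."""
    return {
        dim: sum(p in selected for p in principles) / len(principles)
        for dim, principles in PSD_MODEL.items()
    }
```

For example, a design that only uses reminders and reduction would cover one seventh of the dialogue support and primary task support dimensions, and none of the other two.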
The Design Process
AnhörigCare aims to provide access to practical information, access to formal services related to caregiving, and access to an online forum that can connect caregivers with each other to feel part of a community. This figure illustrates the activities in this project to design the final version of AnhörigCare.
We started with a literature review. The extant literature points to access to information about caregiving, access to formal services that assist caregivers, a feeling of community, words of acknowledgment and encouragement, self-care, and informal peer support as the major needs of caregivers. These needs were compared with the persuasive design principles of the PSD model, and matching design principles were chosen to meet them, creating the first version of AnhörigCare. Expert evaluations were then conducted on this version, and changes to navigation and the presentation of content were made. In the next step, we interviewed caregivers in Sweden to elicit their needs for an eCoaching application, and based on these needs we presented design suggestions to further update AnhörigCare. After this, we conducted design workshops with caregivers as a means to involve them in the design of AnhörigCare, and finally a scenario-based user test.
Based on the design workshops and user testing, the final design of AnhörigCare will be created. Here are some initial screenshots of AnhörigCare.
Watch this space for upcoming articles on this research!
Cancer is a devastating disease that affects not only the patient but also their informal caregivers, who play a crucial role in caring for them, especially in the home environment. The role of a caregiver can be physically and emotionally exhausting, leading to stress, anxiety, depression, and post-traumatic stress disorder. With limited resources and information available to informal caregivers, the situation becomes even more challenging. In this context, eHealth applications might help caregivers to cope with their caregiving responsibilities and enhance their well-being.
The Carer-eSupport project is a commendable effort to support informal caregivers of head and neck cancer patients, and I am part of the project as a PhD student. The project’s overall goal is to prepare caregivers for their caregiving role and to decrease the burden on caregivers, who often struggle to balance their caregiving responsibilities with their personal and professional lives.
In this project, we first gathered user needs and preferences from caregivers and healthcare professionals to ensure that the intervention is user-friendly, effective, and acceptable. Based on these findings, the first version of Carer-eSupport was developed, followed by feasibility studies to evaluate its effectiveness and acceptability. The results of these studies will inform the design of the second version of Carer-eSupport, which will then be tested in a randomized controlled trial, providing robust evidence of the intervention’s effectiveness. The project’s study protocol, “Internet-based support for informal caregivers to individuals with head and neck cancer (Carer-eSupport): a study protocol for the development and feasibility testing of a complex online intervention,” provides more detailed information. By prioritizing the needs and well-being of caregivers, the Carer-eSupport project has the potential to make a significant impact on the lives of informal caregivers of head and neck cancer patients.
User-centred Positive Design (UCPD) framework
To support informal caregivers’ subjective well-being, we proposed a User-centred Positive Design (UCPD) framework that combines User-Centred Design (UCD) and Positive Design Framework (PDF) as shown in the figure below. UCD is a systematic approach that considers users and their needs in all steps of design and development. PDF, on the other hand, describes how design can enhance the subjective well-being of users. By focusing on the subjective well-being of users, the UCPD framework aims to create eHealth applications that not only solve the user’s problem but also have a long-lasting and positive impact on their well-being.
In conclusion, the UCPD framework provides a theoretical framework for designing internet-based support systems that have a positive, holistic impact on users’ well-being. The Carer-eSupport project serves as an excellent example of how the UCPD framework can be applied in designing eHealth applications for informal caregivers of cancer patients. With further research, the UCPD framework has the potential to enhance the subjective well-being of users across various domains of healthcare.
Our research team comprises researchers from different disciplines, including human-computer interaction (HCI), software engineering, cancer nursing, and medical research. Following are the team members.
Particularly as we move forward following the recent COVID-19 pandemic, there has been an increase in the use of software in healthcare systems to support healthcare management and prevention. In Ireland, for example, there has been an increase in online consultations with General Practitioners (GPs)/family physicians. This has resulted in prescriptions being submitted directly to pharmacies, where the patient can collect their medication. This minimises human contact, which was important during the pandemic, minimises travel for patients who may have difficulty getting to the doctor’s surgery, and makes it easier for GP surgeries to cater for patients over a wider area. There is potential for such systems to expand and become more pervasive, particularly as we are seeing a decrease in the number of medical practices in rural areas and an increase in population nationally. Those with health conditions can potentially use software to monitor their physiological measures, allowing the doctor to make decisions about their care in a different manner. Thus, software development and support must become more efficient and effective.
Healthcare software for use by individual patients increasingly comes in the form of smartphone apps, and therefore the needs of particular cohorts need to be accounted for. In our research in Lero – the Science Foundation Ireland Software Research Centre, we have developed fundamental requirements for the development of software for use by Older Adults and by Persons with Mild Intellectual and Developmental Disability. Why these cohorts? We know that the number of Older Adults is increasing globally and that this is putting pressure on healthcare systems, so it is important for software developers to take their fundamental requirements into account. Persons with Mild Intellectual and Developmental Disability have specific requirements, and there is evidence that the lack of accessibility and usability for this cohort is causing the digital divide to widen. Of course, we can consider other cohorts! For example, what about nursing staff whose primary aim is to care for the patient: do they need to be trained in system use, or can software developers consider fundamental requirements for them to ensure that they can use systems efficiently and effectively? We believe that if software developers know these fundamental requirements, which we present in the form of ‘recommendations’ for the software developer, then the healthcare software developed will ultimately be ‘easier to use’ by those who really need to use it! Each recommendation developed is supported by the detail obtained through literature review, standards and regulations review, focus groups, observation, prototype review, interviews, surveys, and analysis of app store comments.
In our research we have developed 44 recommendations for the development of software for Older Adults, categorised into 28 Usability and 16 Accessibility requirements, 6 of which are shown in Figure 1.
Figure 1: Six recommendations which can be used in the development of software for Older Adults
We have also developed 46 recommendations for the development of software for Persons with Mild Intellectual and Developmental Disability, categorised into 20 Usability, 16 Accessibility, 3 Content and 10 Gamification requirements, 6 of which are shown in Figure 2. Interestingly, in our qualitative research with persons from this cohort, we observed their ability to use games as a means to find out and understand information. We investigated this further, which is why we have included gamification factors.
Figure 2: Six recommendations which can be used in the development of software for Persons with Mild Intellectual and Developmental Disability.
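To make the structure of such recommendation sets concrete, one might record each recommendation together with its category, target cohort, and the evidence sources that support it. The sketch below is purely illustrative; the field names and the example entry are our own invention, and the actual recommendations are found in the publications listed at the end of this post.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """One developer recommendation, tagged for lookup and traceability."""
    text: str                 # the recommendation itself
    category: str             # e.g. "usability", "accessibility",
                              # "content", or "gamification"
    cohort: str               # e.g. "older adults"
    evidence: list = field(default_factory=list)  # supporting sources

# Hypothetical example entry -- not one of the actual published
# recommendations, which are given in the cited papers.
example = Recommendation(
    text="Provide large, well-spaced touch targets",
    category="usability",
    cohort="older adults",
    evidence=["literature review", "focus groups", "app store comments"],
)
```

Tagging each recommendation with its evidence sources mirrors how the research traces every recommendation back to literature, standards, focus groups, and other empirical material.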
This is a guest blog post by Prof Ita Richardson, who visited us in March 2023. Professor Ita Richardson comes from the Department of Computer Science and Information Systems, Lero – the Science Foundation Ireland Research Centre for Software, and the Health Research Institute/Ageing Research Centre, University of Limerick, Ireland.
Ahmad, Bilal, Ita Richardson and Sarah Beecham (2021). Usability Recommendations for Designers of Smartphone Applications for Older Adults: An Empirical Study. In Software Usability, edited by Castro, L., Cabrero, D. & Heimgärtner, R., IntechOpen. DOI: 10.5772/intechopen.96775, ISBN 978-1-83968-967-3.
Ahmad, Bilal, Sarah Beecham and Ita Richardson (2021). The Case of Golden Jubilants: Using a Prototype to Support Healthcare Technology Research. Workshop on Software Engineering & Healthcare, co-located with the International Conference on Software Engineering, 24th May 2021.
Alshammari, Muneef, Owen Doody and Ita Richardson (2020). Software Engineering Issues: An Exploratory Study into the Development of Health Information Systems for People with Mild Intellectual and Developmental Disability. In 2020 IEEE First International Workshop on Requirements Engineering for Well-Being, Aging, and Health (REWBAH), pp. 67-76. IEEE, 31st August 2020.
Ahmad, Bilal, Ita Richardson and Sarah Beecham (2020). A Multi-method Approach for Requirements Elicitation for the Design and Development of Smartphone Applications for Older Adults. In 2020 IEEE First International Workshop on Requirements Engineering for Well-Being, Aging, and Health (REWBAH), pp. 25-34. IEEE, 31st August 2020.
Alshammari, Muneef, Owen Doody and Ita Richardson (2020). Health Information Systems for Clients with Mild Intellectual and Developmental Disability: A Framework. Proceedings of the 13th International Joint Conference on Biomedical Engineering Systems and Technologies, Volume 5: HEALTHINF, 24-26 February 2020, Valletta, Malta, pp. 125-132. ISBN: 978-989-758-398-
Ahmad, B., Richardson, I., McLoughlin, S. and Beecham, S. (2018). Assessing the Level of Adoption of a Social Network System for Older Adults. In Proceedings of the 32nd International BCS Human Computer Interaction Conference, pp. 1-5, July 2018.
Alshammari, Muneef, Owen Doody and Ita Richardson (2018). Barriers to the Access and Use of Health Information by Individuals with Intellectual and Developmental Disability (IDD): A Review of the Literature. IEEE 6th International Conference on Healthcare Informatics (ICHI), New York, USA, 4-7th June 2018, pp. 294-298. DOI: 10.1109/ICHI.2018.00040.
Ahmad, Bilal, Ita Richardson and Sarah Beecham (2017). A Systematic Literature Review of Social Network Systems for Older Adults. In: Felderer M., Méndez Fernández D., Turhan B., Kalinowski M., Sarro F., Winkler D. (eds) Product-Focused Software Process Improvement. PROFES 2017. Lecture Notes in Computer Science, vol. 10611, pp. 482-496. Springer, Cham. https://doi.org/10.1007/978-3-319-69926-4_38
New technologies, such as automation, robotics, and artificial intelligence (AI), have brought significant changes to the workplace, and the impact of these changes on workers’ health and safety remains largely unknown. This is especially true in the aviation industry, where ground personnel face a significant technological shift. The COVID-19 pandemic has further complicated the situation, leading to a shortage of personnel and putting the focus on creating attractive and healthy workplaces to retain staff. However, implementing new technologies can also bring new and unforeseen problems, for example where using a hand scanner causes pain and discomfort for airport personnel.
To address these knowledge gaps, a research project has been initiated to investigate the impact of new technologies on the work environment of ground personnel in the aviation industry. The project aims to increase knowledge of how new technologies have affected and will continue to affect the work environment of baggage handlers, airport technicians, and refueling staff. These groups have been chosen because of the large research gap in their areas and the shortage of personnel, which increases the importance of their health and safety.
The project will work closely with the aviation industry and with TYA (Transportfackens Yrkes- och Arbetsmiljönämnd), which is owned by the labor market parties: the Swedish Transport Workers’ Union and the Swedish Transport Companies (Transportföretagen). Both organizations have expressed a need for this knowledge to prevent workplace health and safety problems. The project will also examine how research results on the work environment can be implemented and utilized effectively.
The project has two main objectives. The first is to investigate the level of implementation of new technologies that affect baggage handlers, airport technicians, and refueling staff across Sweden’s 38 airports and the plans for digitization moving forward. This will provide a baseline for the second objective, examining how new technologies have affected the work environment and increased the risk of workplace injuries.
The project addresses a critical knowledge gap in the area of new technologies and work environment and is well-aligned with the goals of the AFA (Arbetsmiljöfonden) within this area. The project will provide increased knowledge of the effects of new technologies on airport personnel’s work environment and support labor market parties in working preventatively regarding future workplace health and safety risks.
“Have you ever felt like a robot at work? Well, with the rise of AI and automation, this feeling might become even more common. That’s why we’re launching a research project to investigate how AI@work affects work engagement.”
The words above are how the renowned AI tool chatGPT summarises our new research project, which aims to investigate how increasingly AI- and automation-supported work influences work engagement. Alarming reports show that negative work-related emotions are on the rise, and we ask whether the increased use of AI and automation plays a role in this.
The use of robots, automation, and AI in the workplace has become increasingly common in recent years. While there has been a significant interest in the technical development, the impact of these technologies on work engagement is not well understood. Similarly, we see multiple theories on work engagement in psychological and organisational research, but these say very little about the digital aspects of a workplace. How may technology affect a sustainable and productive state where employees are present and truly engaged in their work tasks? In our newly initiated research project, we intend to address this knowledge gap.
With a human-centred approach and research methods such as ethnographic field studies and interviews, we will conduct studies across three different sectors: the IT sector, the agricultural sector, and the metal industry sector. These three sectors have been selected to ensure a broad context for our research. At first glance, the sectors might seem to have nothing in common, but they are all currently exposed to automation and AI, although the technology serves completely different purposes across the sectors: for example, AI tools in programming, or automated milking systems in the agricultural sector.
What we learn from studying AI- and automation-supported work and its influence on work engagement in different sectors will be used to develop a theoretical framework. This framework aims to be a useful tool for organisations to embrace opportunities while mitigating risks related to increasingly digital work environments and work engagement. Accordingly, we intend for the project to have scientific, practical, and societal impact, and we look forward to continuing to blog about updates as the project progresses.
Recently, a number of AI experts published an open letter advocating a pause in the development of new AI agents. The main reason for this is the very rapid development of chatbots based on generative networks, e.g., chatGPT and Bard, with a large number of competitors still in the starting blocks. These systems are now also publicly available at a fairly reasonable cost. The essence of the letter is that the current development is too fast for society (and humanity) to cope with. This is of course an important statement, although we already have social media, which, when used in the wrong way, has a serious impact on people in general (such as promoting absurd norms of beauty, or dangerous medical advice spreading in various groups).
The generative AI systems under discussion in the letter will undoubtedly have an impact on society, and we have definitely been taken by surprise in many realms already. Discussions are already underway on how to “prevent students from cheating on their examinations by using chatGPT” (see my earlier post about this here). The problem in that case is not the cheating, but that we teach in a way that makes it possible to cheat with these new tools. Prohibiting their use is definitely not the right way to go.
The same holds for the dangers pointed to by the signers of the open letter mentioned above. A simple voluntary pause in development will not solve the problem at all. The systems are already here and being used. We will need to find other solutions to these dangers, and most important of all, we will need to study what these dangers really are. From my perspective, the dangers have nothing to do with the singularity, or with AI taking over the world, as some researchers claim. No, I can see at least two types of dangers: one immediate, and one that may appear within a few years or a decade.
Fact or fiction?
The generative AI systems are based on an advanced (basically statistical) analysis of large amounts of data, either text (as in chatbots) or images (as in AI art generators such as DALL-E or Midjourney). The output from the systems has to be generated with this data as a primary (or only) source. This means that the output will not be anything essentially new, but, even more problematic, the models at the kernel of the systems are completely non-transparent. Even if it is possible to detect some patterns in the input and output sequences, it is quite safe to say that no human will understand the models themselves.
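To make the idea of statistically driven text generation concrete, here is a deliberately simple toy sketch: a bigram (word-pair) model that records which words follow which in a corpus and then generates text by sampling from those recorded successors. This is of course not how GPT-style systems actually work internally (they use large neural networks, not lookup tables), but it illustrates the core point above: every word the generator emits is derived from patterns in its training data, nothing more.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Build a table mapping each word to the words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, start, length=10, seed=0):
    """Generate text by repeatedly sampling a recorded successor word."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = table.get(out[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        out.append(random.choice(successors))
    return " ".join(out)

# A tiny illustrative corpus; real systems train on billions of words.
corpus = ("the model learns patterns from the training data "
          "and the output reflects the training data")
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Note that every adjacent word pair in the generated output necessarily occurs somewhere in the corpus; the generator cannot say anything its data does not license, which is exactly why the provenance of that data matters.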
Furthermore, the actual text collections (or image bases, but I will leave those systems aside for a coming post) on which the systems are based are not available to the public, which causes the first problem. We, as users, don’t know what source a certain detail of the result is based on, whether it is a scientific text or a purely fictitious description in a sci-fi novel. Any text generated by a chatbot needs to be thoroughly scanned with a critical mind, in order not to accept things that are inaccurate (or even outright wrong). Even more problematic, these errors may not be simple to detect. In the words of ChatGPT itself:
GPT distinguishes between real and fictitious facts by relying on the patterns and context it has learned during its training. It uses the knowledge it has acquired from the training data to infer whether a statement is likely to be factual or fictional. However, the model’s ability to differentiate between real and fictitious facts is not perfect and depends on the quality and comprehensiveness of the training data.
And we know very little about the training data. The solution to this problem is usually framed as “wait for the next generation”. The problem here is that the next generation of models will not be more transparent; rather the opposite.
So, how is the ordinary user, who is not an expert in a field, supposed to know whether the answers they get are correct? For example, I had ChatGPT produce two different texts: one giving arguments that would prove God’s existence, and one giving arguments that would prove that God does not exist. Both versions were very much to the point, but what should we make of that? Today, when many topics are the subject of heated debate, such as the climate crisis or the necessity of vaccinations, this “objectivity” could be very dangerous if it is not paired with a fair amount of critical thinking.
Recursion into absurdity – or old stuff in new containers?
As mentioned above, the models are based on large amounts of text, so far mostly produced by humans. However, there is now a large pool of productivity enhancers that provide AI support for producing everything from summaries to complete articles or book chapters. It is quite reasonable to assume that more and more people will start using these services for their own private creations, as well as, hopefully with some caution given the first problem above, in the professional sphere. We can assume that when there is a tool, people will start using it.
Now, as more and more generated texts appear on the public scene, they will undoubtedly mix with the human-created text masses. Since the material for the chatbots needs to be updated regularly to keep up with developments in the world, the generated texts will also slowly but steadily make their way into the training materials and in the long run be recycled as new texts adding to the information content. The knowledge produced by the chatbots will be based more and more on generated texts, and my fear is that this will be a rapidly accelerating phenomenon that may greatly affect forthcoming results. In the long run, we may not know whether a given piece of knowledge was created by humans or by chatbots generating new knowledge from the things we already know.
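The dynamics of this dilution can be sketched with a toy back-of-the-envelope model. The numbers below are purely illustrative assumptions (an initial corpus of 1,000 human texts, 200 new texts per year, half of them AI-generated), not measurements of any real corpus, but they show how quickly the synthetic share can grow even under modest assumptions.

```python
def synthetic_fraction(human_texts=1000.0, growth_per_year=200.0,
                       ai_share_of_new=0.5, years=10):
    """Track what fraction of a growing text corpus is AI-generated,
    assuming a fixed share of each year's new texts comes from AI."""
    total = human_texts      # corpus starts out fully human-written
    synthetic = 0.0          # no AI-generated texts at year zero
    history = []
    for _ in range(years):
        total += growth_per_year
        synthetic += growth_per_year * ai_share_of_new
        history.append(synthetic / total)
    return history

fractions = synthetic_fraction()
# With these assumptions, the synthetic share climbs every year,
# reaching one third of the whole corpus after ten years.
print([round(f, 2) for f in fractions])
```

A real analysis would also need to model AI texts trained on earlier AI texts, which is precisely the recursive loop discussed here; this sketch only shows the first-order dilution.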
This recursive loop of traversing the human knowledge base mixed with the results of the generative AI systems may not be as bad as it sounds, but it might also lead to a large amount of absurdity being produced as factually correct knowledge. In the best case, we can be sure that most of the generated texts in the future will consist of old stuff repackaged in new containers.
So, what are my conclusions from this? Should we freeze the development of these systems, as proposed in the open letter? We could, but I do not think that would solve any problems. We have opened Pandora’s box, and the genie is already out of the bottle. From my perspective, the issue is rather learning how to use this technology so that it works for us in the best way. Prohibitions and legal barriers have rarely stopped people from doing things. The solution is instead the promotion of knowledge, not least among the main sources of education, and by that I do not just mean schools and universities, but journalists and writers in general, as well as the people who will be using these systems.
Already with social media, “fake news” and “fake science” have been big problems, but as long as people regard information from external sources (such as social media, Google searches, Facebook or Reddit groups, and now chatbots) as plain truths and swallow it uncritically, we can pause the development of GPTs as much as we like and the problem will not go away. We started down this path with the rapid development of social media, and it will not disappear just because we cover our eyes.
So, I urge you as a reader to read this article with a critical mind, and don’t just believe everything written here. You know, I just might be completely wrong about this myself.
In our research group, we study the relationships and dynamics of Human, Technology, and Organisation (HTO) to create knowledge that supports sustainable development and utilization of ICT.