
Celebrating One Year of HTO Research Group Blogging: A Recap

As the year ends and the holiday season is upon us, we want to take a moment to wish all our readers a Merry Christmas and a Happy New Year! It’s been an incredible journey for the HTO Research Group blog, and as we celebrate one year of sharing our research insights and findings with you, we’d like to reflect on the events and articles we’ve covered over the past year.

In the past year, we have published 40 blog posts covering various topics in human-computer interaction, technology, and the work environment. We’ve shared our research findings, insights, and experiences with you, our readers.

Highlights from the Past Year

Vision Seminars: Pioneering User-Centric IT Design

Our commitment to user-centric IT design was highlighted in a blog post by Åsa Cajander. She discussed the long-standing tradition of conducting Vision Seminars within our research projects, showcasing how this innovative approach has shaped our engagement with technology and work systems design.

AI for Humanity and Society 2023 Conference

In November, our blog covered the annual WASP-HS conference “AI for Humanity and Society 2023”, held in Malmö, and its theme of human values. Andreas Bergqvist provided an insightful recap of the conference, which featured three keynotes and panels addressing critical issues surrounding AI’s impact on society. The discussions delved into criticality, norms, and interdisciplinarity.

AI4Research Fellowship 2024

A significant milestone was achieved when Åsa Cajander announced her participation in the AI4Research Fellowship. This five-year initiative at Uppsala University aims to advance AI and machine learning research, and we are honored to be part of it.

Exciting New EDU-AI Project

We also announced the commencement of a new research project, the EDU-AI project, which explores the transformative impact of generative AI technologies on education. Starting in April 2024, the project will examine how tools such as GPT-4 and automated code generators are reshaping the IT industry, computing education, and professional skills development.

Exploring the Future of Healthcare: Insights from MIE’2023 and VITALIS 2023

Sofia shared insights from two significant healthcare conferences, MIE’2023 and VITALIS 2023. The blog post explored advancements and challenges in healthcare, focusing on AI’s role in shaping its future.

Empowering People with Anxiety: Biofeedback-based Connected Health Interventions

Sofia explored the growing issue of anxiety in today’s fast-paced world and introduced biofeedback-based connected health interventions as a potential solution. The blog post highlighted the significance of connected health approaches in addressing anxiety.

TikTok – What is the Problem?

Lars addressed the concerns surrounding the social media platform TikTok, particularly from a security perspective. He discussed the potential dangers and implications of TikTok’s usage, emphasizing the need for awareness and caution.

Writing Retreat with the HTO Group

Rebecca Cort shared the HTO research group’s tradition of hosting writing retreats and discussed the importance of creating dedicated time and space for focused writing. The blog post highlighted the group’s commitment to productive research.

Insightful Publications

Throughout the year, we shared several research papers and publications, each providing valuable insights into various aspects of technology and its impact on our lives. From the effects of AI on work engagement to the challenges faced by caregivers of cancer patients, our research has covered a wide range of topics.

Looking Ahead

As we enter the new year, we are excited about the continued growth of the HTO Research Group blog. We have more research findings, insights, and events to share with you, and we look forward to engaging with our readers in meaningful discussions.

We wish you a joyous holiday season, and may the new year bring you happiness and discoveries.

Happy Holidays and a Prosperous New Year from the HTO Research Group!

HTO Coverage: AI for Humanity and Society 2023 and human values

In mid-November, WASP-HS held its annual conference in Malmö on how AI affects our lives as it becomes more and more entwined in our society. The conference consisted of three keynotes and panels on the topics of criticality, norms, and interdisciplinarity. This blog post will recap the conference based on my takeaways regarding how AI affects us and our lives. A single post is too short to capture everything that was said during the conference, but that was never the intention anyway. If you don’t want to read through this whole thing, my main takeaway was that we should not rely on the past, through statistics and data with their biases, to solve the problems of AI. Instead, when facing future technology, we should consider what human values we want to protect and how that technology can be designed to support and empower those values.

The first keynote, on criticality, was given by Shannon Vallor. It discussed metaphors for AI: she argued for the metaphor of the mirror rather than the myth that the media might portray AI as. We know a lot about how AI works; it is not a mystery. AI is technology that reflects our values and what we put into it. When we look at it and see it as humane, it is because we are looking for our own reflection. We are looking for ourselves to be embodied in it, and anything it does is built on the distortion of our data. Data that is biased and flawed, mirroring our systematic problems in society and its lack of representation. While this might give off an image of intelligence or empathy, that is just what it is: an image. There is no intelligence or empathy there, only the prediction of what would appear empathetic or intelligent. Vallor likened us to Narcissus, caught in the reflection of ourselves that we so imperfectly built into the machine. Any algorithm or machine learning model will be more biased than the data it is built on, as it draws towards the norms of the biases in that data. We should sort out what our human morals are and take biases into account in any data we use. She is apparently releasing a book on the topic of the AI metaphor, and I am at least curious to read it after hearing her keynote. Two of the points Vallor ended on were that people on the internet usually have a lot to say about AI while knowing very little, and that we need new educational programmes that teach what is human-centred so that it does not get lost among the tech that is pushed.

The panel on criticality was held between Airi Lampinen, Amanda Lagerkvist, and Michael Strange. Among the points raised: we shouldn’t rush technology; the reductionist view held by much of the industry misses the larger societal problems; novelty is a risk; we should worry about what boxes we are put into; and we should ask which human values we want to protect from technology. Creating new technology just because we can is not the real reason; it is always done for someone. Who would we rather it was for? Society and humanity, perhaps? The panelists argued that without interventions it would be under the control of market forces, and that stupid choices are made because they looked good at the time.

The second keynote, on norms and values, was by Sofia Ranchordas, who discussed the clash between administrative law, which is about protecting individuals from the state, and digitalization and automation, which build on statistics that hide individuals by categorising them into data and groups, and the need to rehumanize their regulation. Digitalization is designed for the tech-savvy man and not the average citizen. But it is not even the average citizen who needs these functions of society the most; it is the extremes and outliers, and they are even further from being tech-savvy men. We need to account for these extremes through human discretion, empathy, vulnerability, and forgiveness. Decision-making systems can be fallible, but most people won’t have the insight to see it. She ended by saying that we need to make sure that technology does not increase the asymmetries of society.

The panel that followed consisted of Martin Berg, Katja De Vries, Katie Winkle, and Martin Ebers. The participants’ presentations raised topics such as why people think AI is sentient and fall into the trap of anthropomorphism; that statistics cannot solve AI, as they are built on different epistemologies, and that those who push it want algorithmic bias since they are the winners of the digital market; the practical implications of the limits of robots available on the market for use in research; and the issues in how we assess risk in regulation. The following discussion included the points that law is more important than ethical guidelines for protecting basic rights, and that it is both too early to regulate and yet we do not want technology to cause actual problems before we can regulate it. A big issue is also the question of whether we are regulating the rule-based systems we have today or the technological future of AI. It is also important to remember that not all research and implementation of AI is problematic, as a lot of research into robotics and automation is for a better future.

The final keynote was by Sarah Cook and concerned the interdisciplinary junction between AI and art. It brought up many different examples of projects in this intersection, such as Ben Hamm’s Catflap, ImageNet Roulette, Ian Cheng’s Bad Corgi, and Robots in Distress, to highlight a few. One of the main points of the keynote was shown through Matthew Biederman’s A Generative Adversarial Network: generative AI is ever reliant on human input data, as it implodes when endlessly fed its own output.
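
(Biederman’s piece is an artwork, but the underlying effect is easy to demonstrate. The following is my own toy illustration, not anything shown at the conference: a “model” that is repeatedly retrained on its own output loses diversity generation by generation, since nothing new is ever added.)

```python
import random

random.seed(0)
data = list(range(100))  # stand-in for 100 distinct human-made examples

for generation in range(1, 11):
    # Each new "model" only reproduces samples of the previous one's output.
    data = [random.choice(data) for _ in range(100)]
    print(f"generation {generation}: {len(set(data))} distinct examples left")
```

Run it and the count of distinct examples shrinks steadily; without fresh human data the process degenerates.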

The final panel was between Irina Shklovski, Barry Brown, Anne Kaun, and Kivanç Tatar. The discussion raised topics such as questioning the need for disciplinarity and how to deal with conflicting epistemologies; the failures of interdisciplinarity, as different disciplines rarely acknowledge each other; how to deal with different expectations of contributions and methods when different fields meet; and how interdisciplinary work often ends up being social scientific. A lot of, if not most, work in HCI ends up being interdisciplinary in some regard. As an example from the author: Uppsala University has two different HCI research groups, one at the faculty of natural sciences and one at the faculty of social sciences, and neither fits in perfectly. The final discussion was on the complexities of dealing with interdisciplinarity as a PhD student. It was interesting and thought-provoking, as a PhD student in an interdisciplinary field, to hear the panelists and audience members bring up their personal experiences of such problems. I might get back to the topic in a couple of years, when my studies draw to a close, to return the favour and tell others about my experiences so that they can learn from them as well.

Overall, it was an interesting conference highlighting the value of not forgetting what we value in humanity and what human values we want to protect in today’s digital and automated transformation.

Vision Seminars: Pioneering User-Centric IT Design

Our research has proudly upheld a long-standing tradition of conducting Vision Seminars within the scope of action research projects. This innovative approach, predominantly led by Bengt Sandblad, has significantly shaped how we engage with technology and work systems design. Niklas Hardenborg’s doctoral thesis, which delves deeply into designing work and IT systems through participatory processes with a strong focus on usability and sustainability, further exemplifies our commitment to this approach.

Over the years, we’ve produced an impressive array of studies and papers demonstrating the diversity and depth of our engagement with Vision Seminars. Our works, authored by researchers like Åsa Cajander, Marta Larusdottir, Thomas Lind, Magdalena Stadin, Mats Daniels, Robert McDermott, Simon Tschirner, Jan Gulliksen, Elina Eriksson, and Iordanis Kavathatzopoulos, span a wide range of topics. These range from in-depth explorations of user involvement in extensive IT projects, as seen in our latest publication on vision seminars called “Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project,” to more focused case studies in areas such as university education administration and the development of train driver advisory systems for improved situational awareness.

A key theme that runs through our studies is the vital role of users in shaping the future of technology and work practices. Papers like “The Use of Scenarios in a Vision Seminar Process” and “Students Envisioning the Future” underscore the proactive role of participants in moulding future digital work environments. Our approach is distinctively collaborative, inviting various stakeholders to craft visions that guide the evolution of user-centred systems.

Our research extends beyond examining specific sectors or systems. It addresses the larger methodological and organizational changes necessary to enhance usability and the digital work environment. “User-centred systems design as organizational change,” by Gulliksen and others, is a prime example of this broader view, embedding user-centred design into the very fabric of organizational processes and culture.

In summary, our body of work contributes significantly to the field of Human-Computer Interaction and sets a benchmark in involving users in the technological design process. Through Vision Seminars, we continue to champion a participatory, user-centred approach in systems design, aiming to create more usable, sustainable, and future-oriented IT systems and work practices. This commitment cements our position as pioneers in the field, constantly pushing the boundaries of how user involvement can shape the technological landscape.

Some of our Research Papers on Vision Seminars

Cajander, Å., Larusdottir, M., Lind, T., & Stadin, M. (2023). Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project. Interacting with Computers, iwad046.

Cajander, Å., Sandblad, B., Lind, T., Daniels, M., & McDermott, R. (2015). Vision Seminars and Administration of University Education – A Case Study. Paper! Sessions!!, 29.

Lind, T., Cajander, Å., Björklund, A., & Sandblad, B. (2020, October). The Use of Scenarios in a Vision Seminar Process: The Case of Students Envisioning the Future of Study-Administration. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (pp. 1–8).

Lind, T., Cajander, Å., Sandblad, B., Daniels, M., Lárusdóttir, M., McDermott, R., & Clear, T. (2016, October). Students envisioning the future. In 2016 IEEE Frontiers in Education Conference (FIE) (pp. 1–9). IEEE.

Tschirner, S., Andersson, A. W., & Sandblad, B. (2013). Designing train driver advisory systems for situation awareness. In Rail Human Factors: Supporting reliability, safety and cost reduction (pp. 150–159). Taylor & Francis, London.

Gulliksen, J., Cajander, Å., Sandblad, B., Eriksson, E., & Kavathatzopoulos, I. (2009). User-centred systems design as organizational change: A longitudinal action research project to improve usability and the computerized work environment in a public authority. International Journal of Technology and Human Interaction (IJTHI), 5(3), 13–53.

Hardenborg, N. (2007). Designing work and IT systems: A participatory process that supports usability and sustainability (Doctoral dissertation, Acta Universitatis Upsaliensis).

Hardenborg, N., & Sandblad, B. (2007). Vision Seminars – Perspectives on Developing Future Sustainable IT Supported Work. Behaviour & Information Technology, Taylor & Francis.

Olsson, E., Johansson, N., Gulliksen, J., & Sandblad, B. (2005). A participatory process supporting design of future work.

New Publication: Shaping the Future of IT Projects: Insights from Vision Seminars

In the ever-evolving world of information technology, understanding and incorporating user needs has never been more crucial. This is the crux of a study titled “Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project,” authored by Åsa Cajander, Marta Larusdottir, Thomas Lind, and Magdalena Stadin. Their research delves into the impactful role of Vision Seminars (VS) in steering large IT projects towards success.

Information about the paper:
Cajander, Å., Larusdottir, M., Lind, T., & Stadin, M. (2023). Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project. Interacting with Computers, iwad046.
Found here.

A New Approach to IT Development

The digital landscape is complex and demands methods that consider the full spectrum of the user’s work environment. The study by Cajander and her colleagues focuses on the Vision Seminar process, a method designed to address future technology use in intricate digital work settings. Read more here. This approach is not just about technology; it’s about understanding how people interact with these systems in their daily work lives.

Revelatory Findings

The research revealed several key insights:

  • User-Centric Success: Participants in the Vision Seminars reported a newfound holistic understanding of their work. This broader perspective led to the discovery of more effective methods of support.
  • Feasibility of Future Visions: The study highlighted the participants’ belief in the practicality and desirability of envisioned future IT systems.
  • Integration Challenges: A notable revelation was the difficulty of embedding user-centric methods in fast-paced software development environments.

Methodology

The study’s mixed-methods approach, utilizing surveys and interviews, offered a rich, multi-dimensional understanding of the impact of Vision Seminars. This comprehensive method ensures robust findings and reflects diverse experiences and opinions.

Practical Applications for the Real World

What does this mean for the IT industry? The findings underscore the importance of involving users in developing IT systems. This involvement enhances user satisfaction and can also guide the direction of IT projects more effectively.

Addressing the Challenges

Despite the positive outcomes, the Vision Seminar process has challenges. The time and resources required for such extensive user involvement can pose significant difficulties in smaller or more technology-centric projects.

Concluding Thoughts

This study is crucial to our understanding of user involvement in IT development. It reinforces the notion that the future of IT systems must be shaped by those who use them, ensuring that technology serves people, not the other way around.

Acknowledgements

This research was made possible through the support of AFA.

AI4Research Fellowship 2024

I’m thrilled to announce an exciting new chapter in my career: I will join the AI4Research Fellowship next year! This five-year Uppsala University initiative is dedicated to advancing AI and machine learning research, and I’m honoured to be a part of it.

What is AI4Research?

AI4Research is a dynamic program that focuses on strengthening and developing AI research. During my sabbatical at Carolina Rediviva, I will collaborate with fellow AI4Research scholars, diving deep into the world of artificial intelligence. The AI4Research Fellowship will also make it possible for my colleagues in the HTO group to join me and benefit from this environment. Hence, I will bring a team of young researchers and doctoral candidates, along with a more senior researcher, to investigate the effects of AI on the work environment.

My Project: AI’s Impact on the Work Environment

We’re entering a new era where AI is revolutionizing the workplace. My research will explore both AI’s positive and negative aspects in the work environment. I aim to identify potential risks and challenges and how AI can enhance work experiences and foster creativity and personal development in professional tasks.

Future Endeavors and Funding

During my sabbatical, I will seek additional research grants for future projects in AI and the work environment. Collaborations within AI4Research will allow us to create innovative projects addressing the challenges posed by AI’s influence on the work environment.

I am looking forward to my AI4Research Fellowship and a year full of new ideas and learning!

Post-Doctoral Opportunity in Exciting New EDU-AI Project

We are thrilled to announce the commencement of a new research project, “Adapting Computing Education for an AI-Driven Future: Empirical Insights and Guidelines for Integrating Generative AI into Curriculum and Practice” (The EDU-AI project), starting April 2024. This venture explores the transformative impact of generative AI technologies, such as GPT-4 and automated code generators, on the IT industry, computing education, and professional skills development.

The project, significant for the Department of Information Technology, aligns with our commitment to addressing challenges and capitalizing on opportunities presented by generative AI in education. Spearheaded by Åsa Cajander and Mats Daniels, the project will be conducted over two years (April 2024 – March 2026), potentially extending into a third year.

Project Overview

The EDU-AI project comprises four work packages, each targeting a unique aspect of the generative AI influence in IT and education:

  1. Understanding Generative AI in the Professional IT Landscape: Investigating the use of generative AI among IT professionals.
  2. Generative AI in Computing Education: Student Perspectives: Examining students’ interaction with and perception of generative AI.
  3. Faculty Adaptation and Teaching Strategies for Generative AI: Assessing how faculty integrate generative AI into their teaching methods.
  4. Synthesis and Recommendations for Competence Development in Computing Education: Creating actionable recommendations based on findings from the first three stages.

The project will collaborate with Auckland University of Technology, Eastern Institute of Technology, and Robert Gordon University, Aberdeen, bringing cross-cultural expertise and perspectives.

Benefits and Application Process

This position offers the opportunity to be at the forefront of AI integration in education, work with leading experts, and publish in top journals and conferences.

Interested candidates should submit their application, including a CV, cover letter, and relevant publications. For more information see: https://www.jobb.uu.se/details/?positionId=676633

Last application date: 2023-12-18

For more information about the project and the role, please refer to the detailed project description or contact Åsa Cajander or Mats Daniels directly.

Project Update: SysTemutvecklingsmetodeR för dIgital Arbetsmiljö (STRIA)

After several years of dedicated research and development, the SysTemutvecklingsmetodeR för dIgital Arbetsmiljö (STRIA) project is coming to a close. Led by Professor Åsa Cajander, working with Dr Magdalena Stadin and Professor Marta Larusdottir, this project has been a pioneering effort to address the critical issue of digital workplace health and usability in IT systems. The project was funded by AFA.

The Problem
In today’s fast-paced digital landscape, many IT systems fail to support efficient work processes, ultimately contributing to health issues within organizations. Research has highlighted a lack of focus on workplace health in current system development practices. There’s also a shortage of practical methods for incorporating a workplace health perspective into digitalization efforts.

The Mission
The STRIA project aimed to collaborate with IT developers to create effective and practical methods for designing sustainable digital work environments. This endeavor included promoting these methods, developing educational materials, and advocating for their adoption.

The Three Focus Methods
The project focused on three key methodologies:

Contextual Think Aloud Method: This method involves users verbalizing their thought processes while interacting with software, enabling evaluators to gain insights into user thinking.
Vision Seminars: In this participatory method, users and other stakeholders jointly envision future work practices and the IT support they would need, guiding the design of future systems.
Contextual Personas Method: Building on the persona concept introduced by Cooper (2004), this method creates hypothetical archetypes of real users, allowing for more targeted and empathetic system design.

Project Phases
The project followed a structured plan comprising five phases:

  1. Understanding Digital Workplace: Assessing challenges related to different IT systems and digital workplaces in healthcare and administrative settings.
  2. Developing System Development Methods: Crafting new methods for system development based on insights from previous phases.
  3. Creating Educational Materials: Developing materials to teach developers how to apply these methods effectively.
  4. Evaluation and Refinement: Testing and refining the methods with IT developers and gathering feedback.
  5. Dissemination of Results: Publishing research findings, articles, and blog posts to share the knowledge with the wider community.

Conclusion
As the STRIA project concludes, it leaves a legacy of knowledge, recommendations, and methodologies for assessing digital workplace aspects. The project’s findings have been shared through academic publications, industry-focused journals, conferences, blogs, and educational programs. Stay tuned for the final report and further updates on this important work.

New publication about the Human Contribution in Handling Unanticipated Events at Work

Railway tracks with a clearly visible overhead wire provide trains with electricity.

One early morning, a freight train got caught in the overhead electrical wire, causing a large traffic disruption which affected all train traffic in the area and resulted in delays and cancelled trains for almost 24 hours. This is what one of our most recent publications is about: the incident, the effects it caused on the traffic flow, and more specifically, how the situation was handled and solved from within the traffic control room.

The train traffic system, like most infrastructures in society, plays an important role in everyday life by facilitating a continuous flow of people and goods. What is unknown to many is the very large and complex organisation of work that lies behind a functioning train traffic system. In the Swedish context, the organisation of train traffic involves numerous stakeholders, and among the main actors are the traffic controllers. Although much less studied than air traffic control, their tasks and responsibilities are very similar, as are the challenges. These challenges are very much characterised by the fact that the control task is done remotely from a centralised control room, and that the traffic controllers are dependent on train drivers and others situated along the railway to act as the ‘eyes and ears’ of the control room. These people, together with advanced technologies, make it possible for the control room to stretch out and reach through time and space, making coordination the core task of traffic control.

The publication reports on a unique case study in which an unexpected real-time incident is described and analysed as the situation unfolds. Most reports on accidents and incidents are conducted in retrospect, but not this case study as I happened to be present in the control room at the particular time when the incident took place.  Accordingly, this paper provides novel insights into how the incident was handled and with the use of participant observations and informal interviews, a rich understanding of the work practices was captured. The analysis resulted in a detailed description of the work in the control room which can be divided into three phases: grasping what has happened and the severity of the incident, handling the incident and the immediate effects it had on the traffic, and finally mitigating the long-term consequences of the incident as these affected the traffic for almost 24 hours.

The unfolding of the incident repeatedly revealed that the workers had to cope with challenges related to time and space. As a way to describe and understand this aspect of the work, we turned to a concept originally used in the agricultural, landscape, and geographical domains: the concept of ‘sense of place’. The concept describes a meaning and relationship between humans and places that goes beyond the merely spatial. A place is thus conceptualised as a centre of cognitive, affective, or attitudinal meaning. Although the concept has not previously been applied to control room research, this study shows that ‘sense of place’ is something the workers actively strive to develop, and that it supported them in handling the situation even though they were 150 kilometres away from the events they were to handle. In future work, we aim to continue to explore how, and to what extent, the ‘sense of place’ concept can aid a deepened understanding of control room work.

For those interested in the details, you can find the full paper here.

Reference

Cort, R., & Lindblom, J. (2023). Sensing the Breakdown: Managing Complexity at the Railway. Culture and Organization, DOI: 10.1080/14759551.2023.2266857.

Frontiers in Education Conference 2023

The education conference Frontiers in Education (FIE) 2023 was held in College Station, Texas. It is a quite large conference, and there were many tracks during the three days the conference was on. College Station is situated between Dallas and Houston, and it is a, well, let’s say interesting city, which incorporates another, apparently somewhat older, city, Bryan, in its close vicinity. There was not much time for sightseeing, so it was mainly the road from the hotel to the conference centre that became the major view during the conference.

The conference offered a large number of very interesting presentations, and I did in fact not sit through any bad or boring ones. Before the main conference there was also a day of workshops, as usual so many that it was difficult to choose just one. I attended one on inclusive mentoring, which was very inspiring for a supervisor/mentor in general. I am of course very happy to find that there were quite a few presentations, workshops, and special sessions dealing with the inclusion of students at various levels.

The special session on “Disabled Students in Engineering” was held by four Ph.D. students and was very well prepared, providing lots of inspiration for teaching. The organizers also shared very good working material, which can be reused, e.g., in course seminars (I have just started a 15-credit course on non-excluding design).

All in all, the conference felt well worth the effort and time spent. It is always a good feeling when you return home inspired and just long to put all the experiences to work in your own teaching. I have already added several new ideas to the course, and I think this will improve it a lot.

Still, maybe the most inspiring part of the conference was the (positive and constructive) critique I received on my presentation and paper, “New Perspectives on Education and Examination in the Age of Artificial Intelligence,” which I almost wanted to retitle “Old perspectives…”, since it looks back at older forms of examination, where there was a closer connection between teacher and student. This closer connection, and the way it is achieved, makes it more difficult for the student to “cheat” or use AI chatbots.

The picture shows an old Greek teacher and his student, probably discussing a difficult problem during the examination.

This post is already long enough, so I will not present the paper in any more detail here, but should you want a copy, please contact me by email. You are also free to comment on or criticize this post in the comment section below.

It’s AI, but Maybe not What you Think!

Note: This is a long article, written from a very personal take on Artificial Intelligence.

The current hype word seems to be “Artificial Intelligence”, or in its short form, “AI”. If one is to believe what people say, AI is now everywhere, threatening everything from artists to industrial workers. There is even the (in)famous letter, written by some “experts” in the field, calling for an immediate halt to the development of new and better AI systems. But nothing really happened after that, and now the DANGER is apparently hovering over us all. Or is it?

Hint: Yes, it is, but also not in the way we might think!

The term “Artificial Intelligence” has recently been watered down in the media and advertisements, so that the words hardly mean anything anymore. Despite this, the common ideas seem to be that we should 1) either be very, very afraid, or 2) hurriedly adapt to the new technology (AI) as fast as possible. But why should we be afraid at all, and of what? When asked, people often reply that AI will replace everybody at work, or that evil AI will take over anything from governments to the world as a whole. The latter is of course also a common theme in science fiction books and movies. Still, neither of these is really a good reason to fear the current development. But in order to understand why, we need to go back to the historical roots of Artificial Intelligence.

What do we Mean by AI, then?

Artificial Intelligence started as a discipline in 1956 during a workshop at Dartmouth College, USA. As the discipline developed, a distinction formed between two directions: strong and weak AI. Strong AI aims at replicating a human type of intelligence, whereas weak AI aims at developing computational methods or algorithms that make use of ideas gained from human intelligence (often for specific areas of computation). Neural networks, for example, are representative of the weak AI direction. Today, strong AI is also referred to as AGI (Artificial General Intelligence), meaning a non-specialized artificial agent.

But in the 1950s and 1960s, computers were neither as fast nor as large as current computers, which at the time imposed severe limitations on what you could do within the field. A large amount of work was theoretical, but there were some interesting implementations, such as Eliza, SHRDLU and not least Conceptual Dependencies. I have chosen these three examples carefully, since each of them exhibits some interesting properties with respect to AI, which I will explain in the following before returning to the questions raised in the introduction:

Conceptual Dependencies: Conceptual Dependencies is an example of a very successful implementation of an artificial system with a very interesting take on knowledge representation. The system was written in the programming language LISP and attempted to extract the essential knowledge hidden in texts. The result was a conceptual dependency network, which could then be used successfully to summarize news articles on certain topics (the examples were taken from natural disasters and airplane hijackings). There were also attempts to make the system produce small (children’s) stories. All in all, the problem was that the computers were too small for it to be practically useful.
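
(For the technically curious, here is a minimal sketch, in Python rather than the original LISP, of the idea behind conceptual dependencies. The action primitives ATRANS and PTRANS are from Schank’s theory, but the code itself is my own toy illustration, not the historical system.)

```python
# Toy sketch of a conceptual-dependency-style representation.
# Different surface verbs reduce to the same action primitive, so the
# system stores the knowledge in a sentence rather than its wording.

# Illustrative mapping from verbs to primitives (ATRANS = transfer of
# possession, PTRANS = transfer of location, as in Schank's theory).
PRIMITIVES = {
    "gave": "ATRANS",
    "handed": "ATRANS",
    "went": "PTRANS",
    "travelled": "PTRANS",
}

def conceptualize(actor, verb, obj, recipient=None):
    """Build a tiny conceptual structure for one clause."""
    return {
        "primitive": PRIMITIVES.get(verb, "UNKNOWN"),
        "actor": actor,
        "object": obj,
        "to": recipient,
    }

# "Mary gave John a book" and "Mary handed John a book" come out the same:
# the knowledge, not the wording, is what gets stored.
print(conceptualize("Mary", "gave", "book", "John"))
print(conceptualize("Mary", "handed", "book", "John"))
```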

SHRDLU: SHRDLU was a virtual system in which a small robot manipulated geometric shapes in a 3D modelling world. It was able to reason about the different possible or impossible moves, for example, that it is not possible to put a cube on top of a pyramid, but it is OK to do the reverse. The problem with SHRDLU was that there were some bugs in the representation and the reasoning, which led to it being pointed out that the examples shown were most likely preselected and did not display any general reasoning capabilities.
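
(Again a toy sketch of my own, far simpler than the real system: the core of SHRDLU-style reasoning is that stored knowledge about the shapes licenses some moves and forbids others.)

```python
# Minimal blocks-world rule in the spirit of SHRDLU: a pyramid has no
# flat top, so nothing can rest on it, while a cube can support either.

FLAT_TOP = {"cube": True, "pyramid": False}

def can_stack(top, bottom):
    """True if the `top` shape may be placed on the `bottom` shape."""
    return FLAT_TOP[bottom]

print(can_stack("cube", "pyramid"))   # False: no cube on a pyramid
print(can_stack("pyramid", "cube"))   # True: the reverse is fine
```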

Eliza: The early chatbot Eliza is probably best known as the “Computer Psychologist”. It was able to keep up a conversation with a human for some time, pretending to be a psychologist, and it did so well that it actually convinced some people that there was a real psychologist behind the screen. “But, hold it!” someone may say here, “Eliza was not a real artificial intelligence! It was a complete fake!” And yes, you would be perfectly right. Eliza was a fraud, not in the sense that it wasn’t a computer program, but in that it was faking the “understanding” of what the user wrote. But this is exactly the point of mentioning Eliza here: intelligence-like behaviour may fool many, even though there is no “intelligent system” under the hood.
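
(To see just how little machinery such faking requires, here is a minimal Eliza-style responder in Python. The three rules are invented for illustration; Weizenbaum’s original script was larger, but it worked on the same keyword-and-template principle.)

```python
import re

# A few pattern/template rules that reflect the user's words back as
# questions -- no understanding anywhere, just text rearrangement.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # stock reply when nothing matches

print(respond("I feel tired of all this AI hype"))
# -> Why do you feel tired of all this AI hype?
```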

What can we Learn from AI History?

The properties of these three historical systems that I would like to point to in more detail are as follows:

  • Conceptual Dependencies: AI needs some kind of knowledge representation. At least some basic knowledge must be stored in some way as a basis for the interpretation of the prompts.
  • SHRDLU: An artificial agent needs to be able to do some reasoning about this knowledge. Knowledge representation is only useful if it is possible to use it for reasoning and the possible generation of new data.
  • Eliza: Not all AI-like systems are to be considered real Artificial Intelligence. In fact, Joseph Weizenbaum created Eliza precisely to show how easy it was to emulate some “intelligent behaviour”.

To start with, these three examples have one interesting property in common: they are transparent, since the theory and implementations have been made public. This is actually a major problem with many of the current generative AI agents, since they are based on large amounts of data whose source listings are not publicly available.

The three examples above also point to additional problems with the generative modelling approaches to AI (those that are currently considered so dangerous). In order to become an AGI (artificial general intelligence) it is most likely that there needs to be some kind of knowledge base, and an ability to reason about this knowledge. We could in fact regard the large generative AI agents as very advanced versions of Eliza, in some cases also enhanced with search abilities in order to give better answers, but as a matter of fact they don’t really produce “new” knowledge, just phrases that are the most probable continuations of the texts in the prompts. Considering the complexity of languages this is in itself no small feat, but it is definitely not a form of intelligent reasoning.
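
(The “most probable continuation” idea can be shown in miniature. The sketch below is of course absurdly smaller than a large language model, which predicts over vast contexts with neural networks rather than bigram counts, but the principle of emitting the statistically likeliest next word is the same.)

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent continuation.
corpus = "the cat sat on the mat and the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_probable_next(word):
    """Return the continuation seen most often after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(most_probable_next("the"))  # 'cat' -- it followed 'the' twice
print(most_probable_next("on"))   # 'the' -- the only continuation seen
```

No new knowledge is produced here; the model can only recombine what it was fed, which is the point.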

The similarity to Eliza is increased by the way the answers are given: in a very friendly form, with the system even apologizing when it is pointed out that an answer it has given is not correct. This conversational style of interaction can easily fool users who are less knowledgeable about computers into believing that there is a genie in the system, one which is very intelligent and (very close to) a know-it-all. More about this problem later in this post.

Capabilities of and Risks with Generative AI?

The main problem that has arisen is that the generative AI systems cannot produce any real novelties, since the answers are based on (in the best case, extrapolation of) existing texts (or pictures). Should they by any chance produce new knowledge, there is no way to know whether this “new knowledge” is correct or not! And here is where, in my opinion, the real danger with generative AI lies. If we ask for information, we either get correct “old” information, or new information that we cannot know to be correct or not. And we are only given one single answer per question. In this sense, the chatbots could be compared to the first version of Google search, which contained a button marked “I’m feeling lucky!”, an option which gave you just one single answer, and not, as now, hundreds of pages to look through.

Google’s search page with the “I’m feeling lucky!” button, which has now been removed.

The chatbots also provide single answers (longer of course), but in Eliza manner wrapped in a conversational style that is supposed to convince the user that the answer is correct. Fortunately (or not?), the answers are often quite correct, so they will in most cases provide both good and useful information. However, all results still need to be “proof-read” in order to guarantee the validity of the contents. Thus, the users will have to apply critical thinking and reasoning to a high extent when using the results. Paradoxically, the better the systems become, i.e., the more of the results that are correct, the more we need to check the results in detail, especially when critical documents are to be produced where an error may have large consequences.

Impressive systems for sure!

Need we be worried about the AI development, then? No, today there is no real reason to worry about the development as such, but we do need to be concerned about the common usage of the results from AI systems. It is necessary to make sure that users understand that chatbots are not intelligent in the sense we normally think. They are good at (re-)producing text (and images), which most of the time makes them very useful supportive tools for writers and programmers, for example. Using the bots to create text that can serve as a base for writing a document or an article is one interesting example of where this kind of system will prove very important in the future. It will still be quite some time before they can write an exciting and interesting novel without the input and revision of a human author. Likewise, I would be very hesitant about using a chatbot to write a research article, or even worse, a textbook on any topic. These latter usages will definitely require a significant amount of proof-reading, fact-checking and, not least, rewriting before being released.

The AI systems under debate now are still very impressive creations, of course, and they manifest significant progress in the engineering of software. The development of these systems is remarkable, and they have the potential to become very important in society, but they do not really produce intelligent behaviour. The systems are very good statistical language generators, but with a very, very advanced form of Eliza at the controls.

The future?

Will there be AGI, or strong AI beings, in the future? Yes, undoubtedly, but it will take a long time still (I am prepared to be the laughing stock in five years, if they arrive). These systems will most likely be integrated with the generative AI we have today for the language management. Still, we will most likely not get there if we forget about using some kind of knowledge network underneath. It might not be in the classic form mentioned above, but it seems that knowledge and reasoning strategies, rather than statistical models, will have to form some kind of underlying technology.

How probable is this different development path leading to strong AI, or AGI systems? Personally, I think it is quite probable and it seems to be doable, but I am also very curious about how the development will proceed over time. I would be extremely happy if an AGI could be born in my life time (being 61 at the time of writing).

And hopefully these new beings will take the shape of benevolent, very intelligent agents that can cooperate with humans in a constructive way. I still have that hope. Please feel free to add your thoughts in the comments below.
