Earlier this week, Niklas Humble was invited to Mittuniversitetet – and the Department of Communication, Quality Management and Information Systems (KKI) – to give a presentation on the theme "AI in research and examination". The session combined research from the EDU-AI project at Uppsala University with hands-on work and discussion.
As part of this, the participants tested a number of AI tools, discussed opportunities and challenges, and developed their own AI strategies for research and higher education. The work was inspired by the article:
Humble, N. (2024). Risk management strategy for generative AI in computing education: How to handle the strengths, weaknesses, opportunities, and threats? International Journal of Educational Technology in Higher Education, 21. https://doi.org/10.1186/s41239-024-00494-x
During the session and the discussion that followed, it became clear how much we can learn from one another when we collaborate and discuss across disciplinary boundaries. What happens when computer scientists talk with journalists? Or when engineers collaborate with archivists?
We are pleased to announce that our research project DIGI-RISK: Threats and Harassment in Digital Care – Risk Factors and Guidelines for Improvement has been granted funding by Forte within the call "Arbetslivets utmaningar 2025" (Challenges of Working Life 2025).
The digitalisation of healthcare has brought great opportunities – but it has also introduced new work environment risks. As video and chat consultations have become an increasingly common part of healthcare staff's everyday work, threats, harassment and boundary-crossing behaviour have also become more common.
DIGI-RISK aims to:
identify risk factors linked to digital care environments,
examine differences between healthcare professions and contexts,
and, together with healthcare staff, develop guidelines for how we can create safer digital work environments.
The project runs from 2025 to 2028 and is carried out by an interdisciplinary research team with expertise in human-computer interaction, work environment research and design. The project is led by Professor Åsa Cajander at Uppsala University. The other researchers in the project are Maral Babapour Chafi (Chalmers/ISM) and Magdalena Ramstedt Stadin (Uppsala University).
The societal benefit of the project is clear: a safer digital work environment for healthcare staff improves not only the work environment itself, but also care quality and patient safety. The results will be disseminated through scientific publications, popular science reports and collaboration with healthcare stakeholders.
Read more about the project's aims, methods and planned activities in upcoming posts here on the blog!
That patients can read their medical records online via 1177 Journalen has become a natural part of Swedish healthcare. But how do primary care staff experience this change? In a new study in JMIR Medical Informatics, Irene Muli, Åsa Cajander, Isabella Scandurra† and Maria Hägglund examine how healthcare professionals experienced the implementation of patient online record access at a primary care centre in Region Stockholm.
A system with both potential and problems
Staff described the record system as both flexible and flawed. It was perceived as technically complex and hard to use, and as lacking sufficient functionality to protect sensitive information. At the same time, it was seen as a step in the right direction for transparency and patient participation – especially when patients could follow their own care and come better prepared to appointments.
The work environment matters a great deal
An important insight from the study is that the primary care work environment affects how the system is received. Time pressure, heavy workloads and lack of support make it hard to document accurately and on time. Documentation practices also varied between professional groups, and shortcomings in the structure and quality of notes were highlighted as a risk both for care and for patients' understanding.
Mixed reactions among healthcare staff
Some respondents welcomed the openness – others were more hesitant. There were concerns that patients would misinterpret information, become worried or question the content. Some adapted the way they wrote their notes to avoid conflicts, which risks reducing the clinical usefulness of the notes. At the same time, several respondents saw that patients could gain a better understanding, feel more secure and become more involved in their care.
The implementation was perceived as slow and unclear
An interesting part of the study concerns the rollout itself. Healthcare staff described it as a long, drawn-out process with fragmented communication. Many did not know when the system would be activated or how patients were being informed. Some had received information through previous workplaces or colleagues – but structured training was often lacking. Several said they lacked an understanding of what would actually change in practice.
Conclusions
The study shows that implementing technology is not enough – organisational support, clear communication and a well-planned rollout are also needed. For digital services such as patient online record access to work in practice, both healthcare staff and patients need the right conditions and knowledge. We also need a better understanding of how work environment, technology and patient interaction interact in everyday primary care.
Read the full study here: Muli I, Cajander Å, Scandurra I, Hägglund M (2025). Health Care Professionals' Perspectives on Implementing Patient-Accessible Electronic Health Records in Primary Care: Qualitative Study. JMIR Medical Informatics, 13:e64982. https://medinform.jmir.org/2025/1/e64982 DOI: 10.2196/64982
In a new research project, we turn our focus to a group that is often left out of digital development: women with experience of homelessness. Together with these women, civil society organisations and researchers from different disciplines, we will explore how digital health services can be designed to better support people in vulnerable, crisis-affected life situations.
The project is led by Jenny Eriksson Lundström, and the team also includes Sophie Gaber and Åsa Cajander. Together we combine expertise in information systems, caring sciences, interaction design and inclusive research methods.
Over the coming year, we will develop our ideas further, deepen the collaboration with the target group and write a larger research application. The goal is to create knowledge and solutions that build on the experiences and needs that women in homelessness themselves highlight – rather than trying to adapt existing technology after the fact.
We hope to contribute to a fairer and more sustainable digital development in the health domain.
How does AI affect our understanding in educational contexts – and what happens to learning when the technology cannot explain itself?
In a new article written together with Roger McDermott and Mats Daniels, we discuss how generative AI affects central pedagogical concepts such as explanation, understanding and competence. We see clearly that many AI tools, despite their usefulness, lack what is often crucial in teaching: the ability to explain why something is the way it is.
In traditional teaching, explanations are central – both to how the teacher teaches and to how the student demonstrates understanding. But when the technology becomes a black box, giving answers without transparency, the learning process itself can be harmed. This is particularly true in STEM subjects, where causal relationships are key to building deeper understanding.
We therefore introduce the concept of interrogability – the capacity of an AI system to be questioned and challenged. It is about preserving students' active role in learning, even when AI is part of the teaching. We argue that the technology must support dialogue, critical thinking and reflection – not replace them.
If AI is to contribute to education for real, we need to look beyond automation and instead demand systems that support understanding. Only then can AI become a genuinely pedagogical tool – rather than a quick shortcut.
In the previous post I wrote about how we seem to forget most of our history when it comes to failed projects. Some projects create working conditions similar to working in a very messy kitchen, where the fridges stopped working ages ago but nobody has noticed. The sad fact is that we already know some of the factors that will cause a project to fail, and we know them far too well for comfort. Ken Eason wrote about the problem as early as 1988, and unfortunately several of the reasons he lists are still recognizable in many of the projects that have failed since then. In the following, I will use numbers to refer to examples from some more recent software engineering failures:
(1) Millennium – an administrative system for hospitals and other medical units (failed).
(2) Blåljus – an administrative system for the Swedish Police (failed).
(3) The move from Mellior to Cosmic – two systems for medical administration in Region Gävleborg (running, but with large problems).
(4) Ladok – a joint administrative system for academic studies, students and examiners (running, and working after many small and large problems).
(5) Nationella proven – the central administration system for the national tests in Swedish schools (withdrawn two days before the test date).
The failures in these projects point to different but related problems in the development and introduction of the systems. The list is by no means complete, but these systems display some of the well-known factors leading to failure or serious inconvenience. The failures would have been quite easy to predict from what we know about human factors and from the experience of earlier failed projects. Long reports could be written about the reasons behind each of these failures, but here I will only try to highlight some of the most evident ones.
What is the purpose?
The main document guiding the software development process is the requirements specification: a huge document that is supposed to describe the complete functionality of the system in such detail that we should be able to program the system from that base alone. This document is also normally the basis of the contract between the stakeholders in the process. If a function is not in the requirements specification, it is not supposed to be there. Adding functionality outside the requirements specification is a big no-no, just as much as missing functionality that the document does describe.
This sounds both great and solid, but there are caveats already at the beginning of the process. The first is getting an overview of the complexity of the specification. For larger systems this becomes an overwhelming task that most humans can no longer perform. There are already software tools that help with this, and I assume it is a task that can be well supported by systems based on artificial intelligence, since summarizing text is what they are already supposed to be good at. More crucial, however, is that requirements specifications, despite their complexity, are often still incomplete to some extent. What is missing? Quite simply, we often spend very little time finding out the purpose, or the goal, that the end users have in using the system. We can specify the central functionality in extreme detail, but if we don't know what the goal of using the system is, it will still not be well designed. In some cases there are also unspoken, tacit or simply missed requirements that will affect the usability of the final system.
The Goals – Not the tools
To make matters worse, most systems today do not have one single goal but many, and sometimes the goals are even contradictory. An administrative system for the health services has one very clear overall goal: to store all information about the patients in a secure, safe and still accessible manner. We may also have quite detailed requirements on security, on which items to store and how they need to be stored, and so on. But the question is: do the requirements show the purpose of storing the data? Let us take the following example:
"An X-ray image is taken of a patient's knee. If the only purpose of the X-ray is to document the treatment of the patient, it might not matter much if the image is cropped at the edges to fit the standard image size when it is saved in the journal system (1). But if the purpose is to make a diagnosis, small details around the edges might be very important. If those details are missing, then in the best case the surgeon only needs to order a retake of the image, but in the worst case the doctor might not know that the image was cropped and miss vital information for the further treatment of the patient. Clearly, the quality of the stored data matters a great deal to the health professionals using the system."
More diffuse, albeit obvious, goals of a system may not even be mentioned explicitly in the requirements. We can, for example, be sure that one of the main goals of introducing a (new) system is to make the work simpler, or at least more efficient, for the users. Thus, if a previously simple note-taking task now requires more than twenty interactions with the system, this is definitely not supporting that implicit goal (1, 2, 4). In Ladok, entering the final grade for a course now requires passing through at least five different screens, where the final step forces the examining teacher to log out of the system and then back in again. This is stated to be for "security reasons". It is difficult to understand how it can be regarded as "efficient".
Furthermore, most people today use some kind of password manager to store login identities and passwords, so that they don't have to remember the login data. With such a program activated, the user only has to press "Enter" one extra time and is logged in again. Where is the security in this extra set of button presses? And what are the users' goals and tasks in all this? Logging in one extra time is definitely not part of them.
Open the door, Richard!
To make the general discussion a bit clearer, let's take a side track via a very simple physical example that most people will recognize: the door handling mechanism. Normally this mechanism is referred to simply as "the door handle" (though there may also be a locking part). But a door handle can have many different shapes, from the round door knob to the large push bar that stretches along the whole width of the door. Which design is the best? Someone might argue that the large push bar is best, since it allows the use of both hands. Others might hold the aesthetic design to be most important, proposing the polished door knob as their favourite.
The discussion often ends in a verbal battle about who is right, and people with an HCI education behind them will commonly reply with the ID principle: "It Depends" (the principle holds that there is almost never a single true answer, but many factors that we need to weigh before settling on a design). This is of course one way to look at it, but if we consider a kitchen door, for example, a polished door knob may not be the best choice (as any chef or cook would immediately realize). A hand that has been handling different kinds of food will often be more or less covered in grease, or in soap left over after washing, which makes a door knob impossible to twist. Better then to use a regular lever handle with its Archimedean leverage (which also provides the necessary force for people with weak muscles, of course).
However, maybe we should look a bit further than the best specific design of the door handle. How often have you seen someone just standing in front of a door, only twisting or applying force to the handle? Isn't there something more involved in the action? What is the goal of using a door handle? If we think a bit further, the goal of using the door handle is most of the time to open or close the door. Right! So now we know enough? Well, no: how often have you seen someone opening and closing a door just for fun? OK, some children might think it's a good way to annoy their parents, but apart from that? What is the purpose of opening or closing a door? Of course, it is to get to the other side of the door opening, or to close it in order to stop someone or something from coming in or out. So this is in fact (very close to) the final goal of using the door handle: to get out of or into a room, or at least through the door opening. Any solution that supports the user in handling the door in a way that achieves this goal will be acceptable, and there may even be some solutions that are really good (and not just usable).
Back on track… to the rotten parts…
Now, I assume that nobody would really forget that doors have the purpose mentioned above, but for other tasks it may not be so simple. In some cases the goals of using a system are neither simple nor clear. Even worse, we might forget that the same system may have different purposes depending on the user and his or her perspective. The main purpose of a system may be one thing, but for the individual user, the main purpose of using it may be quite different depending on the user role, the assigned tasks and many other things. And here comes the big problem: most of the time we construct the system from the company or organizational perspective, where the purpose of the system is quite well specified, while the goals of its operators, the users, are much less clear. And for the user it is not enough that a function is possible to use; it has to be better than the previous system, or better than doing the task by hand (1, 2, 3).
It has to be better than the previous method…
This is where at least some of the problems behind the software development failures are to be found. Usability is important, but the system also has to conform to the reality experienced by the users; it has to make their work more enjoyable, not more stressful or complicated. Just to give a few examples from failed systems:
The regional health care organisation in Gävleborg has now replaced the old system Mellior (which was in itself not exactly a well-liked system) with a version of Cosmic (3). One would of course expect a system to be replaced with a better one. Unfortunately, the new system, and not least the transfer from the previous one, leaves a lot to be desired. Some of the problems relate to specific work tasks, whereas others affect more general aspects of use. At the child psychiatry units, it soon became clear that the system was not at all designed for their work. For safety reasons, staff are often required to work in pairs on some patients, which turned out to be impossible to administer in the new system (3). There were also no default templates for the units' specific tasks, and when asked, the staff were told that the templates would arrive about two years (!) after the new system had been introduced. Until then, notes and other information had to be handled ad hoc, using templates aimed at other units.
After some "trying and terror", more serious issues were discovered. If the wrong command (the one most of the personnel felt was the most natural) was used, the patient records became immediately visible to anyone with access to the system. Even worse, it also turned out that hidden identities were no longer… hidden. Names, personal identity numbers, addresses, telephone numbers and other sensitive data were all visible in plain sight (3). The same security and integrity problem was also found in the system for administering the national school tests (5). This happened although it would be quite natural to assume that there is a specific purpose behind keeping people's identities hidden and protected. Could it be that the specific requirements regulating the "hidden identity records" were forgotten or omitted?
Big Bang Introduction
One clear cause of system failure can be traced to the actual introduction of the system in the workplace. Ken Eason (1988) wrote about the different ways a new system can be introduced. The most common was described under the quite accurate name "Big Bang introduction", and it is still one of the most common ways we do it. At a certain date and time, the new system is started and the old one is shut down. Sometimes the old system is kept running because not all existing data has been transferred to the new one. This should not come as a surprise, since the transfer of data is often not regarded as "important".
Data transfer
When the Cosmic system was introduced (3), the data was, fortunately, not transferred automatically. Instead it had to be transferred manually, but with a certain amount of additional work. The different data records had to be "updated" with an extra tagging system before being transferred, because otherwise all the records would have been dumped together in one unordered heap of data. The unordered heap then had to be re-sorted according to the previous, existing labels (which had been in the records all along).
The patient records are, among other things, also used for communication with the patients. However, it turns out that when messages are sent to patients through Cosmic, they are not sent at all, although the system acknowledges the sending. The messages can concern anything from calls to appointments to information about lab results or therapeutic meetings. The medical personnel now have to revert to looking up the addresses manually in the old system and then sending the messages to each patient directly.
Training
I already mentioned above that one reason why a new computer system is developed is to make work more efficient. As we have also seen, the new systems are not always flawless. But even if they had been, there is the problem that the workplace may already be in an overstressed mode of working. The time needed for training in the new system is often simply not available when it is introduced. This means that the personnel either have to learn the new system in their spare time or at home after work, or do not get enough training. In some cases (3, 5) the responsibility for the training is instead handed over to the IT support groups, and this can become even worse if the time of introduction is badly chosen.
Cosmic (3) was introduced in January, and Ladok (4) in the middle of the autumn term. Other systems have been introduced during the summer, which might seem a good choice. However, December and January contain the Christmas and New Year breaks, when it is difficult to get enough personnel even to manage normal work. The same goes for the summer holidays. To imagine that it would be easy to get people to also train on a new system during those periods is of course ridiculous.
But mid-term? The introduction of Ladok (4) was for some reason scheduled exactly when the people at the student offices have the most to do, namely when the results of all courses in the first half of the term are to be reported – in Ladok – and all with very short deadlines. This is again a recipe for a bad start with a new system.
The fridge…?
If some food runs the risk of spoiling, we probably put it in the fridge, or even the freezer. But when software products run the risk of going bad, where is the fridge or freezer? Well, the first thing we have to do is clean out the rotten stuff, even before we start finding ways to preserve the new projects that will be developed. Essentially, we have to start rethinking the usability requirements on the software we produce, and also look back – not only at what has worked before, but even more at what we can learn from previous failures.
But most important is that we start working with human factors as guiding principles, and not just as "explanations for when people make fatal mistakes". We know a lot about human factors and how they shape our reactions. This post is already very long, so I will have to come back with part 3 of 2, dealing with these factors as part of the failed projects. While you are waiting, I can recommend the excellent book Human Error by James Reason (1990).
Illustrations are made by Lars Oestreicher, using MidJourney (v 6 and 7).
References
Eason, Ken (1988). Information Technology and Organisational Change. London; New York: Taylor & Francis.
Reason, James (1990). Human Error. New York: Cambridge University Press.
This week, I attended the last session in a series on valorisation at Uppsala University, titled "Do Research with Impact in Mind". This seminar series turned out to be a very inspiring event and brought lots of ideas on how research can contribute to society, not just through publications but by becoming part of real-world solutions, services, and policies. The seminar series was organized by UU Innovation, a support function at Uppsala University that offers guidance and support for researchers to explore potential ways for their research to achieve societal impact.
The seminar focused on valorisation, described as the process of translating research and knowledge into societal or economic value. This could mean anything from influencing policy and public health to developing new technologies, services, or educational approaches. Valorisation is not just about commercialization but about recognising the broader potential of research to shape the society around us. This definitely broadened my initial view of the types of research that could create societal impact.
To me, one of the most interesting points from the seminar series was the “professor’s privilege,” which means that teachers at Swedish universities own the rights to their research results. Despite the name, this applies not only to professors but to all researchers at the university and means that we have the possibility to choose if, how, and when our results might be used outside academia. That is a powerful opportunity, but also a responsibility.
The key takeaways from the seminars were to plan for impact early in the research process and that societal impact can take many shapes and forms. Overall, it was an inspiring event that made me reflect on the broader potential of research and how impact can (and perhaps should) be part of our research process from the very beginning.
During the pandemic, video consultations became a natural part of healthcare. But now that the digital appointment is no longer a necessity, a new question arises: do patients want to keep using video consultations in the longer term?
This is the core question of a study that I, Åsa Cajander, have co-authored with Irene Muli, Helena Hvitfeldt, Lovisa Jäderlund Hagstedt, Nadia Davoody, Marina Taloyan and Maria Hägglund. We examine which factors influence long-term use of video consultations in primary care – and why some patients choose to stop using them.
The majority want to continue – but not everyone
Of the 451 patients who took part in our survey, 76% said they would like more video consultations in the future. But 24% were more hesitant or entirely against the idea. What determines who wants to continue and who does not?
We found that those who want to continue are more often aged 35–54, work full time and have a positive attitude towards the technology. They also use video meetings in other contexts – for example at work – and see healthcare video consultations as a convenient solution when time is short.
What makes people stop?
Those who did not want to continue with video consultations primarily stated that they simply prefer meeting healthcare staff face to face. This was especially true of the youngest age group (16–34) – a somewhat unexpected finding, which shows that digital habits do not automatically translate into a preference for digital care.
Other reasons were that video appointments felt less personal or that it was hard to understand the healthcare staff. Interestingly, technical problems and usability were mentioned less often than expected.
A continued digital divide?
The study raises a worrying possibility: that both the youngest and the oldest groups in the population risk falling behind in the digital development of healthcare. We point out that this may reinforce the already existing digital divide. Experience, positive attitudes and a sense that the technology is voluntary seem to play a major role in whether people want to keep using video consultations.
What does this mean for the future of healthcare?
The results of our study show that offering technical solutions is not enough – we also need to understand how people actually experience them. Creating smooth, usable and personal experiences is crucial if video consultations are to become a long-term part of healthcare.
And perhaps that is exactly where the future challenge lies: finding the right balance between the possibilities of technology and people's need for closeness, understanding and choice.
When we hear the word "handover," we might think of a nurse passing on information at the end of a shift, or a car switching from self-driving mode back to the driver. If you think about it more closely, you will see that handovers are everywhere around us — and that they quietly shape how we work, collaborate, and share responsibility with both people and technologies. In a recent publication, we discuss handovers with the goal of broadening how we think about them in the field of Human-Computer Interaction (HCI) — especially now that Generative AI (GenAI) is entering many workplaces. The paper is co-authored by Ece Üreten (University of Oulu), Rebecca Cort (Uppsala University), and Torkil Clemmensen (Copenhagen Business School).
Traditionally, HCI research has looked at handovers as the moment when control of a system, like a semi-autonomous car or a VR headset, passes from one person to another or from a machine to a human. However, handovers are more than just “passing the baton.” In today’s digital workplaces where AI, automation, and human collaboration blend, handovers are crucial moments of communication, coordination, and shared understanding. Accordingly, in this publication, we argue that handovers are more than technical events and should be viewed as complex socio-technical interactions involving people, technology, organizational structures, and tasks.
With GenAI tools becoming more widespread, the rules of handovers are changing. These tools can summarize data, generate content, and even adapt to different communication styles—all of which could make handovers smoother and more reliable. But GenAI also raises new questions, such as: How should we design AI tools that understand the context of a handover? Can AI support empathy and human connection in teamwork? How do we make sure AI does not confuse, overwhelm, or mislead during critical transitions? To tackle these questions, we propose a research agenda focused on the following key dimensions:
1. Technology – What tools and formats make handovers effective?
2. Tasks – What exactly is being handed over, and under what conditions?
3. Actors – Who is involved? (It is not just humans anymore.)
4. Structure – How are handovers shaped by organizational rules and culture?
5. Cross-domain – How can we understand the distinctions of handovers across domains?
In this paper, we emphasize that handovers are not the same across domains, and we therefore call for cross-domain studies and more nuanced thinking about who (or what) is involved in these critical moments of communication, coordination, and shared understanding. The paper invites the HCI community to take handovers seriously, study them across different industries, and design future technologies with this human (and increasingly non-human) interaction in mind.
“Something is rotten in the state of … ” well, at least in Sweden, and at least when it comes to software systems. A large number of failures have added up over the years. I can understand the problems we had in the 1960s and 1970s, and I can even understand the Y2K phenomenon (which actually seems to be fading even from computer scientists’ collective memory). Before the year 2000 we didn’t realize that the software was going to be so well designed (!) that it would survive into a year when “00” was not an error message but an actual number.
However, if we go back through the very short history of computing, we find that there were a large number of failures right from the beginning – not only with computers, but with the machines connected to them. Just take the example of the first ATMs. For some reason, people suddenly seemed to become very absent-minded: they started to leave their bank cards in the machine over and over again. When the issue was investigated more thoroughly, it became clear that the order of two actions had to be changed to make the system better in this respect. Instead of “get the money first” and “retrieve the card after that”, the trick was to “retrieve the card first” and “then get the money”. As simple as that.
Now this is no longer a problem, since we have “touch” authorisation for everything we do: just touch your card, or even your phone, to a small sensor and everything is fine. But just before this became common, there was a short period when the ticket machines for car parking were exchanged for a new, safer model. Apparently it was now an EU requirement that the card should remain inserted during the whole operation (it was in fact held there by a special mechanism). Guess what happened? A large number of people became amnesiac again, leaving their cards in the slot. But this was no surprise to some of us. No, we knew this would happen. And of course there was a good reason for the problem to reoccur – THE GOAL of using THE MACHINE!
When you go to an ATM or to a ticket machine in a car park, you have a goal: to get money, or to pay the parking fee. The card we use to pay is not a central part of the task (unless the account is empty or the machine is broken); it is not as important as getting the money or getting the parking paid. We have known this since the 1970s. But today, it seems, it is not the users who suffer from bad memory; it is the software developers’ turn.
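To spell out the design principle the ATM story illustrates: the step that fulfils the user’s goal should be the last one in the interaction, because any step placed after it is easily forgotten once the user feels the task is done. A minimal, purely illustrative sketch in Python (the step names are hypothetical, not taken from any real ATM software):

```python
# Two orderings of the same ATM steps. Once the user's goal step has
# completed, any remaining step is easily forgotten, because the user
# considers the task finished and walks away.

GOAL_STEP = "dispense_cash"

old_design = ["insert_card", "authenticate", "dispense_cash", "return_card"]
new_design = ["insert_card", "authenticate", "return_card", "dispense_cash"]

def steps_at_risk(sequence: list[str]) -> list[str]:
    """Steps that still need the user's attention after the goal is met."""
    return sequence[sequence.index(GOAL_STEP) + 1:]

print(steps_at_risk(old_design))  # ['return_card'] -- the card gets left behind
print(steps_at_risk(new_design))  # [] -- nothing left to forget
```

The same check can be applied to any workflow: if a step the user must remember comes after the step that completes their goal, the design is asking for trouble.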
Today we know quite a lot about human factors. Human factors are not bad; on the contrary. We know when they can cause a problem, but we also have quite good knowledge of how to use them constructively, so that they help people do the right thing more or less by default. This does not mean that there is a natural or obvious way to do something, but if we take the human factors into consideration, we can in fact predict whether something is going to work or not.
This sounds simple, of course, but the problem is not just to change the order of two actions. It means understanding what we are good at, and also when the human factors can lead us the wrong way.
But how can we know all this? By looking back!
It is by looking back at all the bad examples that we can start to figure out what can go wrong. And the list of bad (failed) examples has not grown shorter over the years, even after the Y2K chaos (which in fact should be called the “centennium bug” rather than the “millennium bug”). In the late winter last year we had a new version of the millennium bug, or at least a similar event. Two of the major grocery chains in Sweden (and one in New Zealand) could not get their checkouts to work; the system was not working at all. Since the Swedish government has pushed “the cashless society” so hard, the effect was that a large number of customers were unable to do their shopping that day.
So, what was the problem? Well, from the millennium bug we know that years can be very tricky entities: they should have four digits, otherwise there can be problems. So far so good. However, the developers of this financial system didn’t properly consider the importance of dates in the system. It turned out, in the end, that the day the system stopped working was a day that didn’t exist – at least not in the minds of the developers when they designed the system. But every now and then, February 29 does indeed exist: almost every fourth year is a leap year, in which February gets an extra day.
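For readers who want the concrete rule: the Gregorian leap-year logic is a classic trap, because the naive “divisible by four” check is almost, but not quite, right. A minimal sketch in Python (purely illustrative; the actual code of the failed checkout system has of course not been published):

```python
from datetime import date

def is_leap_year(year: int) -> bool:
    # Gregorian rule: every fourth year is a leap year,
    # except century years, unless they are divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

for year in (1900, 2000, 2023, 2024):
    print(year, is_leap_year(year))   # False, True, False, True

# A hand-rolled date check that forgets this rule will reject a
# perfectly valid transaction date such as 2024-02-29, while the
# standard library handles it correctly:
print(date(2024, 2, 29))  # 2024-02-29
```

The safest design choice is of course not to write this logic yourself at all, but to rely on a well-tested date library.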
The question is how this kind of bug can enter the development process at all. The thing is that we can study a large number of single anecdotal accounts without drawing any wider conclusions from them. But if we instead look at the failures from a human factors perspective, there are many conclusions we can draw. Most of them are very easy to understand, but oh so difficult to actually act on, it seems. In part 2 of this post, I will dive deeper into some of the anecdotal examples and attempt to generalise the reasoning for future reference. (To be continued…)
In our research group, we study the relationships and dynamics of Human, Technology, and Organisation (HTO) to create knowledge that supports sustainable development and utilization of ICT.