Yesterday, 28 April, we marked Work Environment Day (Arbetsmiljödagen), a day that puts the spotlight on the importance of a safe, sustainable, and well-functioning work environment. What better occasion, then, for Jessica Lindblom, senior lecturer at Uppsala University, to be invited to present the AROA project at the annual meeting of the Swedish Ergonomics and Human Factors Society (EHSS), held at Occupational and Environmental Medicine (AMM) in Uppsala?
The AROA project investigates how new technologies such as robotics, automation, and AI affect our digital work engagement, and is funded by Afa Försäkring. On Work Environment Day, Jessica presented the knowledge gaps we have identified in research on digital work engagement, together with preliminary analyses of the data we have collected from the IT, agriculture, and railway sectors.
EHSS is Sweden's ergonomics society, a non-profit organization open to everyone who works with, or takes an interest in, ergonomics and human factors. The society serves as a meeting place where people from different industries and disciplines come together to exchange knowledge and experience about the interplay between humans, technology, and organization.
According to the international definition that EHSS follows, ergonomics is an interdisciplinary field of research and practice concerned with understanding and optimizing the whole of human interaction with technology and organization. The goal is to improve health and well-being as well as to enhance performance in the design of products and systems.
Through its work, EHSS offers an important forum for knowledge exchange and collaboration. Here, researchers and practitioners, engineers and behavioral scientists meet, all with a common goal: to create better and more sustainable work environments.
That the AROA project was given a place on Work Environment Day itself feels both fitting and inspiring!
When we hear the word “handover,” we might think of a nurse passing on information at the end of a shift, or of a car switching from self-driving mode back to the driver. If you look more closely, you will see that handovers are everywhere around us, quietly shaping how we work, collaborate, and share responsibility with both people and technologies. In a recent publication, we discuss handovers with the aim of broadening how we think about them in the field of Human-Computer Interaction (HCI), especially now that Generative AI (GenAI) is entering many workplaces. The paper is co-authored by Ece Üreten (University of Oulu), Rebecca Cort (Uppsala University), and Torkil Clemmensen (Copenhagen Business School).
Traditionally, HCI research has looked at handovers as the moment when control of a system, like a semi-autonomous car or a VR headset, passes from one person to another or from a machine to a human. However, handovers are more than just “passing the baton.” In today’s digital workplaces, where AI, automation, and human collaboration blend, handovers are crucial moments of communication, coordination, and shared understanding. Accordingly, in this publication we argue that handovers are more than technical events and should be viewed as complex socio-technical interactions involving people, technology, organizational structures, and tasks.
With GenAI tools becoming more widespread, the rules of handovers are changing. These tools can summarize data, generate content, and even adapt to different communication styles, all of which could make handovers smoother and more reliable. But GenAI also raises new questions: How should we design AI tools that understand the context of a handover? Can AI support empathy and human connection in teamwork? How do we make sure AI does not confuse, overwhelm, or mislead during critical transitions? To tackle these questions, we propose a research agenda focused on five key dimensions:
1. Technology – What tools and formats make handovers effective?
2. Tasks – What exactly is being handed over, and under what conditions?
3. Actors – Who is involved? (It is no longer just humans.)
4. Structure – How are handovers shaped by organizational rules and culture?
5. Cross-domain – How do handovers differ across domains?
In this paper, we emphasize that handovers are not the same across domains. We therefore call for cross-domain studies and for more nuanced thinking about who (or what) is involved in these critical moments of communication, coordination, and shared understanding. The paper invites the HCI community to take handovers seriously, to study them across different industries, and to design future technologies with this human (and increasingly non-human) interaction in mind.
“Something is rotten in the state of …” well, at least in Sweden, and at least when it comes to software systems. A large number of failures have added up over the years. I can understand the problems we had in the 1960s and the 1970s, and I can even understand the Y2K phenomenon (which actually seems to be fading even from computer scientists’ collective memory). Before the year 2000, we didn’t realise that the software was going to be so well designed (!) that it would survive into a year when “00” was not an error message but an actual number.
However, if we go back through the very short history of computing, we find that there were a large number of failures right from the beginning. Not only with the computers themselves, but also with the machines connected to them. Just take the example of the first ATMs. For some reason, people suddenly seemed to become very absent-minded. They left their bank cards in the machine over and over again. When this issue was investigated more thoroughly, it became clear that swapping the order of two actions was enough to make the system better in this respect. Instead of “get the money first” and “retrieve the card after that”, the trick was to “retrieve the card first” and “then get the money”. As simple as that, right.
Today this is no longer a problem, since we have “touch” authorisation for everything: just touch your card, or even your phone, to a small sensor and everything is fine. But just before this became common, there was a short period when the ticket machines for car parking were replaced with a new, safer model. Apparently, it was now an EU requirement that the card should remain inserted during the whole operation (it was in fact held there by a special mechanism). Guess what happened? A large number of people became amnesiac again, leaving their cards in the slot. But this was no surprise to some of us. No, we knew that this was going to happen. And of course there was a good reason for this problem to recur: THE GOAL of using THE MACHINE!
When you go to an ATM or to a ticket machine in a car park, you have a goal: to get money, or to pay the parking fee. The card we use to pay is not a central part of the task (unless the account is empty or the machine is broken); it is not as important as getting the money or getting the parking paid. We have KNOWN this since the 1970s. But today, it seems, it is not the users who suffer from bad memory; it is the software developers’ turn.
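To make the principle concrete, here is a minimal sketch of the two dialogue orderings. It is written in Python with hypothetical names (no real ATM exposes an interface like this); the only point is which step comes last:

```python
# A minimal sketch of the two ATM dialogue orderings discussed above.
# The Atm class and its methods are hypothetical, purely for illustration.

class Atm:
    def dispense_cash(self) -> None:
        print("Cash dispensed - the user's goal is now fulfilled")

    def return_card(self) -> None:
        print("Card returned")

def error_prone_flow(atm: Atm) -> None:
    # Goal-fulfilling step first: with the cash in hand, the user
    # considers the task done and may walk away, forgetting the card.
    atm.dispense_cash()
    atm.return_card()  # a post-completion step, easily forgotten

def human_factors_flow(atm: Atm) -> None:
    # Subsidiary step first: the user cannot reach the goal (the cash)
    # without first taking the card back, so the card is not left behind.
    atm.return_card()
    atm.dispense_cash()
```

The machine executes the same two steps either way; the difference is that the second ordering makes reaching the user's goal depend on the very step they would otherwise forget.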
Today we know quite a lot about human factors. Human factors are not bad; on the contrary. We know when they can cause a problem, but we also know quite a lot about how to use human factors constructively, so that they help people do the right thing more or less by default. This does not mean that there is a natural or obvious way to do something, but if we take the human factors into consideration, we can in fact predict whether something is going to work or not.
This sounds simple, of course, but the solution is not always just to change the order of two actions. It requires understanding what we are good at, and also when human factors can lead us the wrong way.
But how can we know all this? By looking back!
It is by looking back at all the bad examples that we can start to figure out what can go wrong. And the list of bad (failed) examples has not grown shorter over the years, even after the Y2K chaos (which in fact should be called the “centennium bug” rather than the “millennium bug”). In the late winter of last year, we had a new version of the millennium bug (or at least a similar event). Two of the major grocery chains in Sweden (and one in New Zealand) could not get their checkout systems to work. The systems were not working at all. And since the Swedish government has pushed “the cashless society” so hard, a large number of customers were unable to do their shopping that day.
So, what was the problem? Well, from the millennium bug we know that years can be very tricky entities. They should have four digits, otherwise there can be problems. So far so good! However, the developers of this financial system didn’t properly assess the importance of years in the system. It turned out, in the end, that the day when the system stopped working was a day that didn’t exist. At least, it did not exist in the minds of the developers when they designed the system. But now and then, February 29 does indeed exist. Almost every fourth year is a leap year, in which February has an extra day attached.
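As a reminder of how easy this is to get wrong, here is a small sketch in Python. It is hypothetical, not the actual code behind the incident, but it shows how a hardcoded table of month lengths quietly turns February 29 into a day that “doesn’t exist”, and what the full Gregorian rule looks like:

```python
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def naive_is_valid_date(year: int, month: int, day: int) -> bool:
    # Hardcoded month lengths: February always has 28 days here,
    # so 29 February is rejected as "a day that doesn't exist".
    return 1 <= month <= 12 and 1 <= day <= DAYS_IN_MONTH[month - 1]

def is_leap_year(year: int) -> bool:
    # The full Gregorian rule: every fourth year is a leap year,
    # except century years, unless they are divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_valid_date(year: int, month: int, day: int) -> bool:
    if not 1 <= month <= 12:
        return False
    days = DAYS_IN_MONTH[month - 1]
    if month == 2 and is_leap_year(year):
        days = 29
    return 1 <= day <= days

assert not naive_is_valid_date(2024, 2, 29)  # the failure mode: a real day rejected
assert is_valid_date(2024, 2, 29)            # 2024 was a leap year
assert not is_valid_date(1900, 2, 29)        # 1900 was not (the century rule)
```

The naive version works flawlessly for up to four years at a stretch, which is exactly why such a bug can survive testing and deployment until the calendar catches up with it.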
The question is how these kinds of bugs can enter the development process in the first place. The thing is that we can study a large number of single anecdotal accounts without drawing any wider conclusions from the examples. But if we instead look at the failures from a human factors perspective, there are many conclusions we can draw. Most of these are very easy to understand, but oh, how difficult they seem to be to actually act on. In part 2 of this post, I will dive deeper into some of the anecdotal examples and make an attempt to generalise the reasoning for future reference. (To be continued…)
In our research group, we study the relationships and dynamics of Human, Technology, and Organisation (HTO) to create knowledge that supports sustainable development and utilization of ICT.