Author: Andreas Bergqvist

HTO Coverage: AHFE 2024 and a Nice Conference Presentation

Jonathan Källbäcker and I attended the Applied Human Factors and Ergonomics (AHFE) 2024 conference in Nice, France during the summer. The tasty seafood and baked goods found in the region aside, we were there to present a paper from the AROA project. The conference was held at Campus St Jean at Université Côte d'Azur over four days and consisted of about 150 sessions of five to seven presentations each, split across 42 tracks, along with about 100 posters. Since it was not possible to attend all the sessions (I only got around to 12 of them), I chose to prioritise sessions related to work and AI.

Nice, France

This blog post consists of my pick of three highlights of the conference in the order their sessions occurred, some issues with the conference format and how I would try to work around them if I organised a conference in the future, and a summary of the paper we presented at the conference. With a conference of this size, this will not be much more than a snapshot of it.

Highlights as an Attendee

During the session on Human factors in game design, Andrade (2024) presented on the placement of buttons in video game ads and the consequences of placing buttons outside of a handheld device's functional area. The functional area is often discussed as the space where buttons should be placed on handheld touch screens to make sure that they are reachable, but the design literature I have used in teaching rarely digs into the actual consequences of, and reasons for, placing buttons outside of it. The presentation discussed how this deliberate design choice makes the button hard to reach, which increases the time needed to press it and thus the duration of the ad that is viewed. It also causes considerable strain, and ads in free mobile games are common and often require multiple button presses. While the presentation was not related to AI or work, it was a highlight of the conference: it was interesting and well presented, while contextualising and giving very practical and clear implications for a heavily discussed but perhaps often-overlooked design principle.

The title of the session Individualization of services using Generative AI was a bit misleading, as most presentations in the session were about the future of technology in some way. The session was packed with interesting topics: Grosch (2024) and Haase (2024) presented on future skills, Stübinger (2024) discussed how students would use generative AI in their work processes, Kröckel (2024) presented a literature review on how generative AI can redefine emergency services, and Nhi et al. (2024) looked at awareness of the environmental impact of video streaming and willingness to reduce consumption. While the session overall gave a promising view of our adaptation to the future, the presentation by Nhi et al. (2024) was one of the strongest presentations I attended: it conveyed its research gap and takeaway points very well, had a clear and visually interesting set of slides, and managed to portray the research in a way that anyone could understand.

Left: Stübinger (2024), Middle: Maibaum et al. (2024), Right: Nhi et al. (2024)

The last highlight was the session Digital Dynamics in the Workplace, Exploring AI Integration, Flexible Work Models, and Participatory Design. The presenters were mainly from ifaa (which apparently translates to the Institute of Applied Work Science) in Germany, and they were doing a lot of interesting research on work design and designing for work processes, which turned out to be an interesting angle of approach compared to the workplace studies we are doing in my research project. The presentations covered topics such as the needs of workers in public administration (Maibaum et al. 2024), socio-technical success factors for AI-based knowledge management (Reyes et al. 2024), and work design in chipping production (Weber et al. 2024).

Conference Format Issues and Possible Workarounds

The main issues we experienced with the conference were due to the hybrid format and the scheduling. The conference was hybrid, and many presenters were presenting and attending remotely. This was done through a video conference tool which a volunteer moderated, for example by allowing participants to screen-share. We watched the keynote remotely on our way to the conference, but the conference call only had the slides being shared, not the audio, so we did not hear what was said. Multiple attendees wrote about this in the chat, but no one took note of it. When we arrived at the conference, we noticed the same issues taking place. The audio from the conference room was often not turned on in the conference call, and when it was, the microphones did not work well with the presentation setup and often cut out. Attendees wrote about it in the video call chat during those sessions as well, but the volunteers were not tasked with keeping up with the chat and often left the room during the presentations, which left them unaware of the issues. Using proper microphones suited to this kind of use, having the volunteers stay active in the video call chat, and preparing a cheat sheet for the volunteers on how to fix audio issues would have solved a lot of the issues with the hybrid format and improved participants' experience of the conference.

The conference program

The other issue was that the scheduling of the sessions was at times uneven, and the lack of breaks between the afternoon sessions caused sessions to run into the time slot of the following session. Each day was split into four time slots for sessions. The third time slot each day was only 60 minutes, while the rest were 90 minutes. Despite this, multiple sessions in the third time slot had six or seven presentations of 10 minutes each, while some of the 90-minute sessions only had four or five. Either prolonging the 60-minute slot to 90 minutes or adjusting the number of presentations in those slots to fit within the time allotted would have reduced the number of clashes and delays. The issue was worsened further by the lack of a break between the third and fourth time slots, which made the overruns affect the fourth slot even more. Adding a short break between those sessions would have reduced the number of clashes and allowed people time to move from one session to another.

We Presented a Paper

As you might be aware, we are working on a project on digital work engagement, in which we study the impacts of automation, robotisation, and AI on work engagement in different domains, with the aim of synthesising a framework for working towards digital work engagement. At the conference, we presented our initial findings from a workshop with our reference group regarding enabling technologies and work engagement (Bergqvist et al. 2024). The members of the reference group discussed challenges and opportunities with enabling technologies within their domains, as well as challenges and concerns regarding work engagement. One of the main takeaways was that while the potential financial impacts of automation, robotisation, and AI in the workplace are many and easy to imagine, we cannot forget about the concerns regarding sustainability, inequality, fear of lacking competence, and job displacement that exist in the workforce. We need to continue to look into strategies to prevent discrimination induced and enforced by technology, and to continue to study the societal impacts that these technologies bring with them into the workplace, society, and humanity.

We are also happy to say, if you have not heard about it yet, that we received the Best Paper Award in the category Challenges with AI at the Human Level.

The project is financially supported by Afa Försäkringar.

References

Andrade, W. M. (2024). Designing Mobile Game Input Unreachability: Risks When Placing Items Out of the Functional Area. Human Factors in Virtual Environments and Game Design, 126.
Bergqvist, A., Källbäcker, J., Cort, R., Cajander, Å., & Lindblom, J. (2024). Towards a framework for digital work engagement of enabling technologies. Artificial Intelligence and Social Computing, 257.
Grosch, C. (2024). Developing Future Skills through a Sequential Module Structure and Practical Orientation: A Case Study of the Bachelor Program in Applied Digital Transformation. Health Informatics and Biomedical Engineering Applications, 185.
Haase, S. (2024). Future Skills and (Generative) AI – New Era, New Competencies?. Health Informatics and Biomedical Engineering Applications, 178.
Kröckel, P. (2024). Redefining Emergency Services with Generative AI: Insights from a preliminary literature review. The Human Side of Service Engineering, 143.
Maibaum, M., Weber, M. A., & Stowasser, S. (2024). Participatory Approaches to Design Work in the Context of Digital Transformation: An Analysis of the Needs of Employees in Public Administrations. Human Factors and Systems Interaction, 85.
Nhi, D. T. T., Chuloy, M., & Glomann, L. (2024). Environmental Impact of Video Streaming from Users’ Perspectives. Health Informatics and Biomedical Engineering Applications, 192.
Reyes, C. C., Ottersböck, N., Prange, C., Discher, A., Peters, S., & Dander, H. (2024). Technical and Socio-Technical Success Factors of AI-Based Knowledge Management Projects. Human Factors and Systems Interaction, 154.
Stübinger, J. (2024). Beyond Traditional Boundaries: The Impact of Generative Artificial Intelligence on Higher Education. Health Informatics and Biomedical Engineering Applications, 160.
Weber, J., Weber, M. A., & Stowasser, S. (2024). Work design in production: Foundations and recommendations for the implementation of mobile, time-flexible work design in chipping production. Human Factors and Systems Interaction, 59.

Summary of the AROA Project Reference Group Meeting, October 2024

Work is in full swing in the AROA project, which is about work engagement in the context of automation, robotisation, and artificial intelligence (AI) in three distinct sectors of working life: IT, rail, and agriculture. AROA is funded by Afa Försäkringar, and one part of carrying out the project is to organise regular meetings with the reference group. A few weeks ago, it was once again time for a check-in and exchange of experiences between project participants and members of the reference group. This time, the meeting was held digitally over a morning.

There were cheers when we opened by sharing that ideas and experiences from our first reference group meeting contributed to our recognised conference paper at the AHFE conference in Nice, France earlier this summer. There, we received the Best Paper Award for the publication "Towards a framework for digital work engagement of enabling technologies".


The next item on the agenda was an update on the ongoing literature review on digital work engagement. The preliminary results indicate that many studies have been conducted in China and Germany, but none explicitly in a Swedish context. Likewise, the work contexts and types of technology varied greatly. The IT sector was represented, but neither the rail nor the agricultural sector was. Some observations so far are that support, autonomy, and meaningfulness are recurring resources, and that demands can be experienced as either challenging or hindering. Naturally, what is considered a resource or a demand differs between different kinds of work. It is worth noting that few studies mention the consequences of engagement in the context of automation or digitalisation. We then had a discussion in which the participants reflected on how the tentative results manifested in their respective areas.

Next, more specific results and tentative findings from the three sectors were presented. Starting with the IT sector, two master's theses carried out within the project were presented. The first examined how AI tools affect IT professionals' sense of flow, based on analyses of just over ten interviews. The conclusion was that AI tools can ease parts of the workflow and support repetitive tasks, but that they are not associated with achieving flow. The second thesis examined the use and effects of AI tools on techno-engagement among UX designers, based on analyses of just under ten interviews. The conclusion was that AI tools can potentially improve techno-engagement, provided that they align with the UX designers' values and workflows. The role of AI in creative industries is growing, which makes it important to explore how the tools can best support digital work engagement and productivity. Then, results were presented from an extensive interview study with professionals about AI in the IT industry, focusing on tools and changing work dynamics. The results show, for example, that many different AI tools are used, that various motivations are given for using them, and that there are several opportunities and challenges connected to their use. Interestingly, no one felt that they had less to do even though the AI performed some of their tasks. One conclusion is that the balance between benefiting from AI and managing its challenges will be decisive for the future.

In the rail sector, interviews with train drivers and traffic planners are in full swing. Some tentative findings so far indicate that the core of the work for traffic planners is creative problem solving, which they experience as highly engaging. The core of the work for train drivers is driving the train; they see the automating technology as a support that makes them more efficient and do not feel that it takes over what makes the job engaging. A picture is emerging of both occupational groups being strongly tied to time, feeling a great sense of responsibility, and having a holistic perspective on rail traffic. It is important that technical implementations do not undermine these values. More interviews are currently being conducted, and we hope to be able to present more concrete findings during spring 2025.


In the agricultural sector, the tentative findings from visits to farms with milking robots show a strong sense of care for the cows, the milking robots, and the employees. Work engagement seems to be driven mainly by a combination of meaningfulness and a passion for farming the land and caring for the cows, together with independence and freedom in carrying out the work on the farm. Advantages of milking robots include the increased automation of physically demanding tasks, better data collection and analysis, and increased precision, independence, and animal welfare. Disadvantages mentioned include technical problems, the robot's limited flexibility, and the difficulty of collecting and analysing data smoothly and easily from the farm's different IT systems. Digital work engagement is expressed on two levels. Generally, when the farmers experience better control in following up on their animals and the milk production; some mention that they are spurred on by the figures showing how much milk the cows produce each day. And more concretely, when the technology reinforces a positive experience "here and now" that promotes digital work engagement. For example, the screen can show the milk flow in an individual teat in real time, which is much harder to perceive without the support of the technology. AI and new technologies are developing rapidly in agriculture, but uptake is slower due to high investment costs. The need for dual competencies is also highlighted: employees need both a good 'animal eye' to read the cows and a good 'robot eye' to handle the milking robot and the other IT systems, which altogether makes recruitment to the agricultural sector more difficult. During late autumn and winter, we will continue data collection through interviews with more farmers and also include crop production, where there are many technologies for precision farming, although they are not yet used to the same extent as milking robots.

The day ended with a discussion about changes in work dynamics, that is, who does what in carrying out the work tasks (the technology or the human) in each sector, and their impact on digital work engagement.

It is always inspiring and rewarding to discuss our interim results with representatives from industry and other researchers, and to gain insight into their everyday work. We look forward to the next meeting in spring 2025, which will take place on site here at Uppsala University.

Jessica Lindblom and Andreas Bergqvist

HTO Coverage: AI for Humanity and Society 2023 and human values

In mid-November in Malmö, WASP-HS held its annual conference on how AI affects our lives as it becomes more and more entwined in our society. The conference consisted of three keynotes and panels on the topics of criticality, norms, and interdisciplinarity. This blog post will recap the conference based on my takeaways regarding how AI affects us and our lives. As a single post, it will be too short to capture everything that was said during the conference, but that was never the intention anyway. If you do not want to read through the whole thing, my main takeaway was that we should not rely on the past, through statistics and data with their biases, to solve the problems of AI. Instead, when facing future technology, we should consider what human values we want to protect and how that technology can be designed to support and empower those values.

The first keynote, on criticality, was given by Shannon Vallor. It discussed metaphors for AI: she argued for the metaphor of the mirror instead of the myth that the media might portray AI as. We know a lot about how AI works; it is not a mystery. AI is technology that reflects our values and what we put into it. When we look at it and see it as humane, it is because we are looking for our reflection. We are looking for ourselves to be embodied in it, and anything it does is built on the distortion of our data. Data that is biased and flawed, mirroring our systematic problems in society and its lack of representation. While this might give off an image of intelligence or empathy, that is just what it is: an image. There is no intelligence or empathy there, only the prediction of what would appear empathetic or intelligent. Vallor likened us to Narcissus, caught in the reflection of ourselves that we have so imperfectly built into the machine. Any algorithm or machine learning model will be more biased than the data it is built on, as it draws towards the norms of the biases in that data. We should sort out what our human morals are and take biases into account in any data we use. She is apparently releasing a book on the topic of the AI metaphor, and I am at least curious to read it after hearing her keynote. Two of the points that Vallor ended on were that people on the internet usually have a lot to say about AI while knowing very little, and that we need new educational programmes that teach what is human-centred so that it does not get lost amid the tech being pushed.

The panel on criticality was held between Airi Lampinen, Amanda Lagerkvist, and Michael Strange. Some of the points raised were that we should not rush technology, that the reductionistic view held by much of the industry will miss the larger societal problems, that novelty is a risk, that we should worry about what boxes we are put into, and that we should ask what human values we want to preserve from technology. Creating new technology just because we can is not the real reason; it is always done for someone. Who would we rather it was for? Society and humanity, perhaps? The panellists argued that without interventions it would be under the control of market forces, and that stupid choices are made because they looked good at the time.

The second keynote, on norms and values, was by Sofia Ranchordas, who discussed the clash between administrative law, which is about protecting individuals from the state, and digitalisation and automation, which build on statistics that hide individuals by categorising them into data and groups, and the need to rehumanise its regulation. Digitalisation is designed for the tech-savvy man and not the average citizen. But it is not even the average citizen who needs these functions of society the most; it is the extremes and outliers, and they are even further from being tech-savvy men. We need to account for these extremes through human discretion, empathy, vulnerability, and forgiveness. Decision-making systems can be fallible, but most people will not have the insight to see it. She ended by saying that we need to make sure that technology does not increase the asymmetries of society.

The panel that followed consisted of Martin Berg, Katja De Vries, Katie Winkle, and Martin Ebers. The panellists' presentations raised topics such as why people think AI is sentient and fall into the trap of anthropomorphism; that statistics cannot solve AI, as they are built on different epistemologies, and that those who push it want algorithmic bias because they are the winners of the digital market; the practical implications of the limited range of robots available on the market for use in research; and the issues in how we assess risk in regulation. The following discussion touched on how law is more important than ethical guidelines for protecting basic rights, and on the tension that it is both too early to regulate and yet we do not want technology to cause actual problems before we can regulate it. A big issue is also the question of whether we are regulating the rule-based systems we have today or the technological future of AI. It is also important to remember that not all research and implementation of AI is problematic, as a lot of research into robotics and automation is for a better future.

The final keynote, by Sarah Cook, was about the interdisciplinary junction between AI and art. It brought up many examples of projects at this intersection, such as Ben Hamm's Catflap, ImageNet Roulette, Ian Cheng's Bad Corgi, and Robots in Distress, to highlight a few. One of the main points in the keynote was shown through Matthew Biederman's A Generative Adversarial Network: generative AI is ever reliant on human input data, as it implodes when endlessly fed its own data.

The final panel was between Irina Shklovski, Barry Brown, Anne Kaun, and Kivanç Tatar. The discussion raised topics such as questioning the need for disciplinarity and how to deal with conflicting epistemologies, the failures of interdisciplinarity as different disciplines rarely acknowledge each other, how to deal with different expectations of contributions and methods when different fields meet, and how interdisciplinary work often ends up being social scientific. A lot of, if not most, work in HCI tends to be interdisciplinary in some regard. As an example from the author: Uppsala University has two different HCI research groups, one at the faculty of natural sciences and one at the faculty of social sciences, and neither fits in perfectly. The final discussion was on the complexities of dealing with interdisciplinarity as a PhD student. It was interesting and thought-provoking, as a PhD student in an interdisciplinary field, to hear the panellists and audience members bring up their personal experiences of such problems. I might get back to the topic in a couple of years when my studies draw to a close, to pay the favour forward and tell others about my experiences so that they can learn from them as well.

Overall, it was an interesting conference, highlighting the importance of not forgetting what we value in humanity and which human values we want to protect in today's digital and automated transformation.

HTO Coverage: Led by Machines and differing perspectives

By 2030, 75% of companies in the EU should use AI, big data, or the cloud. This is one of the targets that the European Commission has declared in its Digital Decade policy programme (European Commission 2023). While a lot of research has been and is currently being performed to study AI and work, there is a noticeable gap in the research on the effects of AI and automation on the working environment and working conditions (Cajander et al. 2022). Our work in the TARA and AROA projects aims to help bridge this gap, but we are not the only ones currently working to do so. This blog post will act as HTO coverage1 of one such initiative.

Last week, I attended the conference Led by Machines in Stockholm. The conference was the launch of a new international research initiative to study how algorithmic management affects the nature of work and workers' experiences. The main focus of the conference was a set of keynotes and panels that covered the need for research on the topic and previous work that had already been done. But my main takeaway was that it highlighted the different perspectives in play in this domain. The conference brought together trade unionists, policymakers, and researchers from different fields, which meant that the implications were discussed from macro, meso, and micro perspectives. Coming from human-computer interaction and user-centred design, I am used to studying the micro level of how individuals use and are affected by technologies. In contrast, most of those I spoke to at the conference worked at the macro level, e.g. a political science researcher who discussed how policy and regulation around technology are decided on, and an economics researcher who studied the impact of AI on changes in the labour market. Others worked on topics at a meso level, e.g. a trade unionist who discussed the effects on social relations and the role of middle management in organisations. The swift adoption of new technology that we stand before can have unforeseen consequences across these different levels. As such, it is great that researchers and other stakeholders interested in questions and problems at different levels can come together and work toward a better understanding of this knowledge gap.

1) The Swedish word “omvärldsbevakning” (lit. “monitoring of the surrounding world”) is often translated as environmental scanning or business intelligence. Both alternative translations come with connotations and implications that do not align clearly with research in general or with the topic at hand. In case this becomes a recurring series of blog posts, I instead refer to it as HTO coverage, as it will provide coverage of topics related to Human, Technology, and Organisation that occur outside of our research group.

References:

Cajander, Å., Sandblad, B., Magdalena, S., & Elena, R. (2022). Artificial intelligence, robotisation and the work environment. Swedish Agency for Work Environment Expertise. Retrieved September 29th from https://sawee.se/publications/artificial-intelligence-robotisation-and-the-work-environment/

European Commission. (2023). Europe’s Digital Decade: digital targets for 2030. Retrieved September 29th from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/europes-digital-decade-digital-targets-2030_sv

The author has no affiliation with the organisations that organised the Led by Machines conference.