
Post-Doctoral Opportunity in Exciting New EDU-AI Project

We are thrilled to announce the commencement of a new research project, “Adapting Computing Education for an AI-Driven Future: Empirical Insights and Guidelines for Integrating Generative AI into Curriculum and Practice” (The EDU-AI project), starting April 2024. This venture explores the transformative impact of generative AI technologies, such as GPT-4 and automated code generators, on the IT industry, computing education, and professional skills development.

The project, significant for the Department of Information Technology, aligns with our commitment to addressing challenges and capitalizing on opportunities presented by generative AI in education. Spearheaded by Åsa Cajander and Mats Daniels, the project will be conducted over two years (April 2024 – March 2026), potentially extending into a third year.

Project Overview

The EDU-AI project comprises four work packages, each targeting a unique aspect of the generative AI influence in IT and education:

  1. Understanding Generative AI in the Professional IT Landscape: Investigating the use of generative AI among IT professionals.
  2. Generative AI in Computing Education: Student Perspectives: Examining students’ interaction with and perception of generative AI.
  3. Faculty Adaptation and Teaching Strategies for Generative AI: Assessing how faculty integrate generative AI into their teaching methods.
  4. Synthesis and Recommendations for Competence Development in Computing Education: Creating actionable recommendations based on findings from the first three stages.

The project will collaborate with Auckland University of Technology, Eastern Institute of Technology, and Robert Gordon University, Aberdeen, bringing cross-cultural expertise and perspectives.

Benefits and Application Process

This position offers the opportunity to be at the forefront of AI integration in education, work with leading experts, and publish in top journals and conferences.

Interested candidates should submit their application, including a CV, cover letter, and relevant publications. For more information see: https://www.jobb.uu.se/details/?positionId=676633

Last application date: 2023-12-18

For more information about the project and the role, please refer to the detailed project description or contact Åsa Cajander or Mats Daniels directly.

Project Update: SysTemutvecklingsmetodeR för dIgital Arbetsmiljö (STRIA, System Development Methods for the Digital Work Environment)

After several years of dedicated research and development, the SysTemutvecklingsmetodeR för dIgital Arbetsmiljö (STRIA) project is drawing to a close. Led by Professor Åsa Cajander, working with Dr Magdalena Stadin and Professor Marta Larusdottir, this project has been a pioneering effort to address the critical issue of digital workplace health and usability in IT systems. The project was funded by AFA.

The Problem
In today’s fast-paced digital landscape, many IT systems fail to support efficient work processes, ultimately contributing to health issues within organizations. Research has highlighted a lack of focus on workplace health in current system development practices. There’s also a shortage of practical methods for incorporating a workplace health perspective into digitalization efforts.

The Mission
The STRIA project aimed to collaborate with IT developers to create effective and practical methods for designing sustainable digital work environments. This endeavor included promoting these methods, developing educational materials, and advocating for their adoption.

The Three Focus Methods
The project focused on three key methodologies:

Contextual Think Aloud Method: This method involves users verbalizing their thought processes while interacting with software in their ordinary work context, enabling evaluators to gain insights into user thinking.
Vision Seminars: A structured series of workshop sessions in which users and other stakeholders jointly develop visions of future work situations and the IT support needed to realise them.
Contextual Personas Method: Building on the persona concept introduced by Cooper (2004), this method creates hypothetical archetypes of real users with an explicit focus on their digital work environment, allowing for more targeted and empathetic system design.

Project Phases
The project followed a structured plan, as outlined in Figure 1, which included:

  1. Understanding the Digital Workplace: Assessing challenges related to different IT systems and digital workplaces in healthcare and administrative settings.
  2. Developing System Development Methods: Crafting new methods for system development based on insights from previous phases.
  3. Creating Educational Materials: Developing materials to teach developers how to apply these methods effectively.
  4. Evaluation and Refinement: Testing and refining the methods with IT developers and gathering feedback.
  5. Dissemination of Results: Publishing research findings, articles, and blog posts to share the knowledge with the wider community.

Conclusion
As the STRIA project concludes, it leaves a legacy of knowledge, recommendations, and methodologies for assessing digital workplace aspects. The project’s findings have been shared through academic publications, industry-focused journals, conferences, blogs, and educational programs. Stay tuned for the final report and further updates on this important work.

New publication about the Human Contribution in Handling Unanticipated Events at Work

Railway tracks with a clearly visible overhead wire provide trains with electricity.

One early morning, a freight train got caught in the overhead electrical wire, causing a large traffic disruption which affected all train traffic in the area and resulted in delays and cancelled trains for almost 24 hours. This is what one of our most recent publications is about: the incident, the effects it caused on the traffic flow, and more specifically, how the situation was handled and solved from within the traffic control room.

The train traffic system, like most infrastructures in society, plays an important role in everyday life by facilitating a continuous flow of people and goods. What is unknown to many is the very large and complex organisation of work that lies behind a functioning train traffic system. In the Swedish context, the organisation of train traffic involves numerous stakeholders, and the traffic controllers are among the main actors. Although railway traffic control is much less studied than air traffic control, the tasks and responsibilities are very similar, as are the challenges. These challenges are very much characterised by the fact that the control task is done remotely from a centralised control room, and that the traffic controllers are dependent on train drivers and others situated along the railway to act as the ‘eyes and ears’ of the control room. These people, together with advanced technologies, make it possible for the control room to stretch out and reach through time and space, making coordination the core task of traffic control.

The publication reports on a unique case study in which an unexpected real-time incident is described and analysed as the situation unfolds. Most reports on accidents and incidents are produced in retrospect, but not this case study, as I happened to be present in the control room at the particular time when the incident took place. Accordingly, this paper provides novel insights into how the incident was handled, and with the use of participant observations and informal interviews, a rich understanding of the work practices was captured. The analysis resulted in a detailed description of the work in the control room, which can be divided into three phases: grasping what has happened and the severity of the incident, handling the incident and the immediate effects it had on the traffic, and finally mitigating the long-term consequences of the incident, as these affected the traffic for almost 24 hours.

The unfolding of the incident repeatedly revealed that the workers had to cope with challenges related to time and space. As a way to describe and understand this aspect of the work, we turned to a concept originally used in the agricultural, landscape, and geographical domains, namely the concept of ‘sense of place’. The concept describes a certain meaning in, and relationship to, places that goes beyond the merely spatial. A place is thus conceptualised as a centre of cognitive, affective, or attitudinal meaning. Although the concept has not previously been applied to control room research, this study shows that ‘sense of place’ is something the workers actively strive to develop and that it supports them in handling the situation, even though they were 150 kilometres away from the situation they were to handle. In future work, we aim to continue exploring how, and to what extent, the ‘sense of place’ concept can aid a deepened understanding of control room work.

For those interested in the details, you can find the full paper here.

Reference

Cort, R. & Lindblom, J. (2023). Sensing the Breakdown: Managing Complexity at the Railway. Culture and Organization. DOI: 10.1080/14759551.2023.2266857.

Frontiers in Education Conference 2023

The education conference Frontiers in Education (FIE) 2023 was held in College Station, Texas. It is quite a large conference, and there were many tracks during the three days that the conference was on. College Station is situated between Dallas and Houston, and it is, well, let’s say an interesting city, adjoining another, apparently somewhat older, city, Bryan, in the close vicinity. There was not that much time for sight-seeing, so it was mainly the road from the hotel to the conference centre that became the major view during the conference.

The conference offered a large number of very interesting presentations, and I did in fact not sit through any bad or boring ones. Before the main conference there was also one day of workshops, as usual so many that it was difficult to choose one of them. I attended one on inclusive mentoring, which was very inspiring for me as a supervisor/mentor in general. I am of course very happy to find that there were quite a few presentations, workshops and special sessions that dealt with the issue of inclusion of students on various levels.

The special session on “Disabled Students in Engineering” was held by four Ph.D. students and was very well prepared, providing lots of inspiration for teaching. The organizers also shared very good working material, which can be reused, e.g., in course seminars (I have just started a 15-credit course on non-excluding design).

All in all, the conference felt well worth the effort and time spent. It is always a good feeling when you return home inspired and just long to put all the experiences to work in your own teaching. I have already added several new ideas to the course, and I think that this will improve the course a lot.

Still, maybe the most inspiring part of the conference was the (positive and constructive) critique I received on my presentation and paper, “New Perspectives on Education and Examination in the Age of Artificial Intelligence,” which I almost wanted to retitle “Old perspectives…”, since it looks back at older forms of examination where there was a closer connection between teacher and student. This closer connection, and the way it is achieved, makes it more difficult for the student to “cheat” or use AI chatbots.

An old Greek teacher and his student, probably discussing a difficult problem during the examination.

This post is already long enough, so I will not present the paper in any more detail here, but should you want a copy of the paper, please contact me by email. You are also free to comment on/criticize this post in the comment section below.

It’s AI, but Maybe not What you Think!

Note: This is a long article, written from a very personal take on Artificial Intelligence.

The current hype word seems to be “Artificial Intelligence” or, in its short form, “AI”. If one is to believe what people say, AI is now everywhere, threatening everything from artists to industrial workers. There is even the (in)famous letter, written by some “experts” in the field, calling for an immediate halt to the development of new and better AI systems. But nothing really happened after that, and now the DANGER is apparently hovering over us all. Or is it?

Hint: Yes, it is, but also not in the way we might think!

The term “Artificial Intelligence” has recently been watered down in media and advertisements, to the point that the words hardly mean anything anymore. Despite this, the common ideas seem to be that we should 1) either be very, very afraid, or 2) hurriedly adapt to the new technology (AI) as fast as possible. But why should we be afraid at all, and of what? When asked, people often reply that AI will replace everybody at work, or that evil AI will take over everything from governments to the world as a whole. The latter is of course also a common theme for science fiction books and movies. Still, neither of these is really a good reason to fear the current development. But in order to understand why this is so, we need to go back to the historical roots of Artificial Intelligence.

What do we Mean by AI, then?

Artificial Intelligence started as a discipline in 1956, during a workshop at Dartmouth College, USA. As the discipline developed, a distinction formed between two directions: strong and weak AI. Strong AI aims at replicating a human type of intelligence, whereas weak AI aims at developing computational methods or algorithms that make use of ideas gained from human intelligence (often for specific areas of computation). Neural networks are, for example, representative of the weak AI direction. Today, strong AI is also referred to as AGI (Artificial General Intelligence), meaning a non-specialized artificial agent.

But in the 1950s and 1960s computers were neither as fast nor as large as current computers, which at the time imposed severe limitations on what you were able to do within the field. A large amount of work was theoretical, but there were some interesting implementations, such as Eliza, SHRDLU and not least Conceptual Dependencies (I have chosen these three examples carefully, since each of them exhibits some interesting properties with respect to AI; I will explain these in the following and then return to the introduction):

Conceptual Dependencies: Conceptual Dependencies is an example of a very successful implementation of an artificial system with a very interesting take on knowledge representation. The system was written in the programming language LISP and attempted to extract the essential knowledge hidden in texts. The result was a conceptual dependency network, which could then be used successfully to summarize news articles on certain topics (the examples were taken from natural disasters and airplane hijackings). There were also attempts to make the system produce small (children’s) stories. All in all, the problem was that the computers of the time were too small for the system to be practically useful.

SHRDLU: SHRDLU was a virtual system in which a small simulated robot manipulated geometric shapes in a 3D modelling world. It was able to reason about the different possible or impossible moves, for example that it is not possible to put a cube on top of a pyramid, but it is OK to do the reverse. The problem with SHRDLU was that there were some bugs in the representation and the reasoning, which led to criticism that the examples shown were most likely preselected and did not display any general reasoning capabilities.

Eliza: The early chatbot Eliza is probably best known as the “Computer Psychologist”. It was able to keep up a conversation with a human for some time, pretending to be a psychologist, and it did this well enough to actually convince some people that there was a real psychologist behind the screen. “But, hold it!” someone may say here, “Eliza was not a real artificial intelligence! It was a complete fake!” And yes, you would be perfectly right. Eliza was a fraud, not in the sense that it wasn’t a computer program, but in that it faked its “understanding” of what the user wrote. But this is exactly the point of mentioning Eliza here. Intelligence-like behaviour may fool many, even though there is no “intelligent system” under the hood.
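To make the point concrete, here is a minimal sketch of the Eliza principle. This is my own toy example in Python (the rules and phrasings are made up for illustration), not Weizenbaum’s original program, which used a much richer script of keywords and reassembly rules, but the underlying trick of reflecting the user’s words back through pattern matching is the same.

    import re

    # Toy Eliza-style responder (my own illustration, not Weizenbaum's code):
    # a handful of regex rules that reflect the user's words back as questions.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    ]

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # fallback when no rule matches

    print(respond("I feel tired of all this AI hype"))
    # -> Why do you feel tired of all this AI hype?

Even such trivial substitution produces surprisingly conversation-like replies, which is precisely why Eliza fooled people without any “understanding” under the hood.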

What can we Learn from AI History?

The properties of these three historical systems that I would like to point to in more detail are as follows:

  • Conceptual dependencies: AI needs some kind of knowledge representation. At least some basic knowledge must be stored in some way as a basis for the interpretation of the prompts.
  • SHRDLU: An artificial agent needs to be able to do some reasoning about this knowledge. Knowledge representation is only useful if it is possible to use it for reasoning and, possibly, the generation of new data (a minimal sketch of these two ingredients follows after this list).
  • Eliza: Not all AI-like systems are to be considered real Artificial Intelligence. In fact, Joseph Weizenbaum created Eliza in order to demonstrate exactly how easy it was to emulate some “intelligent behaviour”.
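As a small sketch of the first two points, here is a toy example (again my own, in Python, with made-up facts, and in no way the actual historical implementations): a tiny knowledge base of stored facts plus one rule that reasons over them, in the spirit of SHRDLU’s blocks world.

    # A toy knowledge base: stored facts about block shapes.
    FACTS = {
        ("cube", "has_flat_top"): True,
        ("pyramid", "has_flat_top"): False,
    }

    def can_place(upper, lower):
        # Reasoning rule: an object can be placed on another
        # only if the lower object has a flat top.
        return FACTS.get((lower, "has_flat_top"), False)

    print(can_place("pyramid", "cube"))   # True: a pyramid may rest on a cube
    print(can_place("cube", "pyramid"))   # False: a cube cannot rest on a pyramid

The point is not the code itself but the combination: without stored knowledge there is nothing to reason about, and without a reasoning rule the stored knowledge is inert.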

To start with, these three examples also have one interesting common property, namely that they are transparent: the theory and implementations have been made public. This is actually a major problem with many of the current generative AI agents, since they are based on large amounts of data, the source listings of which are not publicly available.

The three examples above also point to additional problems with the generative modelling approaches to AI (those that are currently considered so dangerous). In order to become an AGI (Artificial General Intelligence), it is most likely that there needs to be some kind of knowledge base, and an ability to reason about this knowledge. We could in fact regard the large generative AI agents as very advanced versions of Eliza, in some cases enhanced with search abilities in order to give better answers, but as a matter of fact they don’t really produce “new” knowledge, just phrases that are the most probable continuations of the texts in the prompts. Considering the complexity of language, this is in itself no small feat, but it is definitely not a form of intelligent reasoning.
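To illustrate what “the most probable continuation” means, here is a deliberately tiny sketch (my own Python example with a made-up corpus): a bigram counter that always picks the most frequent next word. Real generative models use neural networks trained on enormous corpora of subword tokens, but the output is, in the same spirit, driven by learned co-occurrence statistics rather than by reasoning over a knowledge base.

    from collections import Counter, defaultdict

    # Count which word tends to follow which in a (made-up) toy corpus.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def continue_text(word, steps=4):
        out = [word]
        for _ in range(steps):
            if not followers[out[-1]]:   # no known continuation
                break
            # Always pick the most frequent next word given the previous one.
            out.append(followers[out[-1]].most_common(1)[0][0])
        return " ".join(out)

    print(continue_text("the"))  # e.g. "the cat sat on the"

Nothing here knows what a cat or a mat is; the continuation is simply the statistically most likely one given the text seen so far.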

The similarity to Eliza is increased by the way the answers are given: they are presented in a very friendly form, with the system even apologizing when it is pointed out that an answer is not correct. This conversational style of interaction can easily fool users who are less knowledgeable about computers into believing that there is a genie in the system, one that is very intelligent and (very close to) a know-it-all. More about this problem later in this post.

Capabilities of and Risks with Generative AI?

The main problem that has arisen is that the generative AI systems cannot produce any real novelties, since the answers are based on (in the best case, extrapolations of) existing texts (or pictures). Should they by any chance produce new knowledge, there is no way to know whether this “new knowledge” is correct or not! And here is where, in my opinion, the real danger with generative AI lies. If we ask for information, we either get correct “old” information, or new information that we cannot know whether it is correct or not. And we are only given one single answer per question. In this sense the chatbots can be compared to the first version of Google search, which contained a button marked “I’m feeling lucky!”, an option that gave you just one single result instead of, as now, hundreds of pages to look through.

Google’s search page with the “I’m feeling lucky!” button, which has now been removed.

The chatbots also provide single answers (longer ones, of course), but in Eliza manner wrapped in a conversational style that is supposed to convince the user that the answer is correct. Fortunately (or not?), the answers are often quite correct, so they will in most cases provide both good and useful information. However, all results still need to be “proof-read” in order to guarantee the validity of the contents. Thus, users will have to apply critical thinking and reasoning to a high degree when using the results. Paradoxically, the better the systems become, i.e., the more of the results that are correct, the more we need to check the results in detail, especially when critical documents are to be produced where an error may have large consequences.

Impressive systems for sure!

Need we be worried about the AI development, then? No, today there is no real reason to worry about the development as such, but we need to be more concerned about the common usage of the results from AI systems. It is necessary to make sure that users understand that chatbots are not intelligent in the sense we normally think. They are good at (re-)producing text (and images), which most of the time makes them very useful supportive tools for writers and programmers, for example. Using the bots to create text that can serve as a base for writing a document or an article is one interesting example of where this kind of system will prove to be very important in the future. It will still be quite some time before they will be able to write an exciting and interesting novel without the input and revision of a human author. Likewise, I would be very hesitant about using a chatbot to write a research article, or even worse, a textbook in any topic. These latter usages will definitely require a significant amount of proof-reading, fact-checking and, not least, rewriting before release.

The AI systems that are under debate now are still very impressive creations, of course, and they manifest significant progress in software engineering. The development of these systems is remarkable, and they have the potential to become very important in society, but they do not produce really intelligent behaviour. The systems are very good statistical language generators, but with a very, very advanced form of Eliza at the controls.

The future?

Will there be AGI, or strong AI beings, in the future? Yes, undoubtedly, but this will still take a long time (I am prepared to be the laughing stock in five years, if they arrive by then). And these systems will most likely be integrated with the generative AI we have today for the language management. Still, we will most likely not get there as long as we ignore the need for some kind of knowledge network underneath. It might not be in the classic form mentioned above, but it seems that knowledge and reasoning strategies, rather than statistical models, have to form some kind of underlying technology.

How probable is it that this different development path will lead to strong AI, or AGI systems? Personally, I think it is quite probable and it seems to be doable, but I am also very curious about how the development will proceed over time. I would be extremely happy if an AGI could be born in my lifetime (I am 61 at the time of writing).

And hopefully these new beings will take the shape of benevolent, very intelligent agents that can cooperate with humans in a constructive way. I still have that hope. Please feel free to add your thoughts in the comments below.

Coffee break again?! Are you lazy or productive?

Are you feeling guilty after having socialised with co-workers over a coffee for too long? No worries: micro-breaks are actually beneficial for job performance as well as employee well-being. In addition, the length of the break is associated with productivity when you return to your job duties, and the longer the break, the better, so to speak.

So, why is it like this? Well, internal (during work) and external (after work) recovery are essential in order to balance the job demands during the day. Simply put, through recovery the body and the brain get the opportunity to mobilise for upcoming challenges. During the working day, it is great to take micro-breaks for internal recovery. A micro-break can involve a coffee in the staff room, taking a walk, stretching, or daydreaming.

Research has shown that micro-breaks can help reduce fatigue and increase energy levels, leading to improved well-being. They can also facilitate psychological detachment and relaxation, which can help reduce the impact of high job demands. Additionally, taking micro-breaks can prevent the impairing effects of accumulated strain, leading to improved job performance. They can also help reload mental resources, leading to better cognitive performance, which is of high relevance for us academics!

So, the next time you’re feeling overwhelmed at work, remember to take a micro-break and engage in some internal recovery activities. Your well-being and job performance will thank you. So what do my micro-breaks look like today? Well, mostly I daydream back to the splendid ‘Moulin Rouge! The Musical’, which I saw a couple of days ago.

In case you are interested in reading more about micro-breaks and the importance of recovery, here are some literature suggestions:

Citations:

Albulescu, P., Macsinga, I., Rusu, A., Sulea, C., Bodnaru, A., & Tulbure, B. T. (2022). ‘Give me a break!’ A systematic review and meta-analysis on the efficacy of micro-breaks for increasing well-being and performance. PLoS ONE, 17(8). https://doi.org/10.1371/journal.pone.0272460

Demerouti, E., Bakker, A. B., Geurts, S. A. E., & Taris, T. W. (2009). Daily recovery from work-related effort during non-work time. Research in Occupational Stress and Well Being, 7, 85–123. https://doi.org/10.1108/S1479-3555(2009)0000007006

Best regards

Magdalena Ramstedt Stadin, PhD

Hybrid Work: Software Engineering Students’ Perspective

Hybrid work models have become the new norm in the wake of the COVID-19 pandemic, reshaping how we approach productivity and collaboration. As hybrid work models continue to shape the modern workforce, understanding their impact on productivity is paramount. 

Hybrid work models combine remote and in-person work, creating a unique set of challenges, particularly in software development projects. In software development, where the demand for faster, higher-quality output is constant, finding solutions within resource constraints becomes imperative.

Virtual group meeting

“Productivity Paranoia” in Hybrid Work

The impact of hybrid work on productivity has garnered attention in various sectors. In a recent survey conducted by Microsoft involving 20,006 global knowledge workers, a concerning trend emerged: managers and team leaders expressed “productivity paranoia”, with only 12% of respondents indicating full confidence in their team’s productivity in hybrid work settings. This phenomenon underscores the need for a closer examination of productivity in hybrid work scenarios, including those within software engineering education.

The Significance for Software Engineering Students

For software engineering students, productivity plays an important role in their academic and future professional success. While productivity has been studied extensively in various contexts, there is a gap in research specific to software engineering education. To address this, our study [1], conducted in Portugal and Sweden, examined the perspectives of seventy-seven software engineering university students. Most of these students, having experienced hybrid work, expressed a preference for continuing this mode of work in future group projects.

Since the majority of software engineering students favor the hybrid work model, our intention is to conduct a more in-depth study. We aim to delve deeper into understanding the specific factors that contribute to perceived productivity in hybrid work environments. This research will explore ways to optimise productivity in this setting, helping students and organisations make the most of the hybrid work mode.

[1] S. Ouhbi and N. Pombo. “Hybrid work provides the best of both worlds”: Software Engineering Students’ Perception of Group Work in Different Work Settings. In Proceedings of the 26th International Conference on Interactive Collaborative Learning (ICL 2023), pp. 1943–1954.

HTO Coverage: Led by Machines and differing perspectives.

By 2030, 75% of companies in the EU should use AI, big data, or the cloud. This is one of the targets that the European Commission has declared in its Digital Decade policy programme (European Commission 2023). While a lot of research has been and is currently being performed to study AI and work, there is a noticeable gap in the research on the effects of AI and automation on the working environment and working conditions (Cajander et al. 2022). Our work in the TARA and AROA projects aims to help bridge this gap, but we are not the only ones currently working to do so. This blog post acts as HTO coverage¹ of one such initiative.

Last week, I attended the conference Led by Machines in Stockholm. The conference was the launch of a new international research initiative to study how algorithmic management affects the nature of work and workers’ experiences. The main focus of the conference was a set of keynotes and panels that covered the need for research on the topic and previous work that has already been done. But my main takeaway was that it highlighted the different perspectives in play in this domain. The conference brought together trade unionists, policymakers and researchers from different fields, which meant that the implications were discussed from macro, meso, as well as micro perspectives. Coming from human-computer interaction and user-centred design, I am used to studying the micro level of how individuals use and are affected by the use of technologies. In contrast, most of those I spoke to at the conference worked at the macro level, e.g. a political science researcher who discussed how policy and regulation around technology are decided on, and an economics researcher who studied the impact of AI on changes in the labour market. Others worked with topics at a meso level, e.g. a trade unionist who discussed the effects on social relations and the role of middle management in organisations. The swift adoption of new technology that we stand before can have unforeseen consequences across these different levels. As such, it is great that researchers and other stakeholders interested in questions and problems at different levels can come together and jointly work toward a better understanding of this knowledge gap.

1) The Swedish word “omvärldsbevakning” (lit. “monitoring of the surrounding world”) is often translated as environmental scanning or business intelligence. Both alternative translations come with connotations and implications that do not align clearly with research in general or with the topic at hand. In case this becomes a recurring series of blog posts, I instead refer to it as HTO coverage, as it will provide coverage of topics related to Human, Technology, and Organisation that occur outside of our research group.

References:

Cajander, Å., Sandblad, B., Magdalena, S., & Elena, R. (2022). Artificial intelligence, robotisation and the work environment. Swedish Agency for Work Environment Expertise. Retrieved September 29th from https://sawee.se/publications/artificial-intelligence-robotisation-and-the-work-environment/

European Commission. (2023) Europe’s Digital Decade: digital targets for 2030. Retrieved September 29th from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/europes-digital-decade-digital-targets-2030_sv

The author has no affiliation with the organisations that organised the Led by Machines conference.

Supporting Informal Caregivers of Head and Neck Cancer Patients: Understanding Their Challenges and Needs

Caring for a loved one with cancer is a deeply personal and emotionally challenging journey. For many, it’s a labour of love with a profound sense of purpose and satisfaction. However, the role of informal caregivers (ICs) is not without its unique set of challenges. In this blog post, we delve into a research paper exploring the world of informal caregivers, particularly those who support individuals battling head and neck cancer (HNC).

Paper: Langegård, U., Cajander, Å., Ahmad, A., Carlsson, M., Nevo, E. O., Johansson, B., & Ehrsson, Y. T. (2023). Understanding the challenges and need for support of informal caregivers to individuals with head and neck cancer: A basis for developing internet-based support. European Journal of Oncology Nursing, 102347.

The Struggles of Caregiving for HNC Patients

Head and neck cancer presents a unique set of challenges due to its impact on essential functions like swallowing, speaking, and breathing. Patients often undergo treatments that result in various distressing symptoms, such as dry mouth, altered facial appearance, and debilitating pain. This increased dependence on caregivers adds an extra layer of responsibility.

Studies have highlighted the significant caregiver burden experienced by ICs of HNC patients. Caregiver burden refers to the multidimensional physical, psychological, and social challenges caregivers face when caring for their loved ones. Depression, fatigue, and sleep disturbances are common among ICs in this context, emphasizing the need for support.

Preparedness for caregiving refers to an IC’s perceived ability to provide physical, emotional, or practical care while managing the associated stresses. Research has shown that well-prepared caregivers experience fewer worries and are more capable of providing care.

The Research Study and Its Objectives

The study we’re discussing is part of a broader research project aimed at developing online support for ICs of individuals with HNC. The project is called Carer eSupport and is presented in this blog post. It’s a collaborative effort involving expert caregivers, medical professionals, and human-computer interaction experts. The project focuses on addressing the unique needs of ICs to enhance their preparedness for caregiving.

Exploring ICs’ Challenges and Needs

The qualitative study conducted for this research employed thematic analysis to gain insights into the challenges and needs of ICs supporting individuals with HNC. The study involved both focus group discussions and individual interviews.

The findings revealed that being an IC for HNC patients is a multifaceted experience. ICs often felt excluded from the care process due to a lack of information about their loved one’s health status. This left them feeling unprepared and disconnected.

The impact of caregiving on daily life was significant. ICs had to adapt their routines and sometimes even sacrifice their social lives and work commitments. This shift in priorities could lead to isolation and emotional strain.

Carrying the uncertainty of the cancer journey was another emotional burden for ICs. Waiting for diagnoses or witnessing treatment’s effects generated fear and anxiety about the future.

The research also highlighted the transformation of the IC’s role and the dynamics of the caregiver-patient relationship. ICs often transitioned from being partners or family members to full-time caregivers. This shift could strain the relationship and create vulnerability.

Feeling forced into the caregiver role and dealing with practical responsibilities, such as wound care, added to the emotional burden. ICs frequently felt ill-equipped to handle these responsibilities.

Additionally, caregiving often led to a loss of the IC’s own identity as they became consumed by their relative’s needs. This loss of identity could also be linked to changes in the patient’s personality due to pain or treatment side effects.

The study also explored the sources of support for ICs. A strong social network that provided practical and emotional support was invaluable. This included understanding employers who allowed flexibility, friends and family members who offered assistance, and even support from healthcare professionals.

However, not all ICs were fortunate enough to have this support network. Some felt isolated and struggled to ask for help or define their needs. The research emphasized the importance of both emotional and informational support, including education about practical aspects of care.

Conclusion

Understanding the challenges and needs of informal caregivers supporting individuals with head and neck cancer is a critical step toward providing them with the necessary support. This research sheds light on the emotional and practical hurdles these caregivers face and underscores the importance of preparedness for caregiving.

As the healthcare community continues developing interventions and support systems, it’s essential to consider the insights gained from studies like this. By addressing the specific needs and challenges of ICs, we can enhance their ability to provide the best possible care to their loved ones while safeguarding their own well-being.

On the responsibility of putting on a show

Taking the stage for the first time as a PhD student.

It’s been a mere three weeks since I started my PhD position in Uppsala, and I’m in Swansea, Wales. The occasion is the ECCE conference (short for European Conference on Cognitive Ergonomics). Oscar Bjurling at RISE (https://www.ri.se/en/person/oscar-bjurling) and I got a paper accepted based on a project we did last year, when I was in the Cognitive Science master’s programme at Linköping University. “Human-Swarm Interaction in Semi-voluntary Search and Rescue Operations: Opportunities and Challenges” is what we’ve named our paper, and it’s a workshop-based study in which we had discussions with experts about the potential consequences of drone swarm implementation for search and rescue operations.

Having a paper accepted is all well and good, but it should also be presented. As this will be my first conference, I don’t really have a clue about the number of people who will attend each presentation. I feel like it could be either a fully packed audience with bouquets of roses being handed out to every speaker, or just the one half-sleeping audience member glaring disapprovingly at every one of my attempts at arguing for seeing drone swarms as valuable search and rescue team members. With us being 11th in a line of fifteen 15-minute presentations on the opening day, there is a definite risk that the eventual flowers will be saved for the keynote speakers.

Nevertheless, a presentation is due, and I think that we as researchers have a responsibility to make sure that the ones who do show up to see our presentation feel like it was worth it. Because if there’s one thing I’ve learned during my brief time as a university employee, it is that there’s always something else you could be doing. There will definitely be people there who are stressed about grading papers, writing ethics applications, or other potentially more important stuff than watching our presentation. Now, I don’t plan to completely take after the late Hans Rosling and pick up the noble art of sword swallowing for this presentation, partly because of time issues, but also because I couldn’t see the “It [the sword] is for scientific purposes” argument going all too well at the airport security check. However, my ambition is to convince at least somebody in the audience that looking into the potential of drone swarms might be a good idea.

Similar thoughts on presentation responsibility struck me when I, in the role of teaching assistant, presented a couple of ethical issues at a seminar last week. Not only could the students probably learn more about the Trolley Problem on YouTube than from me, but I was actually standing there claiming to know about this subject to the degree that I could teach it to university students.

So when preparing for this presentation, I’m being meticulous about representing the thoughts of the experts we talked to correctly, so that I can confidently argue for our analyses and conclusions, while at the same time taking the responsibility of putting on a show seriously. Because if I don’t bother, why should the audience?
