
Guest blog: Recommendations for the Development of Connected Health Software

Particularly as we move forward following the COVID-19 pandemic, there has been an increase in the use of software in healthcare systems to support healthcare management and prevention. In Ireland, for example, there has been an increase in online consultations with General Practitioners (GPs)/Family Physicians, with prescriptions submitted directly to pharmacies, where the patient can collect their medication. This minimises human contact, which was important during the pandemic; reduces travel for patients who may have difficulty getting to the doctor’s surgery; and makes it easier for GP surgeries to cater for patients over a wider area. There is potential for such systems to expand and become more pervasive, particularly as the number of medical practices in rural areas is decreasing while the national population is increasing. Those with health conditions can potentially use software to monitor their physiological measures, allowing the doctor to make decisions about their care in a different manner; software development and support must therefore become more efficient and effective.

Healthcare software for use by individual patients increasingly comes in the form of smartphone apps, so the needs of particular cohorts must be accounted for. In our research in Lero – the Science Foundation Ireland Software Research Centre, we have developed fundamental requirements for the development of software for use by Older Adults and by Persons with Mild Intellectual and Developmental Disability. Why these cohorts? The number of Older Adults is increasing globally and putting pressure on healthcare systems, so it is important for software developers to take their fundamental requirements into account. Persons with Mild Intellectual and Developmental Disability have specific requirements, and there is evidence that the lack of accessibility and usability for this cohort is widening the digital divide. Of course, we can consider other cohorts! For example, what about nursing staff whose primary aim is to care for the patient: do they need to be trained in system use, or can software developers account for their fundamental requirements to ensure that they can use systems efficiently and effectively? We believe that if software developers know these fundamental requirements, which we present in the form of ‘recommendations’ for the software developer, then healthcare software will ultimately be ‘easier to use’ by those who really need to use it! Each recommendation is supported by detail obtained through literature review, standards and regulations review, focus groups, observation, prototype review, interviews, surveys and analysis of app store comments.

In our research we have developed 44 recommendations for the development of software for Older Adults, categorised into 28 Usability and 16 Accessibility requirements, 6 of which are shown in Figure 1.

Figure 1: Six recommendations which can be used in the development of software for Older Adults

We have also developed 46 recommendations for the development of software for Persons with Mild Intellectual and Developmental Disability, categorised into 20 Usability, 16 Accessibility, 3 Content and 10 Gamification requirements, 6 of which are shown in Figure 2. Interestingly, in our qualitative research with persons from this cohort, we observed their ability to use games as a means to find out and understand information. We investigated this further, which is why we have included gamification factors.  

Figure 2: Six recommendations which can be used in the development of software for Persons with Mild Intellectual and Developmental Disability.

The full set of recommendations and relevant information is provided in two Lero technical reports which are publicly available at 2023_TR_02_Recommendations_MildIDD.pdf, and 2021_TR02_Design_Patterns_ReDEAP.pdf. We encourage healthcare software developers to consider and use these when developing healthcare software.

This is a guest blog post by Prof Ita Richardson, who visited us in March 2023. Professor Ita Richardson is from the Department of Computer Science and Information Systems, Lero – the Science Foundation Ireland Research Centre for Software, and the Health Research Institute/Ageing Research Centre, University of Limerick, Ireland.

Relevant publications:

Leamy, Craig, Bilal Ahmad, Sarah Beecham, Ita Richardson and Katie Crowley, Launcher50+ : An Android Launcher for use by Older Adults, In Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies – HEALTHINF, 2023.

Bilal Ahmad, Ita Richardson and Sarah Beecham, Usability Recommendations for Designers of Smartphone Applications for Older Adults: An Empirical Study, in Software Usability, edited by Castro, L & Cabrero, D & Heimgärtner, R, IntechOpen, DOI: 10.5772/intechopen.96775, ISBN 978-1-83968-967-3

Ahmad, Bilal, Sarah Beecham, Ita Richardson, The case of Golden Jubilants: using a prototype to support healthcare technology research, Workshop on Software Engineering & Healthcare, co-located with International Conference on Software Engineering, 2021, 24th May, 2021.

Alshammari, Muneef, Owen Doody and Ita Richardson, 2020, August. Software Engineering Issues: An exploratory study into the development of Health Information Systems for people with Mild Intellectual and Developmental Disability. In 2020 IEEE First International Workshop on Requirements Engineering for Well-Being, Aging, and Health (REWBAH) (pp. 67-76). IEEE, 31st August.

Ahmad, Bilal, Ita Richardson and Sarah Beecham, 2020. A Multi-method Approach for Requirements Elicitation for the Design and Development of Smartphone Applications for Older Adults. In 2020 IEEE First International Workshop on Requirements Engineering for Well-Being, Aging, and Health (REWBAH) (pp. 25-34). IEEE, 31st August.

Alshammari, Muneef, Owen Doody and Ita Richardson (2020). Health Information Systems for Clients with Mild Intellectual and Developmental Disability: A Framework, Proceedings of the 13th International Joint Conference on Biomedical Engineering Systems and Technologies Volume 5: HEALTHINF, 24-26 February 2020, Valletta, Malta pp 125-132 ISBN: 978-989-758-398-

Ahmad, B., Richardson, I., McLoughlin, S. and Beecham, S., 2018, July. Assessing the level of adoption of a social network system for older adults. In Proceedings of the 32nd International BCS Human Computer Interaction Conference 32 (pp. 1-5)

Alshammari, Muneef, Owen Doody and Ita Richardson (2018). Barriers to the Access and use of Health Information by Individuals with Intellectual and Developmental Disability IDD: A Review of the Literature. IEEE 6th International Conference on Healthcare Informatics (ICHI), pp. 294-298, New York, USA, 4-7th June, DOI:10.1109/ICHI.2018.00040

Ahmad Bilal, Richardson Ita, Beecham Sarah (2017). A Systematic Literature Review of Social Network Systems for Older Adults. In: Felderer M., Méndez Fernández D., Turhan B., Kalinowski M., Sarro F., Winkler D. (eds) Product-Focused Software Process Improvement. PROFES 2017. Lecture Notes in Computer Science, vol 10611 pp 482-496, Springer, Cham https://doi.org/10.1007/978-3-319-69926-4_38.

Exploring the Effects of Digitalization on Ground Personnel in Airports

New technologies, such as automation, robotics, and artificial intelligence (AI), have brought significant changes to the workplace, and the impact of these changes on workers’ health and safety remains largely unknown. This is especially true in the aviation industry, where ground personnel face a significant technological shift. The COVID-19 pandemic has further complicated the situation, leading to a shortage of personnel and putting the focus on creating attractive and healthy workplaces to retain staff. However, implementing new technologies can also bring new and unforeseen problems, such as hand scanners causing pain and discomfort for airport personnel.

A person holding a digital device for scanning luggage

To address these knowledge gaps, a research project has been initiated to investigate the impact of new technologies on the work environment of ground personnel in the aviation industry. The project aims to increase knowledge of how new technologies have affected and will continue to affect the work environment of baggage handlers, airport technicians, and refueling staff. These groups have been chosen because of the large research gap in their areas and the shortage of personnel, which increases the importance of their health and safety.

The project will work closely with the aviation industry and with TYA, an organization owned by the labor market parties: the Swedish Transport Workers’ Union and the transport companies. Both organizations have expressed a need for this knowledge to prevent workplace health and safety problems. The project will also examine how research results on the work environment can be implemented and utilized effectively.

The project has two main objectives. The first is to investigate the level of implementation of new technologies that affect baggage handlers, airport technicians, and refueling staff across Sweden’s 38 airports and the plans for digitization moving forward. This will provide a baseline for the second objective, examining how new technologies have affected the work environment and increased the risk of workplace injuries.

The project addresses a critical knowledge gap in the area of new technologies and work environment and is well-aligned with the goals of the AFA (Arbetsmiljöfonden) within this area. The project will provide increased knowledge of the effects of new technologies on airport personnel’s work environment and support labor market parties in working preventatively regarding future workplace health and safety risks.

Work Engagement in the Era of AI: Opportunities and Challenges

A robot reaching for your hand.

“Have you ever felt like a robot at work? Well, with the rise of AI and automation, this feeling might become even more common. That’s why we’re launching a research project to investigate how AI@work affects work engagement.”

chatGPT

The words above are how the renowned AI tool chatGPT summarises our new research project, which aims to investigate how increasingly AI- and automation-supported work influences work engagement. Alarming reports show that negative work-related emotions are climbing, and we ask whether increased AI and automation have a role in this.

The use of robots, automation, and AI in the workplace has become increasingly common in recent years. While there has been a significant interest in the technical development, the impact of these technologies on work engagement is not well understood. Similarly, we see multiple theories on work engagement in psychological and organisational research, but these say very little about the digital aspects of a workplace. How may technology affect a sustainable and productive state where employees are present and truly engaged in their work tasks? In our newly initiated research project, we intend to address this knowledge gap.

With a human-centred approach and research methods such as ethnographic field studies and interviews, we will conduct studies across three different sectors: the IT sector, the agricultural sector, and the metal industry sector. These three sectors have been selected to ensure a broad context for our research. At first glance, the sectors might seem to have nothing in common, but all are currently exposed to automation and AI, although the technology serves completely different purposes in each sector: for example, AI tools in programming, or automated milking systems in agriculture.

What we learn from studying AI- and automation-supported work and its influence on work engagement in different sectors will be used to develop a theoretical framework. This framework aims to be a useful tool for organisations to embrace opportunities while mitigating risks related to increasingly digital work environments and work engagement. Accordingly, we intend for the project to have scientific, practical, and societal impact, and we look forward to continuing to blog about updates as the project progresses.

The real dangers of current “AI”…

Can we believe what we see or read in the future? This polar bear walking in the desert does not exist, but can still affect our thoughts. (Picture: Oestreicher and MidJourney).

Recently, a number of AI experts published an open letter advocating a pause in the development of new AI agents. The main reason is the very rapid development of chatbots based on generative networks, e.g., chatGPT and Bard, with a large number of competitors still in the starting blocks. These systems are now also publicly available at a fairly reasonable cost. The essence of the letter is that the current development is too fast for society (and humanity) to cope with. This is of course an important statement, although we already have social media, which when used in the wrong way has a serious impact on people in general (such as promoting absurd norms of beauty, or dangerous medical advice spreading in various groups).

The generative AI systems under discussion in the letter will undoubtedly have an impact on society, and we have definitely been taken by surprise in many realms already. Discussions are already underway on how to “prevent students from cheating on their examinations by using chatGPT” (see my earlier post about this here). The problem in that case is not the cheating, but that we teach in a way that makes it possible to cheat with these new tools. Prohibiting their use is definitely not the right way to go.

The same holds for the dangers pointed to by the signers of the open letter mentioned above. A simple voluntary pause in development will not solve the problem at all. The systems are already here and being used. We will need other solutions to these dangers, and most important of all, we will need to study what these dangers really are. From my perspective, the dangers have nothing to do with the singularity, or with AI taking over the world, as some researchers claim. No, I can see at least two types of dangers: one immediate, and one that may appear within a few years or a decade.

Fact or fiction?

Did this chipmunk really exist? Well, in Narnia, he was a rat, named Ripipip (Picture: Oestreicher and MidJourney).

The generative AI systems are based on an advanced (basically statistical) analysis of large amounts of data, either texts (as in chatbots) or pictures (as in AI art generators such as DALL-E or MidJourney). The output from the systems has to be generated with this data as the primary (or only) source. This means that the output will not be anything essentially new; even more problematic, the models at the kernel of the systems are completely non-transparent. Even if it is possible to detect some patterns in the input and output sequences, it is quite safe to say that no human will understand the models themselves.
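As a toy illustration of purely statistical generation: the sketch below builds a bigram table from a tiny corpus and samples text from it. The real systems use vastly more sophisticated neural models, but the underlying point is the same, and the corpus and function names here are invented for illustration: every word this generator can ever emit already occurs in its training data.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the web-scale text data real systems train on.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count bigram transitions: for each word, record every word observed after it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, rng):
    """Sample a continuation by repeatedly picking a random observed successor."""
    words = [start]
    for _ in range(length - 1):
        successors = transitions.get(words[-1])
        if not successors:  # dead end: this word was never followed by anything
            break
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate("the", 6, random.Random(0)))
```

The output can recombine phrases in orders never seen in the corpus, which looks like novelty, yet no word outside the training data can ever appear.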

Furthermore, the actual text collections (or image bases, but I will leave those systems aside for a coming post) on which the systems are based are not available to the public, which causes the first problem. We, as users, don’t know what a certain detail of the result is based on, whether a scientific text or a purely fictitious description in a sci-fi novel. Any text generated by a chatbot needs to be thoroughly scanned with a critical mind, in order not to accept things that are inaccurate (or even straightforwardly wrong). Even more problematic, these errors are not the ones that are simple to detect. In the words of chatGPT itself:

GPT distinguishes between real and fictitious facts by relying on the patterns and context it has learned during its training. It uses the knowledge it has acquired from the training data to infer whether a statement is likely to be factual or fictional. However, the model’s ability to differentiate between real and fictitious facts is not perfect and depends on the quality and comprehensiveness of the training data.

chatGPT 3.5

And we know very little about the training data. The answer to this problem is most of the time “wait for the next generation”. The problem here is that the next generation of models will not be more transparent, rather the opposite.

So, how is the ordinary user, who is not an expert in a field, supposed to know whether the answers they get are correct or incorrect? For example, I had chatGPT produce two different texts: one giving the arguments that would prove God’s existence, and one giving the arguments that would prove that God does not exist. Both versions were very much to the point, but what should we make of it? Today, when many topics are the subject of heated debate, such as the climate crisis or the necessity of vaccinations, this “objectivity” could be very dangerous if it is not met with a fair amount of critical thinking.

Recursion into absurdity – or old stuff in new containers?

Infinite recursion inspired by M.C. Escher. (Picture: Oestreicher and MidJourney).

As mentioned above, the models are based on large amounts of text, so far mostly produced by humans. However, today there is a large pool of productivity enhancers that provide AI support for producing everything from summaries to complete articles or book chapters. It is quite reasonable to assume that more and more people will start using these services for their own private creations, as well as, hopefully with some caution given the first problem above, in the professional sphere. We can assume that when there is a tool, people will start using it.

Now, as more and more generated texts appear on the public scene, they will undoubtedly mix with the human-created text masses. Since the material for the chatbots needs to be updated regularly in order to keep up with developments in the world, the generated texts will also slowly but steadily make their way into the training materials and, in the long run, be recycled as new texts adding to the information content. The knowledge produced by the chatbots will be based more and more on generated texts, and my fear is that this will be a rapidly accelerating phenomenon that may greatly affect the forthcoming results. In the long run, we may not know whether a given piece of knowledge was created by humans or by chatbots generating new knowledge from the things we already know.

This recursive loop, traversing the human knowledge base mixed with the results of the generative AI systems, may not be as bad as one might fear, but it might also lead to a large amount of absurdity being produced as factually correct knowledge. In the best case, we can be sure that most generated texts in the future will consist of old stuff repackaged in new containers.
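The accelerating feedback described above can be sketched with a toy calculation. The growth rates below are pure assumptions chosen for illustration, not measurements; the point is only the qualitative trend: if generated text is added to the corpus faster than human text, its share climbs quickly over successive training updates.

```python
# Assumed parameters (illustrative only): human text grows slowly,
# while generated text is added in proportion to the whole corpus.
human = 100.0           # units of human-written text
synthetic = 0.0         # units of AI-generated text
human_growth = 0.02     # 2% more human text per update cycle (assumption)
generation_rate = 0.20  # generated text worth 20% of the corpus per cycle (assumption)

fractions = []  # share of the corpus that is AI-generated, per cycle
for cycle in range(10):
    total = human + synthetic
    synthetic += generation_rate * total  # generated text re-enters the pool
    human += human_growth * human
    fractions.append(synthetic / (human + synthetic))

print([round(f, 2) for f in fractions])
```

Under these assumptions the synthetic share rises every cycle and soon dominates the corpus, which is exactly the recursion the paragraph above worries about.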

Conclusions

What can be seen through the knowledge lens of the chatbots that are emerging (Picture: Oestreicher and MidJourney).

So, what are my conclusions from this? Should we freeze the development of these systems, as proposed in the open letter? We could, but I do not think that this would solve any problems. We have opened Pandora’s box, and the genie is already out of the bottle. In my view, the issue is rather to learn how to use this knowledge so that it works for us in the best way. Prohibitions and legal barriers have never proved effective at stopping people from doing things. The solution is instead the promotion of knowledge, not least among the main sources of education, by which I mean not only schools and universities, but journalists and writers in general, as well as the people who will be using these systems.

Already with social media, “fake news” and “fake science” have been a big problem. But as long as people regard information from external sources (such as social media, Google searches, Facebook or Reddit groups, and now chatbots) as plain truth and swallow it whole, we can pause the development of GPTs as much as we like and the problem will not go away. We started down this path with the fast development of social media, and it will not disappear just because we cover our eyes.

So, I urge you as a reader to read this article with a critical mind, and don’t just believe everything that is written here. You know, I just might be completely wrong about this, myself.

AI and How Education Needs to Change

But will the new tools really make it possible to cheat that much? Well, if we maintain the old style of teaching and examining, the answer is undoubtedly “yes”. However, we can also see this as an opportunity to improve, or even revolutionize, both education and examination. This, of course, needs some changes to be implemented. I will explain my thoughts in more detail in the following.

When we look at our teaching obligation, we need to pose the question: “What do we want our students to learn?” Knowledge about the topic at hand, of course. But is that really true? First of all, what do we define as knowledge? In many cases, the things that appear on exams are questions about details, details that students will be able to google as soon as they get outside the examination hall. Home exams are slightly better, since students have to synthesize their answers rather than just look them up. But now you can ask a program like chatGPT to do the synthesis for you. And is that cheating? In our old conception of examination, of course it is. What has the student done to get the piece of text written? Not very much!

Is the classical teaching doomed? No, but it needs to adapt to the new conditions. (Source: L. Oestreicher)

However, when we look closer at this, we can change the question a little and see what happens. The new question would be something along the lines of: “How could we change the way of teaching and examination so that this kind of helping tool is not a means of cheating (but maybe even a learning tool)?” My answer to this question is to focus on understanding. My favourite teaching maxim is: “You can lead a camel to the water, but you cannot force it to drink”. As teachers in higher education, we will have to focus more on the “how it works” and “why it works” of our topics, rather than the “how can I implement it”. The students’ understanding of the (role of the) acquired knowledge in the applicable context has to be the most important teaching goal.

But don’t we do this already? Some may, but we still see many exam questions that focus on memorizing the content of the course, rather than on understanding: synthesizing answers and reusing that understanding to transfer knowledge to new domains.

In my own courses (one more theoretical, and two practical programming courses), I have changed the examination from a written exam into an oral “discussion”. That may sound like a lot of work, but in fact it does not take more time than a written exam. After 30 minutes of this “academic conversation” style of examination, I usually have no problem grading the student according to understanding and reasoning, rather than the recall of details (which are most of the time forgotten fairly quickly after the course). This change was in fact introduced many years ago, well before the appearance of chatGPT and similar systems.

A further benefit is the new possibility of actually allowing students to use any kind of supportive tool, including chatGPT, for their projects and learning experiences. The only condition they have to fulfill is that they themselves have to understand the answers they get from the various tools they use. In the programming courses, this means, for example, that they have to be able to explain any piece of code they have not written entirely by themselves. They are also told that any remaining errors stemming from the information source will affect their grades negatively. This applies to both text and code.

With this approach to both teaching and examination, we can turn the risk of “cheating” into an improved pedagogical view of courses and of the role of the teacher. Of course, it will still require the teacher to be well educated in the topic, in order to both teach and examine the students.

Lars Oestreicher

Exploring the Impact of Automation on Nurse Work Engagement in Patient-Centric Primary Care Services

As we move towards a more digital future, the healthcare sector is no exception. With an aging population and increasing demand for healthcare services, there is a pressing need for patient-centric services that are efficient and effective. However, introducing new technology can also have unintended consequences for healthcare professionals, particularly nurses.

Research has shown that work engagement in healthcare is complex, with nurses often experiencing high levels of exhaustion and stress. In light of this, it is essential to study the effects of digitalization on work engagement in patient-centric services.

Person sitting in front of a computer

In this study, researchers conducted contextual and semi-structured interviews with nurses using a new chat function and telephone system to provide medical advice to patients. The results showed that the new chat function affected work engagement both positively and negatively. While nurses experienced less time pressure and emotional pressure, they also felt a loss of job control and of feedback from colleagues when working from home.

This research highlights the need for a more nuanced understanding of the impact of digitalization on work engagement in healthcare and the importance of considering both the positive and negative effects. By taking a user-centered approach, we can develop patient-centric services that improve efficiency and patient outcomes and support healthcare professionals’ well-being and engagement.

For more info – see full paper: https://link.springer.com/article/10.1007/s41233-020-00038-x

NIVA course: Digitalization, Automation, AI and the Future Sustainable Work Environment

A photo of a computer, a phone, a cup of coffee and a note book

Digitalization, automation, and AI have been instrumental in transforming the modern workplace, but these advancements have also brought new challenges. The increasing reliance on digital systems has resulted in inefficient work processes, safety risks, and employee stress. However, with better knowledge, development processes, and leadership, organizations can create a future work environment that is both efficient and sustainable.

This is where the online course “Digitalization, Automation, AI and the Future Sustainable Work Environment” comes in. This two-part course will provide participants with the tools and knowledge they need to create a work environment that is both efficient and sustainable. The course will consist of two two-day sessions held on May 3-4, 2023, and May 31-June 1, 2023, and will feature a series of lectures, discussions, and a small individual project. Magdalena Stadin, Åsa Cajander, and Bengt Sandblad will be the main teachers of the course.

Participants will learn about the limitations and challenges of current digital systems and will be encouraged to use material and experiences from their own organizations. The course will also cover best practices and strategies for implementing AI and automation in a healthy, sustainable, and efficient way.

This course is designed for professionals in various industries, including technology, healthcare, education, and finance. It is also recommended as a PhD course worth 5 ECTS credits.

For more information, please contact Project Manager Linda Oksanen at linda.oksanen@niva.org.

Don’t miss out on this unique opportunity to shape the future of work and make a positive impact on your organization and society at large. Enroll in “Digitalization, Automation, AI and the Future Sustainable Work Environment” today!
