
Security – but where is the usability?

I wrote earlier about the threats currently facing our computer systems in society, and we can also see new attempts at increasing the security of these systems. However, there is an inherent problem in computer security, namely the transparency and usability of the systems. It seems to be very difficult to create security systems that are easy to use. We have grown used to writing our passwords on sticky notes and pasting them on the screen, and we keep our PIN codes on small paper notes in the wallet so that we will not forget them when we really need them. The reason is of course that passwords, in order to be strong, have to be all but impossible to remember. Even worse, in order to be really safe (according to the professional advice), you should have a different password everywhere. And then there are all the PIN codes to all the cards we have.

One important property of human beings is the limitation of our memory. We have problems remembering meaningless things, such as the recommended password: “gCjn*wZEZK^gN0HGFg4wUAws”. So people tend not to use that kind of password, which of course decreases security. Well, in some sense we also solved this problem by adding two-step verification, i.e., the “if-you-don’t-have-your-phone-you-are-lost” verification. This, of course, has to be interleaved with “find a motorcycle” or “find the traffic lights” games, to prove that you are not a robot (!).
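To see why such passwords are both strong and unmemorable, consider a minimal Python sketch (my own illustration, not taken from any particular password tool) that generates a random password of this kind and estimates its entropy:

```python
import math
import secrets
import string

# Draw 24 characters uniformly at random from letters, digits and punctuation,
# roughly the shape of the recommended password quoted above.
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(24))

# Entropy of a uniformly random password: length * log2(alphabet size).
# With 94 symbols this is about 24 * 6.55, roughly 157 bits: excellent
# against guessing attacks, hopeless for human memory.
entropy_bits = len(password) * math.log2(len(alphabet))
print(password, f"{entropy_bits:.0f} bits")
```

The security comes precisely from the property that makes the password unusable without a password manager (or a sticky note): it contains no structure for a human to hang their memory on.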

Now things have improved: we have biometric security. We use fingerprint or facial recognition methods. The only problem is that after a day of work in the garden, the fingerprints are no longer recognizable, and after a severe accident, the face may not look like you at all anymore, so you cannot call your family to say that you are OK. Well, at least it is safe, isn’t it?

Yes, but not when it comes to the current procedures for BankID, the digital identification used in Sweden. Yes, of course it works when you want to log into your bank to handle your affairs. It is an accepted identification method. BUT not when you want to move your BankID to a new telephone! To do so, you now (after the latest change) have to scan your passport or national ID card. The most common means of identification, which in Sweden is the driver’s license, is on the other hand NOT accepted.

You might think that this should not be a problem, since surely everybody has a passport today? But no, that is not the case. As anecdotal evidence, I will relate my father’s situation:

My father just turned 90 years old. He is still a young man in an old body, so he has an iMac, an iPad, a laser printer, etc. at home. He is in fact quite a heavy tech user for his age. He also had an old smartphone whose battery was failing, so he was given a new smartphone as one of his birthday presents. The transfer of the data went smoothly and without any hiccups, until it was time to use the BankID on the new phone. It was of course not transferred. Thus, we ordered a new BankID from his bank and signed the order with his BankID, to activate it on his new phone. But now…

Who is being excluded by the design?

My father decided to quit driving several years ago. However, he still kept his driver’s license and even had it renewed without problems. Although he is an ex-globetrotter, he reckoned that he no longer needs a passport. So, when I asked him for an ID, he produced the driver’s license. That, of course, did not work, although it is valid identification almost everywhere else. Going to the bank to identify himself in person was not an option either: the bank cannot validate the BankID. It has to be done through the web page and the app. Sorry!

So, now I have to take my father through the bitter cold and snow to the police station to get a new passport, which is only going to be used one single time, namely to install the BankID. Where is the user-friendly procedure in this?

I would think that my father is not the only person who is using the driver’s license as identification. I assume that many older people, for example, will have a similar problem when they need to get a new BankID (provided that they even use a smartphone).

Where is the human-friendly procedure for establishing identity? Why can we, for example, no longer trust the people at the bank to identify a person with a valid identification and flick a switch to accept the ID? To make the issue a bit more general: Where has the consequence analysis gone when we make these kinds of decisions? Or even better stated:

Who is going to be excluded by the new design or decision?

How vulnerable are we?

This post was actually started in late 2023, when the Swedish Church had become the victim of a ransomware cyberattack, which took place on November 22. The church organization decided at the time not to pay the ransom (in order not to make this a successful attack) but to instead recover the systems manually over time. However, this recovery takes a lot of time, and as long as the systems are not completely recovered, it is not possible to make any bookings for baptisms and weddings. In the case of a funeral, it has still been possible to make a booking, but the details have had to be taken down with pen and paper (i.e., post-it notes).

We are very vulnerable if we only depend on our digital systems.

Head of information services at the Swedish church

In Sweden, the church has been separated from the state, but it is still responsible for a number of national and regional administrative services, such as funerals. A large number of people also still use the church’s services for baptisms and weddings, where in the latter case it also fulfils its role as an official administrative unit, in parallel with the weddings that are registered by the government. Suffice it to say that the church depends heavily on digital administration for its work. Consequently, parts of Swedish society also depend on those same computer systems being intact.

More attacks…

In 2024, there have been a number of similar events, mostly involving ransomware, but also the overloading of web servers. The systems affected this time belong to other organizations and governmental institutions. The most prominent of them is probably the HR management system Primula, which is used by the defense organizations and industries, among many others (including universities). This time the attacks are suspected to have been carried out by Russian hackers, possibly as part of a destabilization campaign connected to the ongoing war in Ukraine.

Again, the main issue is not that there have been successful attacks, but rather that the backup systems are insufficient or, in most cases, seemingly missing. Hopefully the systems will soon be up and running again, but if there is an attack on systems that are more central to the functioning of society, then the problem is not confined to small organizations, but may affect larger systems, including systems for money transfers. Recently, shops have been forced to close when there have been prolonged problems with payment services.

In this context it is also important to point to the problem of payments. The Swedish Civil Contingencies Agency (MSB), which is responsible for helping society prepare for major accidents, crises and the consequences of war, recently sent out a message to the public, advising them to always have at least 2,000 SEK in cash at home. The question is whether society is prepared to revert to using cash for its transactions. A large number of shops and services no longer accept cash as payment.

What now?

When interviewed, the head of the information service for the Swedish Church said that one lesson they have learned from this event is that they have to be less dependent on computer services than before. He did not specify how in any detail, but the message was more or less clear: “We are very vulnerable if we only depend on our digital systems”. His conclusion is neither new nor especially controversial. When our computer systems or the Internet fail, we are more or less helpless in many places. However, most of the time the threats are envisioned in terms of disk crashes, physical damage or similar factors. The increased risk of cyberattacks is not communicated to the public to any great extent.

We depend on our IT support units to handle any interruption as fast as possible, but the question is whether this is enough. Are there backups of the data? Are there backup systems ready to be launched if the old system fails? Are there non-computer-based backup procedures that can replace the computer systems during a longer breakdown? Even if it is costly to maintain these backup systems and procedures, it is quite likely that we will need to add a higher level of security in order not to end up with a social disaster, where a large part of society is essentially incapacitated.

What are the consequences?

We can just imagine what would happen if, as mentioned above, the central systems for bank transfers fail badly or get “cyber-kidnapped”. Credit cards will not work, and neither will mobile money transfers or other electronic payment options. There will be no way to pay our bills, and we may not even receive the bills in the first place. Probably even the ATMs will cease to work, so there will be no way to get cash either. Imagine now that this failure lasts for days or weeks. What are the consequences?

But we do not even have to look at this national disaster scenario. It is enough to think about what would happen if the computer systems of universities or other large organizations were attacked by cyber-criminals. Not to mention the effects on critical health care, where minutes and seconds can count. Would we have any possibility to continue the work: reaching records or other important documents, scheduling meetings, planning operations and other important events? Are we really ready to start working on paper again, if necessary? I fear not!

With the current situation in the world, with wars and possibly also challenges from deteriorating environmental factors, a lack of emergency plans for our digital systems may not only cause serious problems, but may turn out to be truly disastrous in the event of a larger international crisis. Looking at what is currently happening around the world, it is easy to see that the risk of cyberattacks in international crisis situations has increased considerably. In many cases, the (possible) plans for how to proceed are not known to the people who work in the organizations. Is your work protected? Do you know what to do if the systems fail?

Unfortunately, we cannot continue to hope that “this will never happen”. Even if the most extreme of the possible scenarios may not happen, we are still very vulnerable to attacks, e.g., ransomware or denial-of-service attacks from “ordinary” cyber-criminals, and this can be just as bad at the local level, when a whole organization is brought to a halt because a computer system fails badly. Therefore, we need to act proactively in order not to be stuck if (or when) the systems fail. Because it is quite certain that they will fail at some point.

And how will YOUR organization handle that kind of situation? Do YOU know?

A Path to a Brighter Future: Understanding the Relationship Between Software Quality and Sustainability


Sustainable development is the development that meets the needs of the present without compromising the ability of future generations to meet their own needs.

Gro Harlem Brundtland, 1987

In recent years, sustainability has emerged as a critical concern in various domains. While environmental sustainability remains a focal point, sustainability also encompasses social and economic dimensions. In our technologically driven society, our daily lives involve ever-increasing digital needs, so researching how to create sustainable software is important. This research area still has many gaps to explore.

One aspect of software sustainability can be seen in this example: software that crashes frequently is not sustainable. The user will consider the product low-quality and will probably stop using it. However, the relationship between software quality and sustainability is not always this obvious. Moreover, trade-offs between sustainability and quality may sometimes be necessary, which calls for a comprehensive understanding of this relationship.

One area where software sustainability and quality have a positive relationship is cost efficiency. Well-crafted code typically requires less maintenance and suffers from fewer defects, translating to reduced operational costs over the software’s lifecycle. Moreover, code optimization and energy-efficient design further contribute to long-term savings, aligning with sustainability goals.

Software sustainability also encompasses social aspects, extending beyond technical considerations. Clean, understandable code not only facilitates collaboration among developers but can also foster a supportive community around the software. The societal impact can also extend to the software’s users when the software carries a social influence.

At the center of software sustainability lies the need to understand and address the needs of end-users. By prioritizing quality and sustainability, developers can deliver products that not only meet user expectations but also foster trust and loyalty among stakeholders. This user-centric approach enhances the software’s longevity and cultivates a sense of responsibility towards its societal and environmental impacts.

By embracing a focus on quality and sustainability, software products should be evaluated by more than their functionality. Focusing on sustainability and quality not only benefits end-users but also contributes to the well-being of companies, society, and the environment at large. I look forward to sharing more as the research progresses.

Maria Normark has joined the AROA project

Maria will be conducting research on work engagement within the realm of automation and AI, specifically within the field of rail traffic. Her particular interest lies in exploring the evolving division of labor between humans and technology, examining how AI will reshape work practices, and the potential risks of diminishing the meaningfulness of work. While automation has traditionally targeted repetitive and time-consuming tasks such as administration, monitoring, and manufacturing, the emergence of generative AI introduces new possibilities. This technology can now be applied in novel domains, offering solutions that partly replace professional intuition and creativity. Maria is interested in questions concerning the implications of this shift for future work engagement, how professionals will navigate this new landscape of labor division, and the role of embodied interaction within it.

As an associate professor in the Human-Computer Interaction (HCI) group at the Department of Informatics and Media, Uppsala University, Maria Normark’s research centers on fields such as critical design and Computer-Supported Cooperative Work (CSCW).

Research Update: Exploring Work Engagement in the Age of Automation, Robotics, and AI

In the fast-paced world of technology and automation, keeping a close eye on how these advancements affect the workforce’s engagement and dynamics is essential. The “ARbetsengagemang vid autOmatisering, robotisering och AI” (AROA) project, whose name translates to work engagement in automation, robotisation and AI, aims to illuminate this crucial aspect of our work. In this blog post, we’ll delve into the project’s first-year report and its progress.

AROA’s journey began with a literature review, where we scoured existing knowledge. This review aimed to identify critical knowledge gaps and relevant research to serve as the project’s foundation.

Collaboration is at the heart of AROA’s approach. In pursuit of comprehensive insights, we formed a reference group of stakeholders from various sectors. This diverse group of participants would be instrumental in shaping the project’s direction. AROA organised a dynamic full-day workshop to facilitate open dialogue and receive feedback.

Moreover, we did field studies to understand the real-world impact of automation and AI by conducting in-depth interviews and fieldwork within the agriculture and railway sectors. These empirical studies offered a closer look at how workers in these sectors were experiencing the transformative effects of technology firsthand.

In August 2023, AROA welcomed a doctoral student, strengthening the project’s research capabilities. The addition of Associate Professor Maria Normark brings even more depth to the project’s knowledge base.

These highlighted areas showcase AROA’s first-year progress. As the project evolves, it continues to illuminate the nature of work engagement in the automation, AI, and robotics age.

A Supportive Tool Nobody Talks About?

Information technology is now being developed at a pace that is almost unbelievable. This is of course not only visible in the shape of computers, but also in the form of embedded technology, for example in cars, home assistants, phones and so on. Much of this development is currently driven either by pure curiosity or by the wish to cater to some perceived (sometimes imagined) user need. Among the former we may count the first version of the chatbots, where the initial research was mostly looking at how very large language models could be created. Once people became aware of the possibilities, a need arose for services driven by the results, which in turn led to the new tools that are now being developed.

Among the latter we have the development in the car industry as one example. Safety has been a key issue for both drivers and car manufacturers for a long time. Most, if not all, new cars today are equipped with intelligent anti-lock brakes, anti-spin systems, and even sensors that support the driver in not crossing the lane markings involuntarily. The last feature is in fact more complex than one might think at first. Any course correction must be made in such a way that the driver feels that he or she is still in control of the car. The guiding systems have to interact seamlessly with the driver.

But there is another ongoing development of the latter kind, which we can already see will have major consequences for society and for people in general. This development is also invariably announced as being of primary importance for the general public (at least for people with some financial means). It is a product that sits at the far end of the ongoing development of car safety systems: the self-driving car. The current attempts are still not successful enough to allow these cars to be let completely loose in normal traffic.

There are, however, some positive examples already, such as autonomous taxis in Dubai, and there are several car-driving systems that almost manage to behave in a safe manner. This is still not enough, as there have been a number of accidents with cars running in self-driving mode. But even when the cars become safe enough, one of the main remaining problems is the issue of responsibility. Who is responsible in the event of an accident? Currently, the driver is in most cases still responsible, since the law says that you may not stop being aware of what is happening around you. But we are rapidly moving towards a future where self-driving cars may finally be a reality.

Why do we develop self-driving cars?

Enter the question: “Why?”. As a spoilsport I actually have to ask, why do we develop self-driving cars? In the beginning, there was, of course, the curiosity aspect. There were competitions where cars were set to navigate long distances without human intervention. But now it seems to have become more of a competitive factor between car manufacturers. Who will be the first car producer to cater to “the lazy driver who does not want to drive”?

It is, in fact, quite seldom that we hear any longer discussion of the target users of self-driving cars. For whom are they being developed? For the rich, lazy driver? If so, that is in my opinion a very weak motivation. Everybody will of course benefit from a safer driving environment, and when (if) there comes a time when there are only self-driving cars in the streets, it might be good for everybody, including cyclists and pedestrians. One other motivation mentioned has been that there are people who are unable to get a driver’s license, who would now be able to use a car.

But there is one group (or rather a number of groups) of people who would really benefit from this development as it progresses further. Who are these people? Well, it is not very difficult to see that the people who would benefit the most from self-driving cars are people with severe impairments, not least severe visual impairments. Today, blind people (among many others) are completely dependent on other people for their transport. In a self-driving car, they could instead be free to go anywhere, anytime, just like everyone else with a valid driver’s license can today. This is in one sense the definition of freedom, as an extension of independence.

Despite this, we never hear this as an argument for the development of this fantastic supportive tool (which, in fact, it could be). It is, as mentioned above, mostly presented as an interesting feature for techno-nerdy, rich and lazy drivers who do not want to make the effort of driving themselves. Imagine what would happen if we could motivate this research from the perspective of supportive tools. Apart from raising hope for millions of people who cannot drive, there would also be millions of potential, eager new buyers in the category of people who are blind or severely visually impaired alone. Add to this the possibility for older people, who today have to stop driving due to age-related problems, to keep using a car much longer, to great personal benefit.

The self-driving car is indeed a very important supportive tool, and therefore I strongly support the current development!

This is, however, just one case among many of how we can also motivate research as the development of supportive tools. We just have to see the potential in the research. Artificial intelligence methods will allow us to “see” things without the help of our eyes, make prostheses that move by will, and support people with dyslexia in reading and writing texts more easily.

All it takes is a little bit of thinking outside the box, some extra creativity, and, of course, good knowledge about impairments and about the current rapid developments within (information) technology.

Celebrating One Year of HTO Research Group Blogging: A Recap

As the year ends and the holiday season is upon us, we want to take a moment to wish all our readers a Merry Christmas and a Happy New Year! It’s been an incredible journey for the HTO Research Group blog, and as we celebrate one year of sharing our research insights and findings with you, we’d like to reflect on the events and articles we’ve covered over the past year.

In the past year, we have published 40 blog posts covering various topics in human-computer interaction, technology, and work environment. We’ve shared our research findings, insights, and experiences with you, our readers.

Highlights from the Past Year

Vision Seminars: Pioneering User-Centric IT Design

Our commitment to user-centric IT design was highlighted in a blog post by Åsa Cajander. She discussed the long-standing tradition of conducting Vision Seminars within our research projects, showcasing how this innovative approach has shaped our engagement with technology and work systems design.

AI for Humanity and Society 2023 Conference

In November, our blog covered the annual conference on “AI for Humanity and Society 2023 and Human Values” held in Malmö. Andreas Bergqvist provided an insightful recap of the conference, which featured three keynotes and panels addressing critical issues surrounding AI’s impact on society. The discussions delved into criticality, norms, and interdisciplinarity.

AI4Research Fellowship 2024

A significant milestone was achieved when Åsa Cajander announced her participation in the AI4Research Fellowship. This five-year initiative at Uppsala University aims to advance AI and machine learning research, and we are honored to be part of it.

Exciting New EDU-AI Project

We also announced the commencement of a new research project, the EDU-AI project, which explores the transformative impact of generative AI technologies on education. Starting in April 2024, this project will address critical issues related to digital workplace health and usability in IT systems.

Exploring the Future of Healthcare: Insights from MIE’2023 and VITALIS 2023

Sofia shared insights from two significant healthcare conferences, MIE’2023 and VITALIS 2023. The blog post explored the advancements and challenges in healthcare, focusing on AI’s role in shaping the future of healthcare.

Empowering People with Anxiety: Biofeedback-based Connected Health Interventions

Sofia explored the growing issue of anxiety in today’s fast-paced world and introduced biofeedback-based connected health interventions as a potential solution. The blog post highlighted the significance of connected health approaches in addressing anxiety.

TikTok – What is the Problem?

Lars addressed the concerns surrounding the social media platform TikTok, particularly from a security perspective. He discussed the potential dangers and implications of TikTok’s usage, emphasizing the need for awareness and caution.

Writing Retreat with the HTO Group

Rebecca Cort shared the HTO research group’s tradition of hosting writing retreats and discussed the importance of creating dedicated time and space for focused writing. The blog post highlighted the group’s commitment to productive research.

Insightful Publications

Throughout the year, we shared several research papers and publications, each providing valuable insights into various aspects of technology and its impact on our lives. From the effects of AI on work engagement to the challenges faced by caregivers of cancer patients, our research has covered a wide range of topics.

Looking Ahead

As we enter the new year, we are excited about the continued growth of the HTO Research Group blog. We have more research findings, insights, and events to share with you, and we look forward to engaging with our readers in meaningful discussions.

We wish you a joyous holiday season, and may the new year bring you happiness and discoveries.

Happy Holidays and a Prosperous New Year from the HTO Research Group!

HTO Coverage: AI for Humanity and Society 2023 and human values

In mid-November in Malmö, WASP-HS held its annual conference on how AI affects our lives as it becomes more and more entwined in our society. The conference consisted of three keynotes and panels on the topics of criticality, norms, and interdisciplinarity. This blog post will recap the conference based on my takeaways regarding how AI affects us and our lives. As a single post, it is too short to capture everything that was said during the conference, but that was never the intention anyway. If you don’t want to read through the whole thing, my main takeaway was that we should not rely on the past, through statistics and data with their biases, to solve the problems of AI. Instead, when facing future technology, we should consider what human values we want to protect and how that technology can be designed to support and empower those values.

The first keynote, on criticality, was by Shannon Vallor. It discussed metaphors for AI: she argued for the metaphor of the mirror instead of the myth that the media might portray AI as. We know a lot about how AI works; it is not a mystery. AI is technology that reflects our values and what we put into it. When we look at it and see it as humane, it is because we are looking for our own reflection. We are looking for ourselves to be embodied in it, and anything it does is built on a distortion of our data. Data that is biased and flawed, mirroring the systemic problems in society and its lack of representation. While this might give off an image of intelligence or empathy, that is just what it is: an image. There is no intelligence or empathy there, only the prediction of what would appear empathetic or intelligent. Vallor likened us to Narcissus, caught in the flawed reflection of ourselves that we built into the machine. Any algorithm or machine learning model will be more biased than the data it is built on, as it is drawn towards the norms of the biases in the data. We should sort out what our human morals are and take biases into account in any data we use. She is apparently releasing a book on the topic of the AI metaphor, and I am at least curious to read it after hearing her keynote. Two of the points that Vallor ended on were that people on the internet usually have a lot to say about AI while knowing very little, and that we need new educational programmes that teach what is human-centred so that it does not get lost amid the tech that is pushed.

The panel on criticality was held between Airi Lampinen, Amanda Lagerkvist, and Michael Strange. Some of the points raised were that we shouldn’t rush technology, that the reductionistic view held by much of the industry will miss the larger societal problems, that novelty is a risk, that we should worry about what boxes we are put into, and the question of what human values we want to protect from technology. Creating new technology just because we can is never the real reason; it is always done for someone. Who would we rather it was for? Society and humanity, perhaps? The panelists argued that without interventions it would be under the control of market forces, and that stupid choices are made because they looked good at the time.

The second keynote, on norms and values, was by Sofia Ranchordas, who discussed the clash between administrative law, which is about protecting individuals from the state, and digitalization and automation, which build on statistics that hide individuals categorised into data and groups, and the need to rehumanize their regulation. Digitalization is designed for the tech-savvy man and not the average citizen. But it is not even the average citizen who needs these functions of society the most; it is the extremes and the outliers, and they are even further from being tech-savvy men. We need to account for these extremes through human discretion, empathy, vulnerability, and forgiveness. Decision-making systems can be fallible, but most people won’t have the insight to see it. She ended by saying that we need to make sure that technology does not increase the asymmetries of society.

The panel that followed consisted of Martin Berg, Katja De Vries, Katie Winkle, and Martin Ebers. The participants’ presentations raised topics such as why people think AI is sentient and fall into the trap of anthropomorphism; that statistics cannot solve AI, as the two are built on different epistemologies, and that those who push it want algorithmic bias because they are the winners of the digital market; the practical implications of the limitations of robots available on the market for use in research; and the issues in how we assess risk in regulation. The following discussion included the points that law is more important than ethical guidelines for protecting basic rights, and that it is somehow both too early to regulate and yet we do not want technology to cause actual problems before we can regulate it. A big issue is also the question of whether we are regulating the rule-based systems we have today or the technological future of AI. It is also important to remember that not all research on and implementation of AI is problematic, as a lot of research into robotics and automation is aimed at a better future.

The final keynote, by Sarah Cook, was about the interdisciplinary junction between AI and art. It brought up many examples of projects at this intersection, such as Ben Hamm’s Catflap, ImageNet Roulette, Ian Cheng’s Bad Corgi, and Robots in Distress, to highlight a few. One of the main points in the keynote was shown through Matthew Biederman’s A Generative Adversarial Network: generative AI remains ever reliant on human input data, as it implodes when endlessly fed its own output.
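That last point, often called model collapse, is easy to illustrate with a toy simulation. The following Python sketch is my own illustration (not something shown at the conference): a one-dimensional Gaussian “generative model” is refit, generation after generation, only on samples drawn from the previous generation’s model, and its diversity tends to wither away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: the "real" data distribution.
mu, sigma = 0.0, 1.0
n_samples = 20  # a deliberately small training set makes the effect visible

for generation in range(1, 301):
    # The current model generates its own training data for the next model.
    samples = rng.normal(mu, sigma, n_samples)
    # Refit the model on its own output; no real data ever re-enters the loop.
    mu, sigma = samples.mean(), samples.std()
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```

Run long enough, the fitted standard deviation drifts towards zero: each generation inherits only the sampling noise of the previous one, which is the statistical core of the point the artwork makes.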

The final panel was between Irina Shklovski, Barry Brown, Anne Kaun, and Kivanç Tatar. The discussion raised topics such as questioning the need for disciplinarity and how to deal with conflicting epistemologies; the failures of interdisciplinarity, as different disciplines rarely acknowledge each other; how to deal with different expectations of contributions and methods when different fields meet; and the fact that interdisciplinary work often ends up being social scientific. A lot of, if not most, work in HCI tends to end up being interdisciplinary in some regard. As an example from the author: Uppsala University has two different HCI research groups, one at the faculty of natural sciences and one at the faculty of social sciences, and neither fits perfectly into either. The final discussion was on the complexities of dealing with interdisciplinarity as a PhD student. It was interesting and thought-provoking, as a PhD student in an interdisciplinary field, to hear the panelists and audience members bring up their personal experiences of such problems. I might return to the topic in a couple of years, when my studies draw to a close, to pay the favour forward and tell others about my experiences so that they can learn from them as well.

Overall, it was an interesting conference highlighting the value of not forgetting what we value in humanity and what human values we want to protect in today’s digital and automated transformation.

Vision Seminars: Pioneering User-Centric IT Design

Our research group has proudly upheld a long-standing tradition of conducting Vision Seminars within the scope of action research projects. This innovative approach, predominantly led by Bengt Sandblad, has significantly shaped how we engage with technology and work systems design. Niklas Hardenborg’s doctoral thesis, which delves deeply into designing work and IT systems through participatory processes with a strong focus on usability and sustainability, further exemplifies our commitment to this approach.

Over the years, we’ve produced an impressive array of studies and papers demonstrating the diversity and depth of our engagement with Vision Seminars. Our works, authored by researchers like Åsa Cajander, Marta Larusdottir, Thomas Lind, Magdalena Stadin, Mats Daniels, Robert McDermott, Simon Tschirner, Jan Gulliksen, Elina Eriksson, and Iordanis Kavathatzopoulos, span a wide range of topics. These range from in-depth explorations of user involvement in extensive IT projects, as seen in our latest publication on vision seminars called “Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project,” to more focused case studies in areas such as university education administration and the development of train driver advisory systems for improved situational awareness.

A key theme that runs through our studies is the vital role of users in shaping the future of technology and work practices. Papers like “The Use of Scenarios in a Vision Seminar Process” and “Students Envisioning the Future” underscore the proactive role of participants in moulding future digital work environments. Our approach is distinctively collaborative, inviting various stakeholders to craft visions guiding user-centred systems’ evolution.

Our research extends beyond examining specific sectors or systems. It addresses the larger methodological and organizational changes necessary to enhance usability and the digital work environment. “User-centred systems design as organizational change,” by Gulliksen and others, is a prime example of this broader view, embedding user-centred design into the very fabric of organizational processes and culture.

In summary, our body of work contributes significantly to the field of Human-Computer Interaction and sets a benchmark in involving users in the technological design process. Through Vision Seminars, we continue to champion a participatory, user-centred approach in systems design, aiming to create more usable, sustainable, and future-oriented IT systems and work practices. This commitment cements our position as pioneers in the field, constantly pushing the boundaries of how user involvement can shape the technological landscape.

Some of our Research Papers on Vision Seminars

Cajander, Å., Larusdottir, M., Lind, T., & Stadin, M. (2023). Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project. Interacting with Computers, iwad046.

Cajander, Å., Sandblad, B., Lind, T., Daniels, M., & McDermott, R. (2015). Vision Seminars and Administration of University Education – A Case Study. Paper! Sessions!!, 29.

Lind, T., Cajander, Å., Björklund, A., & Sandblad, B. (2020, October). The Use of Scenarios in a Vision Seminar Process: The Case of Students Envisioning the Future of Study-Administration. In Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society (pp. 1–8).

Lind, T., Cajander, Å., Sandblad, B., Daniels, M., Lárusdóttir, M., McDermott, R., & Clear, T. (2016, October). Students envisioning the future. In 2016 IEEE Frontiers in Education Conference (FIE) (pp. 1–9). IEEE.

Tschirner, S., Andersson, A. W., & Sandblad, B. (2013). Designing train driver advisory systems for situation awareness. Rail Human Factors: Supporting Reliability, Safety and Cost Reduction. Taylor & Francis, London, 150–159.

Gulliksen, J., Cajander, Å., Sandblad, B., Eriksson, E., & Kavathatzopoulos, I. (2009). User-centred systems design as organizational change: A longitudinal action research project to improve usability and the computerized work environment in a public authority. International Journal of Technology and Human Interaction (IJTHI), 5(3), 13–53.

Hardenborg, N. (2007). Designing work and IT systems: A participatory process that supports usability and sustainability (Doctoral dissertation, Acta Universitatis Upsaliensis).

Hardenborg, N., & Sandblad, B. (2007). Vision Seminars – Perspectives on Developing Future Sustainable IT Supported Work. Behaviour & Information Technology, Taylor & Francis.

Olsson, E., Johansson, N., Gulliksen, J., & Sandblad, B. (2005). A participatory process supporting design of future work.

New Publication: Shaping the Future of IT Projects: Insights from Vision Seminars

In the ever-evolving world of information technology, understanding and incorporating user needs has never been more crucial. This is the crux of a study titled “Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project,” authored by Åsa Cajander, Marta Larusdottir, Thomas Lind, and Magdalena Stadin. Their research delves into the impactful role of Vision Seminars (VS) in steering large IT projects towards success.

Information about the paper:
Cajander, Å., Larusdottir, M., Lind, T., & Stadin, M. (2023). Experiences of Extensive User Involvement through Vision Seminars in a Large IT Project. Interacting with Computers, iwad046.

A New Approach to IT Development

The digital landscape is complex and demands methods that consider the full spectrum of the user’s work environment. The study by Cajander and her colleagues focuses on the Vision Seminar process, a method designed to address future technology use in intricate digital work settings. This approach is not just about technology; it is about understanding how people interact with these systems in their daily work lives.

Revelatory Findings

The research revealed several key insights:

  • User-Centric Success: Participants in the Vision Seminars reported a newfound holistic understanding of their work. This broader perspective led to the discovery of more effective methods of support.
  • Feasibility of Future Visions: The study highlighted the participants’ belief in the practicality and desirability of envisioned future IT systems.
  • Integration Challenges: A notable revelation was the difficulty of embedding user-centric methods in fast-paced software development environments.

Methodology

The study’s mixed-methods approach, utilizing surveys and interviews, offered a rich, multi-dimensional understanding of the impact of Vision Seminars. This comprehensive method ensures robust findings and reflects diverse experiences and opinions.

Practical Applications for the Real World

What does this mean for the IT industry? The findings underscore the importance of involving users in developing IT systems. This involvement enhances user satisfaction and can also guide the direction of IT projects more effectively.

Addressing the Challenges

Despite the positive outcomes, the Vision Seminar process has challenges. The time and resources required for such extensive user involvement can pose significant difficulties in smaller or more technology-centric projects.

Concluding Thoughts

This study is crucial to our understanding of user involvement in IT development. It reinforces the notion that the future of IT systems must be shaped by those who use them, ensuring that technology serves people, not the other way around.

Acknowledgements

This research was made possible through the support of AFA.
