Author: Andreas Bergqvist

HTO Coverage: AI for Humanity and Society 2023 and human values

In mid-November, WASP-HS held its annual conference in Malmö on how AI affects our lives as it becomes ever more entwined in our society. The conference consisted of three keynotes and panels on the topics of criticality, norms, and interdisciplinarity. This blog post will recap the conference based on my takeaways regarding how AI affects us and our lives. As a single post, it is too short to capture everything that was said during the conference, but that was never the intention anyway. If you don’t want to read through the whole thing, my main takeaway was this: we should not rely on the past, through statistics and data with their biases, to solve the problems of AI. Instead, when facing future technology, we should consider what human values we want to protect and how that technology can be designed to help and empower those values.

The first keynote, on criticality, was given by Shannon Vallor. It discussed metaphors for AI: she argued for the metaphor of the mirror instead of the myth that the media might portray AI as. We know a lot about how AI works; it is not a mystery. AI is technology that reflects our values and what we put into it. When we look at it and see it as humane, it is because we are looking for our own reflection. We are looking for ourselves to be embodied in it, and anything it does is built on a distortion of our data. Data that is biased and flawed, mirroring the systematic problems of our society and its lack of representation. While this might give off an image of intelligence or empathy, that is just what it is: an image. There is no intelligence or empathy there, only the prediction of what would appear empathetic or intelligent. Vallor likened us to Narcissus, caught in the reflection of ourselves that we so imperfectly built into the machine. Any algorithm or machine learning model will be more biased than the data it is built on, as it draws towards the norms of the biases in that data. We should sort out what our human morals are and take biases into account in any data we use. She is apparently releasing a book on the topic of the AI metaphor, and I am at least curious to read it after hearing her keynote. Two of the points that Vallor ended on were that people on the internet usually have a lot to say about AI while knowing very little, and that we need new educational programmes that teach what is human-centred so that it does not get lost among the tech that is pushed.
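As a toy illustration of that bias-amplification point (my own sketch, not Vallor’s): when a model has no useful signal to tell cases apart, simply optimising for accuracy on skewed data turns the skew in the data into an even stronger skew in the output, since the best constant predictor always picks the majority class. The 70/30 hiring labels below are invented for the example.

```python
# Toy sketch of bias amplification: with no informative features, the
# accuracy-maximising model is the constant majority-class predictor,
# so a 70/30 skew in the data becomes a 100/0 skew in the output.
from collections import Counter

data = ["hired"] * 70 + ["rejected"] * 30      # invented 70/30 training labels
majority_label = Counter(data).most_common(1)[0][0]

predictions = [majority_label for _ in data]   # best constant predictor
print(Counter(data))          # Counter({'hired': 70, 'rejected': 30})
print(Counter(predictions))   # Counter({'hired': 100})
```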

The panel on criticality featured Airi Lampinen, Amanda Lagerkvist, and Michael Strange. Among the points raised were that we shouldn’t rush technology, that the reductionist view held by much of the industry will miss the larger societal problems, that novelty is a risk, that we should worry about what boxes we are put into, and that we should ask what human values we want to preserve from technology. Creating new technology just because we can is never the real reason; it is always done for someone. Who would we rather it was for? Society and humanity, perhaps? The panellists argued that, without interventions, it would be left to the control of market forces, and that stupid choices get made because they looked good at the time.

The second keynote, on norms and values, was given by Sofia Ranchordas, who discussed the clash between administrative law, which is about protecting individuals from the state, and digitalisation and automation, which build on statistics that hide individuals by categorising them into data and groups, and the need to rehumanise their regulation. Digitalisation is designed for the tech-savvy man, not the average citizen. But it is not even the average citizen who needs these functions of society the most; it is the extremes and outliers, and they are even further from being tech-savvy men. We need to account for these extremes through human discretion, empathy, vulnerability, and forgiveness. Decision-making systems can be fallible, but most people won’t have the insight to see it. She ended on the point that we need to make sure that technology doesn’t increase the asymmetries of society.

The panel that followed consisted of Martin Berg, Katja De Vries, Katie Winkle, and Martin Ebers. The participants’ presentations raised topics such as why people think AI is sentient and fall into the trap of anthropomorphism; that statistics cannot solve the problems of AI, as the two are built on different epistemologies, and that those who push it want algorithmic bias since they are the winners of the digital market; the practical implications of the limited range of robots available on the market for use in research; and the issues in how we assess risk in regulation. The discussion that followed included the point that law is more important than ethical guidelines for protecting basic rights, and the tension that it is both too early to regulate and yet we don’t want technology to cause actual problems before we can regulate it. A big issue is also whether we are regulating the rule-based systems we have today or the technological future of AI. It is also important to remember that not all research on and implementation of AI is problematic, as a lot of research into robotics and automation is for a better future.

The final keynote was by Sarah Cook and concerned the interdisciplinary junction between AI and art. It brought up many different examples of projects in this intersection, such as Ben Hamm’s Catflap, ImageNet Roulette, Ian Cheng’s Bad Corgi, and Robots in Distress, to highlight a few. One of the main points of the keynote was shown through Matthew Biederman’s A Generative Adversarial Network: generative AI is ever reliant on human input data, as it implodes when endlessly fed its own output.
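That implosion, often called model collapse in the machine learning literature, can be shown with a toy example. The sketch below (my own, unrelated to Biederman’s piece) replaces the neural network with the simplest possible generative model, an empirical distribution over symbols: each generation is trained only on samples from the previous one, and any symbol that misses one generation’s sample is lost forever, so diversity can only shrink.

```python
# Toy sketch of model collapse: a "generative model" that is just the
# empirical distribution of its training data, retrained each generation
# on its own samples. Symbols that miss one sample never come back.
import random
from collections import Counter

random.seed(1)
vocab = list("abcdefgh")
data = [random.choice(vocab) for _ in range(200)]   # generation 0: human data

for generation in range(15):
    counts = Counter(data)                  # "train" on the current data
    symbols = sorted(counts)
    weights = [counts[s] for s in symbols]
    print(f"generation {generation:2d}: {len(symbols)} symbols: {''.join(symbols)}")
    # "generate" the next training set from the fitted model alone
    data = random.choices(symbols, weights=weights, k=15)
```

Real generative models degrade more gradually, but the mechanism is the same: without fresh human data in the loop, sampling noise compounds and the tails of the distribution disappear first.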

The final panel featured Irina Shklovski, Barry Brown, Anne Kaun, and Kivanç Tatar. The discussion raised topics such as questioning the need for disciplinarity, how to deal with conflicting epistemologies, the failures of interdisciplinarity as different disciplines rarely acknowledge each other, how to handle different expectations of contributions and methods when different fields meet, and how interdisciplinary work often ends up being social scientific. A lot of, if not most, work in HCI ends up being interdisciplinary in some regard. As an example from the author: Uppsala University has two different HCI research groups, one at the faculty of natural sciences and one at the faculty of social sciences, and neither fits perfectly in either. The final discussion was on the complexities of dealing with interdisciplinarity as a PhD student. As a PhD student in an interdisciplinary field, I found it interesting and thought-provoking to hear the panellists and audience members bring up their personal experiences of such problems. I might return to the topic in a couple of years, when my studies draw to a close, to pay the favour forward and share my experiences so that others can learn from them as well.

Overall, it was an interesting conference, highlighting the importance of not forgetting what we value in humanity and which human values we want to protect in today’s digital and automated transformation.

HTO Coverage: Led by Machines and differing perspectives

By 2030, 75% of companies in the EU should use AI, big data, or the cloud. That is one of the targets the European Commission has declared in its Digital Decade policy programme (European Commission, 2023). While a lot of research has been and is being done to study AI and work, there is a noticeable gap in research on the effects of AI and automation on the working environment and working conditions (Cajander et al., 2022). Our work in the TARA and AROA projects aims to help bridge this gap, but we are not the only ones currently working to do so. This blog post will act as HTO coverage¹ of one such initiative.

Last week, I attended the conference Led by Machines in Stockholm. The conference marked the launch of a new international research initiative to study how algorithmic management affects the nature of work and workers’ experiences. The main part of the conference was a set of keynotes and panels that covered the need for research on the topic and the work that has already been done. My main takeaway, however, was how it highlighted the different perspectives in play in this domain. The conference brought together trade unionists, policymakers, and researchers from different fields, which meant that the implications were discussed from macro-, meso-, and micro-perspectives. Coming from human-computer interaction and user-centred design, I am used to studying the micro-level of how individuals use and are affected by technologies. In contrast, most of those I spoke to at the conference worked at the macro-level: for example, a political science researcher who discussed how policy and regulation around technology are decided on, and an economics researcher who studied the impact of AI on changes in the labour market. Others worked with topics at a meso-level, such as a trade unionist who discussed the effects on social relations and the role of middle management in organisations. The swift adoption of new technology that we stand before can have unforeseen consequences across all of these levels. As such, it is great that researchers and other stakeholders interested in questions and problems at different levels can come together and work toward a better understanding of this knowledge gap.

¹ The Swedish word “omvärldsbevakning” (lit. “monitoring of the surrounding world”) is often translated as environmental scanning or business intelligence. Both translations come with connotations and implications that do not align clearly with research in general or with the topic at hand. Should this become a recurring series of blog posts, I instead refer to it as HTO coverage, as it will provide coverage of topics related to Human, Technology, and Organisation that occur outside our research group.

References:

Cajander, Å., Sandblad, B., Magdalena, S., & Elena, R. (2022). Artificial intelligence, robotisation and the work environment. Swedish Agency for Work Environment Expertise. Retrieved September 29th from https://sawee.se/publications/artificial-intelligence-robotisation-and-the-work-environment/

European Commission. (2023). Europe’s Digital Decade: digital targets for 2030. Retrieved September 29th from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/europes-digital-decade-digital-targets-2030_sv

The author has no affiliation with the organisations that organised the Led by Machines conference.