In mid-November in Malmö, WASP-HS held its annual conference on how AI affects our lives as it becomes ever more entwined in our society. The conference consisted of three keynotes and panels on the topics of criticality, norms, and interdisciplinarity. This blog post recaps the conference based on my takeaways regarding how AI affects us and our lives. As a single post, it is too short to capture everything that was said during the conference, but that was never the intention anyway. If you don’t want to read through the whole thing, my main takeaway was that we should not rely on the past, through statistics and data with their biases, to solve the problems of AI. Instead, when facing future technology, we should consider what human values we want to protect and how that technology can be designed to support and empower those values.
The first keynote, on criticality, was given by Shannon Vallor. It discussed metaphors for AI; she argued for the metaphor of the mirror instead of the myths that media might portray AI as. We know a lot about how AI works; it is not a mystery. AI is technology that reflects our values and what we put into it. When we look at it and see it as human, it is because we are looking for our reflection. We are looking for ourselves to be embodied in it, and anything it does is built on a distortion of our data. That data is biased and flawed, mirroring the systematic problems in our society and its lack of representation. While this might give off an image of intelligence or empathy, that is just what it is: an image. There is no intelligence or empathy there, only the prediction of what would appear empathetic or intelligent. Vallor likened us to Narcissus, caught in the reflection of ourselves that we so flawedly built into the machine. Any algorithm or machine learning model will be more biased than the data it is built on, as it drifts towards the biased norms in that data. We should sort out what our human morals are and take biases into account in any data we use. She is apparently releasing a book on the topic of the AI metaphor, and after hearing her keynote I am at least curious to read it. Two of the points Vallor ended on were that people on the internet usually have a lot to say about AI while knowing very little, and that we need new educational programs teaching what is human-centered so that it does not get lost among the tech being pushed.
The panel on criticality featured Airi Lampinen, Amanda Lagerkvist, and Michael Strange. Among the points raised were that we shouldn’t rush technology, that the reductionist view held by much of the industry misses larger societal problems, that novelty is a risk, that we should worry about what boxes we are put into, and that we should ask what human values we want to preserve from technology. Creating new technology just because we can is never the real reason; it is always done for someone. Who would we rather it was for? Society and humanity, perhaps? The panelists argued that without interventions it would be left under the control of market forces, and foolish choices get made because they looked good at the time.
The second keynote, on norms and values, was given by Sofia Ranchordas, who discussed the clash between administrative law, which is about protecting individuals from the state, and digitalization and automation, which build on statistics that hide individuals categorised into data and groups, and argued for the need to rehumanize their regulation. Digitalization is designed for the tech-savvy man, not the average citizen. But it is not even the average citizen who needs these functions of society the most; it is the extremes and outliers, and they are even further from being tech-savvy men. We need to account for these extremes through human discretion, empathy, vulnerability, and forgiveness. Decision-making systems can be fallible, but most people won’t have the insight to see it. She ended by saying that we need to make sure technology doesn’t increase the asymmetries of society.
The panel that followed consisted of Martin Berg, Katja De Vries, Katie Winkle, and Martin Ebers. The participants’ presentations raised topics such as why people think AI is sentient and fall into the trap of anthropomorphism; that statistics cannot solve AI, as the two are built on different epistemologies, and that those who push it want algorithmic bias since they are the winners of the digital market; the practical limits of the robots available on the market for use in research; and the issues in how we assess risk in regulation. The discussion that followed included the points that law is more important than ethical guidelines for protecting basic rights, and that it is both too early to regulate and yet we don’t want technology to cause actual harm before we can regulate it. A big issue is also whether we are regulating the rule-based systems we have today or the technological future of AI. It is also important to remember that not all research on and implementation of AI is problematic; a lot of research into robotics and automation is aimed at a better future.
The final keynote, by Sarah Cook, was about the interdisciplinary junction between AI and art. It brought up many examples of projects at this intersection, such as Ben Hamm’s Catflap, ImageNet Roulette, Ian Cheng’s Bad Corgi, and Robots in Distress, to highlight a few. One of the main points of the keynote was shown through Matthew Biederman’s A Generative Adversarial Network: generative AI remains ever reliant on human input data, as it implodes when endlessly fed its own output.
The final panel consisted of Irina Shklovski, Barry Brown, Anne Kaun, and Kivanç Tatar. The discussion raised topics such as questioning the need for disciplinarity and how to deal with conflicting epistemologies; the failures of interdisciplinarity, as different disciplines rarely acknowledge each other; how to handle differing expectations of contributions and methods when fields meet; and how interdisciplinary work often ends up being social-scientific. A lot of, if not most, work in HCI tends to be interdisciplinary in some regard. As an example from the author: Uppsala University has two different HCI research groups, one at the faculty of natural sciences and one at the faculty of social sciences, yet neither is a perfect fit for the field. The final discussion was about the complexities of dealing with interdisciplinarity as a PhD student. As a PhD student in an interdisciplinary field myself, I found it interesting and thought-provoking to hear panelists and audience members bring up their personal experiences of such problems. I might return to the topic in a couple of years, when my studies draw to a close, to pay the favour forward and share my own experiences so that others can learn from them as well.
Overall, it was an interesting conference, highlighting the importance of not forgetting what we value in humanity and which human values we want to protect in today’s digital and automated transformation.