In this second edition of the Thought Leaders series, I’m talking with Marisa Tschopp. She is a researcher at scip AG and holds various positions in the field of AI. She is an Ambassador and Chief Research Officer at Women in AI NPO, and Co-Chair of the IEEE Trust and Agency in AI Systems committee.
Her research interests include trust, performance measurement of conversational AI, agency, leadership, and issues of gender equality in AI. She has taught at universities in Germany, Canada, and Switzerland, and has published various articles, book chapters, and essays. She is an associate researcher in the Social Processes Lab at the Leibniz Institut für Wissensmedien (IWM), where she investigates the perceived relationships between humans and machines.
In this interview, we discuss the difference between trust and trustworthiness, and how technology shapes the relationships we have with our devices.
Kicking off, why is it so hard for computers to understand human language?
I am not a computer scientist; I am a psychologist. I find it much more interesting to find out why it is so hard for humans to understand computers’ ‘language’. To be more concrete, it keeps fascinating me why we cannot get our heads around the fact that even if computers speak to us in our language, it doesn’t mean they communicate in the same social, emotional, and ‘meaning’-ful way as we do with other humans. Computers are in fact tools, assisting us in a wide variety of tasks, but still, we do not — or rather almost cannot — perceive them as just tools.
What is the ‘CASA paradigm’ and how does it impact conversational AI?
We cannot not anthropomorphize. Humans have a tremendous capacity to see the human in all kinds of non-human agents, from teddy bears to deities to machines. The CASA paradigm — Computers As Social Actors — in essence tells us that people apply the social scripts they have learned from interacting with other humans to their interactions with machines.
This is especially true when these machines are able to talk to us in our language. For example, we may be more inclined to share personal information with a social chatbot like Replika AI, or we may develop a deep connection with it. In some cases, this connection can even lead to addiction.
“Computers are in fact tools, assisting us in a wide variety of tasks, but still, we do not – or rather almost cannot – perceive them as just tools.”
Trust is a big topic in conversational AI. What is the role of trust and trustworthiness in human-computer interaction?
First of all, it is important to understand the major difference between these two terms. Trust is an attitude of a human being. It is always directed towards a goal (e.g., I trust product X to do Y). When we talk about trust, the situation is always characterized by uncertainty, risk, or fear — there is often a risk of getting hurt. This is why, between humans, trust is a gift which is given, because we make ourselves vulnerable as trust givers.
When we rely on machines we also make ourselves vulnerable, because machines are never perfect, and there is some risk of getting “hurt”. That’s why we need trustworthiness. Trustworthiness basically refers to properties of a machine (e.g., performance indicators or transparent processes) and is actually a pretty technical term. Companies should always focus on trustworthiness and never ask how to increase trust in AI — this is much too close to manipulation; or ‘trust gamification’ as I like to call it.
We live in a time where our devices have started to talk back. They can take on the role of mentor, friend, mental health coach, or even virtual lover. What do you think the future holds?
I usually try to avoid questions about the future of AI. It often ends in a debate that seems stuck in extremes: on the one hand, the Nostradamus followers who proclaim the end of the world; on the other, the tech evangelists promising salvation and AI as the solution for all the world’s problems (i.e., techno-solutionism).
We often hear about the radical changes AI will bring, but I don't like this expression. I prefer radical humility and radical differentiation. I see opportunities in making lives easier and better, but no problem is ever solved by technology alone. The entanglement between humans and machines is growing, and we are more and more ‘teaming up’ with AI. Approached with some grit, and without over-trusting, this is great. However, we will presumably also have to deal with many unforeseeable tensions in defining the boundaries of human and machine agency.
As researchers, designers, and regulators, we increasingly face the challenge of defining boundaries and establishing thresholds to address the tensions between smooth user interfaces and their negative effects. It means that we need to set standards and/or rules on how to test AI systems before letting them out into the wild.
Follow Marisa Tschopp on LinkedIn