Inaugural lecture prof. dr. Nava Tintarev

Should you trust a machine - and why?

Artificial Intelligence (AI): is it an amazing technology that we need to implement absolutely everywhere, or a boogeyman that could spell the end of humanity? According to professor Nava Tintarev, the truth lies somewhere in between. “AI is a powerful tool that we can use responsibly if we know its shortcomings.” Knowing those shortcomings is tricky, however, as AI models continue to elude human understanding. As professor of Explainable Artificial Intelligence, Tintarev is working to change that.

Imagine that your team is hiring for a new position. Your co-worker insists that your favorite applicant would be a bad hire – but they can’t explain themselves very well. Without being able to assess their reasons, your decision to ignore or accept their judgement comes down to one thing: trust.

If your co-worker were a human being, you would probably rely on a mixture of gut feeling and experience to determine if you should trust them. But how can you figure out if their advice aligns with your own needs? Should you even consider their recommendations? And perhaps most importantly: what if your 'co-worker' is an artificially intelligent machine?

Reality stranger than fiction

This scenario may seem like science fiction, but it’s closer to reality than most of us realize. Although computer algorithms recommend things to us on a daily basis, like search results on Google or posts on our social media feeds, only a few of them can explain their suggestions in a helpful way. And that’s a problem. Such systems have shortcomings hiding under the surface, says professor Nava Tintarev, and they are becoming increasingly prevalent.

“The more I look, the more I see parts of our lives that are affected by AI decision-making. It’s in predictive policing, in journalism, in human resources – in all sorts of things that are critical for our day-to-day”, she says. “We need to be able to assess the output of AI systems. If a friend gives you advice, you think about their knowledge, the experiences they have had, and then adjust whatever they told you accordingly. Using AI systems responsibly works the same way: sometimes it means ignoring their advice because you think it’s wrong.” That does, however, require some understanding of what’s going on. As professor of Explainable Artificial Intelligence, Tintarev therefore specializes in getting advice-giving algorithms to open up to their human users.

The double whammy of the child benefits scandal

The child benefits scandal – which saw thousands of citizens incorrectly treated as fraudsters because of an algorithm used by the Dutch tax office – is an extreme case where explanations may have helped. “Nobody knew which information was being used to decide whether someone committed fraud or not. The system wasn’t transparent. On top of that, these models didn’t offer any control to users: there was no way of adjusting the information the system was using, for example.”

In the lab, Tintarev and her colleagues build predictive AI models and study how different kinds of interfaces, explanations, and interactions can improve people’s understanding of what is happening. She stresses that simply having an explanation is neither sufficient nor a goal in itself: “Yes, we really need to make AI understandable to humans, but that’s the starting point. From there, we need to discuss what we are using the understanding for. Is it to provoke trust? There are explanation styles that are generally more persuasive than others, but they also persuade when the AI system is wrong. Depending on which ‘why’ you have in mind, you generate very different explanations. Tailoring explanations to the situation and the person looking at them is key.”


"I see the societal risks of advice-giving AI systems. Working in recommender systems, I could be contributing to the causes – but I’d rather be contributing to mitigation."

Made in Europe

Incidents such as the benefits scandal have helped put human-centred AI – an ethical, trustworthy kind of artificial intelligence for the benefit of people – squarely on the research agenda. It’s the brand of AI that Europe specifically pursues. Tintarev is happy to see the shift happening, but notes that it is not yet the norm.

“As computer scientists, we typically talk about performance and decreasing errors when we discuss ‘good’ models. This concerns me, because for the most part, that means we’re not thinking about the context in which an AI system is used. For example, what is the impact of labeling people as fraudulent or not fraudulent, instead of using a scale from more to less risky? We need to reflect on our choices more. I try to incorporate this in my courses. In my master’s course on Explainable Artificial Intelligence, I challenge students to write an episode of the popular TV series Black Mirror. It’s an exercise that was designed at the Mozilla Festival and is free for all educators to use. Students take an AI technology and imagine how it could evolve in the worst possible way. We then discuss what we, as computer scientists, can do to prevent that worst-case scenario from becoming reality – or how to reach an even more utopian outcome.”
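To make Tintarev’s point about labels versus risk scales concrete, here is a minimal, purely hypothetical sketch – the data, features, and model are invented for illustration and have nothing to do with any real fraud-detection system. It shows how the very same classifier can hand a reviewer either a hard “fraud / not fraud” label or a graded risk score that preserves how uncertain the model actually is.

```python
# Hypothetical illustration only: toy data and a toy model,
# not taken from any real (tax office) system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented data: two made-up features per case and a synthetic "fraud" label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1).astype(int)

model = LogisticRegression().fit(X, y)

case = np.array([[0.9, 0.4]])           # one hypothetical case to review
label = model.predict(case)[0]           # hard decision: 0 or 1
risk = model.predict_proba(case)[0, 1]   # graded risk: estimated probability of "fraud"

print(f"binary label: {label}, estimated risk: {risk:.2f}")
# The binary label hides whether the model was 51% or 99% "sure";
# the risk score keeps that nuance visible to the human reviewer.
```

The design point is the thresholding step: collapsing a probability into a yes/no label throws away exactly the information a human decision-maker would need to weigh the advice, which is one reason context matters when deciding what a model should output.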

Realising human-centred artificial intelligence requires cross-disciplinary collaboration, which is also reflected in the Dutch research agenda. For example, the recently funded ROBUST programme, a ten-year AI research programme with a total budget of €87 million, reserves 20% of its positions for researchers from the social sciences and humanities. Both UM’s Faculty of Science and Engineering (FSE) and the Faculty of Arts and Social Sciences (FASoS) participate in ROBUST.

Future perfect

But enough about the shortcomings of computers – humans aren’t exactly perfect decision-makers either. “We’ve been talking about the errors that computers and systems make, but who else has biases? We do! My current research is working towards a future where explanation interfaces can make us aware of some of our own reasoning biases too. By virtue of being human, not a single one of us is free of them.”

A future where we can better judge AI’s decisions, but also our own? In a world increasingly fueled by algorithmic decision-making, misinformation and manipulation, that sounds ideal, but also like a hell of a lot of work. “Yeah,” Tintarev laughs, “I’ve got my work cut out for me!”

Prof. Nava Tintarev will hold her inaugural lecture, “Whom are you Explaining to, and Why?” on 3 March 2023 at 16:30 CET.


Biography

Prof. dr. Nava Tintarev leads and contributes to several national and international projects in the field of human-computer interaction in artificial advice-giving systems, specifically advancing the state of the art in automatically generated explanations and explanation interfaces. Alongside projects funded by IBM, Twitter and the European Commission, she is a co-investigator and chair of the Social Sciences and Humanities Committee in ROBUST, a ten-year Artificial Intelligence programme funded by the Dutch government.

In addition, Prof. Tintarev regularly shapes international research programmes and organizes workshops relating to responsible data science. ACM, the Association for Computing Machinery, recognized her as a Senior Member in 2020.

Prof. Tintarev was appointed Full Professor and Chair of Explainable Artificial Intelligence at UM’s Faculty of Science and Engineering on October 1, 2020. She is embedded in FSE’s Department of Advanced Computing Sciences and is a Visiting Professor in the Software Technology department at TU Delft.