Computational Accounts of Trust in Human AI Interaction

By Zahra Zahedi



Abstract

The growing presence of AI-driven systems in daily life calls for efficient methods to facilitate interactions between humans and AI agents. At the heart of these interactions lies trust, a key element shaping human behavior and decision-making. Fostering an appropriate level of trust is essential to the success of human-AI collaboration, since excessive or misplaced trust can lead to unfavorable consequences. Human-AI partnerships also face distinct hurdles, particularly misunderstandings about AI capabilities, which underscores the need for AI agents to understand and calibrate human expectations and trust.
This thesis explores the dynamics of trust in human-robot interactions, using the term broadly to cover human-AI interactions, and emphasizes the importance of understanding trust in these relationships. It first presents a mental model-based framework that contextualizes trust in human-AI interactions, capturing multi-faceted dimensions often overlooked in computational trust studies. We then use this framework as a basis for decision-making frameworks that incorporate trust in both single and longitudinal human-AI interactions. Finally, the mental model-based framework enables us to infer and estimate trust when direct measurement is not feasible.
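
The abstract does not specify the dissertation's formalism, so the sketch below is purely illustrative rather than the author's method: one common way the trust literature operationalizes longitudinal trust is a Beta-Bernoulli model, where estimated trust is the expected probability that the agent succeeds, updated after each observed interaction outcome. All names and the interaction log here are hypothetical.

# Illustrative sketch only (assumed Beta-Bernoulli model, not taken
# from this dissertation): trust is estimated as the posterior mean
# probability that the AI agent succeeds at its task.

class BetaTrustEstimator:
    """Maintains a Beta(alpha, beta) posterior over the agent's success rate."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        # Beta(1, 1) is an uninformative prior over the success rate.
        self.alpha = alpha
        self.beta = beta

    def update(self, success: bool) -> None:
        # Each binary interaction outcome increments one posterior count.
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        # Posterior mean: a point estimate of the human's current trust.
        return self.alpha / (self.alpha + self.beta)


if __name__ == "__main__":
    estimator = BetaTrustEstimator()
    for outcome in [True, True, False, True]:  # hypothetical interaction log
        estimator.update(outcome)
    print(f"Estimated trust: {estimator.trust:.2f}")  # prints 0.67

A Beta prior is a convenient choice for this kind of longitudinal setting because the update after each success or failure reduces to incrementing a counter, which also suggests how trust might be inferred from observed behavior when direct measures are unavailable.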



Thesis Committee