
MATES 2010, Springer

Do You Get It? User-Evaluated Explainable BDI Agents

Abstract. In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., explainable agents. Explainable agents are useful for many reasons, including scenario-based training (e.g. disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. As the aim is to generate explanations that humans find plausible and insightful, user evaluation of different explanations is essential. In this paper we test the hypothesis that different explanation types are needed to explain different types of actions. We present three different, generically applicable algorithms that automatically generate different types of explanations for actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we present feedback from the users about how they would explain the act...
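
The abstract describes algorithms that each generate a different type of explanation for a BDI agent's action (e.g., in terms of the agent's beliefs or its goals). As a rough illustration only, the Python sketch below shows how such explanation types could be produced from a recorded action and its underlying mental state; the data structures, function names, and example scenario are hypothetical and are not the algorithms from the paper.

# Minimal sketch: generating different explanation types for a BDI agent's action.
# The Action structure and the three generator functions are illustrative only,
# not the algorithms presented in the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    """An executed agent action plus the mental state that produced it."""
    name: str                                                  # e.g. "call_fire_department"
    enabling_beliefs: List[str] = field(default_factory=list)  # beliefs that made the action applicable
    parent_goal: str = ""                                      # goal the action directly serves
    goal_hierarchy: List[str] = field(default_factory=list)    # chain of goals up to the top-level goal


def explain_by_beliefs(action: Action) -> str:
    """Explanation type 1: cite the beliefs that enabled the action."""
    return f"I did '{action.name}' because I believed: {', '.join(action.enabling_beliefs)}."


def explain_by_goal(action: Action) -> str:
    """Explanation type 2: cite the immediate goal the action serves."""
    return f"I did '{action.name}' in order to achieve '{action.parent_goal}'."


def explain_by_goal_hierarchy(action: Action) -> str:
    """Explanation type 3: cite the chain of goals up to the top-level goal."""
    chain = " -> ".join(action.goal_hierarchy)
    return f"I did '{action.name}' as part of: {chain}."


if __name__ == "__main__":
    # Hypothetical disaster-training example.
    act = Action(
        name="call_fire_department",
        enabling_beliefs=["there is a fire", "the fire is too large to extinguish myself"],
        parent_goal="get professional help",
        goal_hierarchy=["get professional help", "contain the incident", "keep people safe"],
    )
    for explain in (explain_by_beliefs, explain_by_goal, explain_by_goal_hierarchy):
        print(explain(act))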
Type Conference
Year 2010
Where MATES
Authors Joost Broekens, Maaike Harbers, Koen V. Hindriks, Karel van den Bosch, Catholijn M. Jonker, John-Jules Ch. Meyer