Design and Evaluation of Explainable BDI Agents

It is widely acknowledged that providing explanations is an important capability of intelligent systems. Explanation capabilities are useful, for example, in scenario-based training systems with intelligent virtual agents. Trainees learn more from scenario-based training when they understand why the virtual agents act the way they do. In this paper, we present a model for explainable BDI agents which enables the explanation of BDI agent behavior in terms of underlying beliefs and goals. Different explanation algorithms can be specified in the model, generating different types of explanations. In a user study (n=20), we compare four explanation algorithms by asking trainees which explanations they consider most useful. Based on the results, we discuss which explanation types should be given under what conditions.
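The abstract does not spell out the explanation algorithms themselves, but the core idea — tracing an observed action back through the goal hierarchy and the beliefs that triggered it — can be sketched briefly. The Python sketch below is an illustration only, not the authors' implementation: the names (GoalNode, explain_by_parent_goal, explain_by_beliefs) and the rescue scenario are invented, and the two explanation styles shown merely stand in for the four algorithm variants compared in the user study.

# A minimal sketch, not the authors' implementation: all names and the
# scenario below are invented to illustrate explaining an agent's action
# from the goal hierarchy and beliefs that produced it.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GoalNode:
    """A goal in the agent's goal hierarchy; leaf nodes are actions."""
    name: str
    enabling_beliefs: List[str] = field(default_factory=list)
    parent: Optional["GoalNode"] = None
    children: List["GoalNode"] = field(default_factory=list)

    def add_child(self, child: "GoalNode") -> "GoalNode":
        child.parent = self
        self.children.append(child)
        return child

def explain_by_parent_goal(action: GoalNode) -> str:
    """Goal-based explanation: cite the goal directly above the action."""
    if action.parent is None:
        return f"I did '{action.name}'."
    return f"I did '{action.name}' because I wanted to {action.parent.name}."

def explain_by_beliefs(action: GoalNode) -> str:
    """Belief-based explanation: cite the beliefs that made the action applicable."""
    if not action.enabling_beliefs:
        return f"I did '{action.name}'."
    return (f"I did '{action.name}' because I believed "
            + " and ".join(action.enabling_beliefs) + ".")

# Invented training-scenario example.
root = GoalNode("rescue the victim")
search = root.add_child(GoalNode("locate the victim"))
action = search.add_child(
    GoalNode("open the door", enabling_beliefs=["the victim is behind it"]))

print(explain_by_parent_goal(action))  # ...because I wanted to locate the victim.
print(explain_by_beliefs(action))      # ...because I believed the victim is behind it.

On this reading, each explanation algorithm in the model is a different traversal of the same goal/belief structure (for instance, citing a goal further up the hierarchy, or combining goals with beliefs), which is what makes a controlled comparison of explanation types possible in the study.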
Type: Conference paper
Year: 2010
Where: IAT (IEEE)
Authors: Maaike Harbers, Karel van den Bosch, John-Jules Ch. Meyer