CHI 2009 (ACM)

Why and why not explanations improve the intelligibility of context-aware intelligent systems

Context-aware intelligent systems employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust in, satisfaction with, and acceptance of these systems. However, automatically providing explanations of a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and were then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way resulted in lower understanding yet adequate performance. We disc...
Brian Y. Lim, Anind K. Dey, Daniel Avrahami
Added 24 Nov 2009
Updated 24 Nov 2009
Type Conference
Year 2009
Where CHI