Abstract. In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, w...
Lei Chen, R. Rose, Ying Qiao, Irene Kimbara, ...
Abstract. We aim to create a model of emotional reactive virtual humans. This model will help to define realistic behavior for virtual characters based on emotions and events in t...
In this paper, we present a multimodal discourse ontology that serves as a knowledge representation and annotation framework for the discourse understanding component of an artifi...
Whilst there has been substantial research into technology to support meetings, there has been relatively little study of how meeting participants currently make records and how th...
This paper presents an integrated rule-based data mining system that is capable of creating rule-based classifiers with a web-based user interface from data sets provided by end user...