Sciweavers

9 search results - page 2 / 2
Search: EmoEmma: emotional speech input for interactive storytelling
AIHC 2007, Springer
Affect Detection and an Automated Improvisational AI Actor in E-Drama
Enabling machines to understand the emotions and feelings of human users from their natural-language textual input during interaction is a challenging issue in Human Computing. Our w...
Li Zhang, Marco Gillies, John A. Barnden, Robert J...
ROMAN 2007, IEEE
Fritz - A Humanoid Communication Robot
In this paper, we present the humanoid communication robot Fritz. Our robot communicates with people in an intuitive, multimodal way. Fritz uses speech, facial expressio...
Maren Bennewitz, Felix Faber, Dominik Joho, Sven B...
AIHC 2007, Springer
Modeling Naturalistic Affective States Via Facial, Vocal, and Bodily Expressions Recognition
Affective and human-centered computing have attracted considerable attention in recent years, mainly due to the abundance of devices and environments able to exploit multimodal i...
Kostas Karpouzis, George Caridakis, Loïc Kess...
ICMI 2004, Springer
A framework for evaluating multimodal integration by humans and a role for embodied conversational agents
One of the implicit assumptions of multi-modal interfaces is that human-computer interaction is significantly facilitated by providing multiple input and output modalities. Surpri...
Dominic W. Massaro