Sciweavers

551 search results - page 70 / 111
» Multimodal Speech Synthesis
ICCAD
2007
IEEE
A design flow dedicated to multi-mode architectures for DSP applications
This paper addresses the design of multi-mode architectures for digital signal processing applications. We present a dedicated design flow and its associated high-level synthesis t...
Cyrille Chavet, Caaliph Andriamisaina, Philippe Co...
AIHC
2007
Springer
Gaze-X: Adaptive, Affective, Multimodal Interface for Single-User Office Scenarios
This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI), where the user's actions and emotions are modele...
Ludo Maat, Maja Pantic
AIHC
2007
Springer
SmartWeb Handheld - Multimodal Interaction with Ontological Knowledge Bases and Semantic Web Services
SMARTWEB aims to provide intuitive multimodal access to a rich selection of Web-based information services. We report on the current prototype, with a smartphone client interface t...
Daniel Sonntag, Ralf Engel, Gerd Herzog, Alexander...
MM
2005
ACM
Multimodal expressive embodied conversational agents
In this paper we present our work toward the creation of a multimodal expressive Embodied Conversational Agent (ECA). Our agent, called Greta, exhibits nonverbal behaviors synchro...
Catherine Pelachaud
COLING
2010
Latent Mixture of Discriminative Experts for Multimodal Prediction Modeling
During face-to-face conversation, people naturally integrate speech, gestures, and higher-level language interpretations to predict the right time to start talking or to give backc...
Derya Ozkan, Kenji Sagae, Louis-Philippe Morency