Sciweavers

54 search results for "Modeling and Synthesis of Facial Motion Driven by Speech" (page 7 of 11)
CA 2003 (IEEE)
Language-Driven Nonverbal Communication in a Bilingual Conversational Agent
This paper describes an animated conversational agent called Kare, which integrates a talking head interface with a linguistically motivated human-machine dialogue system. The age...
Scott A. King, Alistair Knott, Brendan McCane
NIPS 2007
A probabilistic model for generating realistic lip movements from speech
The present work aims to model the correspondence between facial motion and speech. The face and sound are modeled separately, with phonemes serving as the link between the two. We propo...
Gwenn Englebienne, Tim Cootes, Magnus Rattray
CVPR 2004 (IEEE)
Asymmetrically Boosted HMM for Speech Reading
Speech reading, also known as lip reading, aims to extract visual cues from lip and facial movements to aid in the recognition of speech. The main hurdle for speech reading is th...
Pei Yin, Irfan A. Essa, James M. Rehg
IBPRIA 2005 (Springer)
Performance Driven Facial Animation by Appearance Based Tracking
We present a method that estimates high-level animation parameters (muscle contractions, eye movements, eyelid opening, jaw motion, and lip contractions) from a marker-less face ...
José Miguel Buenaposada, Enrique Muñ...
GRAPHITE 2004 (ACM)
Statistical synthesis of facial expressions for the portrayal of emotion
This paper presents a novel technique for generating 'video textures' to display human emotion. This is achieved by a method that uses existing video footage to synthe...
Lisa Gralewski, Neill W. Campbell, Barry T. Thomas...