We introduce a method for predicting a control signal from another related signal, and apply it to voice puppetry: generating full facial animation from expressive information in ...
This paper presents a new method, Text-to-Visual Synthesis with Appearance Models (TEVISAM), for generating videorealistic talking heads. In a first step, the system learns a ...
We present a computational framework capable of labeling the effort of an action corresponding to the perceived level of exertion by the performer (low → high). The approach initi...
We present a system for realistic facial animation that decomposes facial motion capture data into semantically meaningful motion channels based on the Facial Action Coding System...
We present a Dynamic Data Driven Application System (DDDAS) to track 2D shapes across large pose variations by learning a non-linear shape manifold as overlapping, piecewise linear s...