
VW 1998, Springer

Real Face Communication in a Virtual World

This paper describes an efficient method for creating an individualized face model for animation from several possible inputs, and shows how this result can be used for realistic talking-head communication in a virtual world. We present a method to reconstruct a 3D facial model from two orthogonal pictures taken from the front and side views. The method is based on extracting facial features in a semi-automatic way and deforming a generic model accordingly. Texture mapping based on cylindrical projection is applied, using an image composed from the two views. The reconstructed head can be animated immediately and can speak given text, which is converted into the corresponding phonemes and visemes. We also propose a system for individualized face-to-face communication over a network using MPEG-4.
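The abstract mentions texture mapping of the reconstructed head by cylindrical projection of the composed front/side image. A minimal sketch of how such texture coordinates can be generated is given below; the function name, the choice of a vertical cylinder axis, and the normalization ranges are illustrative assumptions, not the authors' implementation.

```python
# Sketch: generate (u, v) texture coordinates for 3D head vertices by
# projecting them onto a vertical cylinder. The axis choice and the
# [0, 1] normalization are assumptions for illustration only.
import math

def cylindrical_uv(vertices, center=(0.0, 0.0, 0.0)):
    """Map 3D vertices to (u, v) in [0, 1]^2 on a vertical cylinder.

    u comes from the angle around the vertical (y) axis,
    v from the normalized vertex height.
    """
    cx, _, cz = center
    ys = [y for _, y, _ in vertices]
    y_min, y_max = min(ys), max(ys)

    uvs = []
    for x, y, z in vertices:
        theta = math.atan2(z - cz, x - cx)        # angle around the cylinder axis
        u = (theta + math.pi) / (2.0 * math.pi)   # normalize angle to [0, 1]
        v = (y - y_min) / ((y_max - y_min) or 1.0)  # normalize height to [0, 1]
        uvs.append((u, v))
    return uvs

# Example: three vertices roughly on the front, right side, and top of a head.
if __name__ == "__main__":
    verts = [(0.0, 0.0, 1.0), (1.0, 0.5, 0.0), (0.0, 1.0, 0.2)]
    print(cylindrical_uv(verts))
```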
Won-Sook Lee, Elwin Lee, Nadia Magnenat-Thalmann
Type: Conference