SACI 2015 (IEEE)

Multimodal information fusion for human-robot interaction

In this paper we introduce a multimodal information fusion system for human-robot interaction. The multimodal information combines methods for hand-sign recognition and emotion recognition of multiple people. These recognition modalities are an essential channel for Human-Robot Interaction (HRI). Sign language is the most intuitive and direct way for impaired or disabled people to communicate; through hand or body gestures, they can easily let a caregiver or robot know what message they want to convey. Emotional interaction with human beings is also desirable for robots. In this study, we propose an integrated system that can track multiple people simultaneously, recognize their facial expressions, and identify the social atmosphere. Consequently, robots can recognize the facial expressions and emotion variations of different people and respond appropriately. In this paper, we have developed algorithms to determine hand signs via a process call...
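The abstract does not state which fusion rule the system uses, so the following is only a minimal illustrative sketch of decision-level (late) fusion, one common way to combine per-modality classifier outputs. The labels, scores, and weights are hypothetical, not taken from the paper.

```python
# Hypothetical late-fusion sketch: each modality (hand-sign recognizer,
# emotion recognizer) emits a probability per class label; the fused
# decision is the label with the highest weighted sum of probabilities.
# All names and numbers below are illustrative assumptions.

def fuse_modalities(scores_by_modality, weights):
    """Weighted sum of per-class probability dicts from each modality."""
    fused = {}
    for modality, scores in scores_by_modality.items():
        w = weights[modality]
        for label, p in scores.items():
            fused[label] = fused.get(label, 0.0) + w * p
    # Return the winning label together with the fused score table.
    return max(fused, key=fused.get), fused

# Example: the hand-sign module favors "help"; the emotion module is
# ambivalent between "help" and "none".
hand_sign = {"help": 0.7, "water": 0.2, "none": 0.1}
emotion   = {"help": 0.5, "water": 0.1, "none": 0.4}

decision, fused = fuse_modalities(
    {"hand_sign": hand_sign, "emotion": emotion},
    weights={"hand_sign": 0.6, "emotion": 0.4},
)
# decision == "help"  (fused score 0.6*0.7 + 0.4*0.5 = 0.62)
```

Weighting the modalities lets the system favor the more reliable channel (here, assumed to be the hand-sign recognizer) while still letting emotional cues shift a close call.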
Ren C. Luo, Y. C. Wu, P. H. Lin
Added 17 Apr 2016
Updated 17 Apr 2016
Type Journal
Year 2015
Where SACI
Authors Ren C. Luo, Y. C. Wu, P. H. Lin