Sciweavers

247 search results for "Model-Based Design of Speech Interfaces"

ICMI 2004, Springer
Bimodal HCI-related affect recognition
Perhaps the most fundamental application of affective computing would be Human-Computer Interaction (HCI) in which the computer is able to detect and track the user’s affective ...
Zhihong Zeng, Jilin Tu, Ming Liu, Tong Zhang, Nich...

IUI 2010, ACM
Usage patterns and latent semantic analyses for task goal inference of multimodal user interactions
This paper describes our work in usage pattern analysis and the development of a latent semantic analysis framework for interpreting multimodal user input consisting of speech and pen gestures ...
Pui-Yu Hui, Wai Kit Lo, Helen M. Meng

VRST 2009, ACM
Gaze behavior and visual attention model when turning in virtual environments
In this paper we analyze and try to predict the gaze behavior of users navigating in virtual environments. We focus on first-person navigation in virtual environments which invol...
Sébastien Hillaire, Anatole Lécuyer,...

UAIS 2008
Recent developments in visual sign language recognition
Research in the field of sign language recognition has made significant advances in recent years. The present achievements provide the basis for future applications with t...
Ulrich von Agris, Jörg Zieren, Ulrich Canzler...

FCCM 2006, IEEE
A Scalable FPGA-based Multiprocessor
It has been shown that a small number of FPGAs can accelerate certain computing tasks by up to two or three orders of magnitude. However, particularly intensive lar...
Arun Patel, Christopher A. Madill, Manuel Saldaña...