Sciweavers

Search results for "Interpreting Hand-Over-Face Gestures": 82 results, page 13 of 17

IUI 2000 (ACM)
Expression constraints in multimodal human-computer interaction
Thanks to recent scientific advances, it is now possible to design multimodal interfaces allowing the combined use of speech and pointing gestures on a touchscreen. However, present sp...
Sandrine Robbe-Reiter, Noelle Carbonell, Pierre Da...

VLC 2010
A model-based recognition engine for sketched diagrams
Many of today’s recognition approaches for hand-drawn sketches are feature-based, which is conceptually similar to the recognition of hand-written text. While very suitable for ...
Florian Brieler, Mark Minas
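
The abstract breaks off, but the contrast it draws, feature-based versus model-based recognition, can be made concrete. The following is a minimal illustrative Python sketch, not the paper's engine; every name and threshold in it is invented. Each component of a diagram language is described by a model, here a set of geometric constraints, and a stroke is classified by which models its geometry satisfies:

    # Each diagram component is a "model": constraints on stroke
    # geometry, rather than generic learned features. All names and
    # thresholds here are illustrative only.
    MODELS = {
        "state":      lambda s: s["closed"] and 0.7 < s["aspect"] < 1.4,
        "transition": lambda s: not s["closed"] and s["straightness"] > 0.9,
    }

    def classify(stroke):
        """Return every component model of the diagram language that
        the stroke's geometry satisfies (ambiguity is possible)."""
        return [name for name, fits in MODELS.items() if fits(stroke)]

    # A roughly circular closed stroke, then a straight open one:
    print(classify({"closed": True,  "aspect": 1.1, "straightness": 0.2}))
    print(classify({"closed": False, "aspect": 5.0, "straightness": 0.97}))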

UIST 2010 (ACM)
A framework for robust and flexible handling of inputs with uncertainty
New input technologies (such as touch), recognition-based input (such as pen gestures), and next-generation interactions (such as inexact interaction) all hold the promise of more ...
Julia Schwarz, Scott E. Hudson, Jennifer Mankoff, ...
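
The truncated abstract hints at the core idea: keep the ambiguity of an input around rather than resolving it eagerly. Below is a hedged Python sketch of that idea under assumed names; UncertainEvent, dispatch, and the probabilities are hypothetical, not the paper's API. An event carries several weighted interpretations, and delivery happens only once one interpretation is probable enough:

    from dataclasses import dataclass

    @dataclass
    class UncertainEvent:
        """A touch event with several possible interpretations:
        'candidates' maps a target widget name to the probability
        that the user actually meant to hit it."""
        candidates: dict[str, float]

    def dispatch(event: UncertainEvent, threshold: float = 0.8):
        """Deliver to the most likely target only once it is
        sufficiently probable; otherwise keep the event ambiguous."""
        target, p = max(event.candidates.items(), key=lambda kv: kv[1])
        if p >= threshold:
            return f"deliver to {target}"
        return "defer: ambiguity unresolved"

    # A fat-finger touch overlapping two buttons:
    print(dispatch(UncertainEvent({"ok_button": 0.55, "cancel_button": 0.45})))
    print(dispatch(UncertainEvent({"ok_button": 0.90, "cancel_button": 0.10})))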

IUI 2010 (ACM)
Usage patterns and latent semantic analyses for task goal inference of multimodal user interactions
This paper describes our work in usage pattern analysis and development of a latent semantic analysis framework for interpreting multimodal user input consisting of speech and pen ge...
Pui-Yu Hui, Wai Kit Lo, Helen M. Meng
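
The abstract is cut short, but latent semantic analysis itself is standard, so a compact illustration is possible. The sketch below uses invented tokens and goal labels: it folds a multimodal input (spoken words plus pen-gesture symbols) into the latent space of a token-by-interaction matrix via truncated SVD and picks the nearest task goal by cosine similarity. It shows the general LSA mechanism, not the paper's specific framework:

    import numpy as np

    # Toy token-by-interaction matrix: rows are tokens from both
    # modalities, columns are logged interactions labelled with a
    # task goal. Counts and labels are illustrative only.
    tokens = ["show", "route", "zoom", "circle_gesture", "point_gesture"]
    goals = ["find_route", "inspect_area"]
    X = np.array([[3, 0],   # "show"
                  [4, 0],   # "route"
                  [0, 3],   # "zoom"
                  [0, 4],   # "circle_gesture"
                  [2, 1]],  # "point_gesture"
                 dtype=float)

    # LSA: truncated SVD of the token-interaction matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    Uk, sk = U[:, :k], s[:k]

    def goal_for(query_tokens):
        """Fold a new multimodal input into latent space and return
        the most similar task goal by cosine similarity."""
        q = np.array([query_tokens.count(t) for t in tokens], dtype=float)
        q_lat = (q @ Uk) / sk            # project the query
        docs_lat = Vt[:k].T              # one latent row per goal column
        sims = docs_lat @ q_lat / (
            np.linalg.norm(docs_lat, axis=1) * np.linalg.norm(q_lat) + 1e-12)
        return goals[int(np.argmax(sims))]

    print(goal_for(["show", "route", "point_gesture"]))  # find_route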

CHI 2010 (ACM)
Where are you pointing?: the accuracy of deictic pointing in CVEs
Deictic reference – pointing at things during conversation – is ubiquitous in human communication, and should also be an important tool in distributed collaborative virtual en...
Nelson Wong, Carl Gutwin
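
The abstract ends mid-word, but the underlying measurement, how far an observed pointing ray deviates from the true direction to the intended target, is simple geometry. The helper below is an illustrative sketch; the function name and coordinates are assumptions, not taken from the paper:

    import numpy as np

    def angular_error_deg(origin, direction, target):
        """Angle in degrees between an avatar's pointing ray and the
        true direction from the ray origin to the intended target."""
        to_target = np.asarray(target, float) - np.asarray(origin, float)
        d = np.asarray(direction, float)
        cos = np.dot(d, to_target) / (np.linalg.norm(d) * np.linalg.norm(to_target))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    # An avatar at the origin points straight ahead at a target 2 m
    # away and 0.5 m off-axis (coordinates are illustrative):
    print(round(angular_error_deg([0, 0, 0], [0, 0, 1], [0.5, 0, 2]), 1))  # 14.0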