Sciweavers

» The Integrality of Speech in Multimodal Interfaces
AVI
2006
Enabling interaction with single user applications through speech and gestures on a multi-user tabletop
Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with art...
Edward Tse, Chia Shen, Saul Greenberg, Clifton For...
ACMDIS
2008
ACM
Exploring true multi-user multimodal interaction over a digital table
True multi-user, multimodal interaction over a digital table lets co-located people simultaneously gesture and speak commands to control an application. We explore this design spa...
Edward Tse, Saul Greenberg, Chia Shen, Clifton For...
ICMCS
2000
IEEE
Talking Heads and Synthetic Speech: An Architecture for Supporting Electronic Commerce
Facial animation has been combined with text-to-speech synthesis to create innovative multimodal interfaces. In this paper, we present an architecture for this multimodal interfac...
Jörn Ostermann, David R. Millen
ICCS
2004
Springer
Collaborative Integration of Speech and 3D Gesture for Map-Based Applications
QuickSet [6] is a multimodal system that gives users the capability to create and control map-based collaborative interactive simulations by supporting the simultaneous input from ...
Andrea Corradini
CHI
2010
ACM
Speech dasher: fast writing using speech and gaze
Speech Dasher allows writing using a combination of speech and a zooming interface. Users first speak what they want to write and then navigate through the space of recognit...
Keith Vertanen, David J. C. MacKay