Sciweavers

104 search results (page 1 of 21) for "Multimodal Interfaces That Process What Comes Naturally"

CACM 2000
Multimodal Interfaces That Process What Comes Naturally
Sharon L. Oviatt, Philip R. Cohen

AIHC 2007 (Springer)
Gaze-X: Adaptive, Affective, Multimodal Interface for Single-User Office Scenarios
This paper describes an intelligent system that we developed to support affective multimodal human-computer interaction (AMM-HCI) where the user’s actions and emotions are modele...
Ludo Maat, Maja Pantic

EACL 2006 (ACL Anthology)
What's There to Talk About? A Multi-Modal Model of Referring Behavior in the Presence of Shared Visual Information
This paper describes the development of a rule-based computational model that describes how a feature-based representation of shared visual information combines with linguistic cu...
Darren Gergle

ICMI 2004 (Springer)
Towards integrated microplanning of language and iconic gesture for multimodal output
When talking about spatial domains, humans frequently accompany their explanations with iconic gestures to depict what they are referring to. For example, when giving directions, ...
Stefan Kopp, Paul Tepper, Justine Cassell

SIGIR 1999 (ACM)
SCAN: Designing and Evaluating User Interfaces to Support Retrieval From Speech Archives
Previous examinations of search in textual archives have assumed that users first retrieve a ranked set of documents relevant to their query, and then visually scan through these ...
Steve Whittaker, Julia Hirschberg, John Choi, Dona...