Sciweavers

Search results for "When do we interact multimodally" (395 results)
ICMI 2004 (Springer)
When do we interact multimodally?: cognitive load and multimodal communication patterns
Mobile usage patterns often entail high and fluctuating levels of difficulty as well as dual tasking. One major theme explored in this research is whether a flexible multimodal in...
Sharon L. Oviatt, Rachel Coulston, Rebecca Lunsfor...
CHI 2006 (ACM)
How much do we understand when skim reading?
Geoffrey B. Duggan, Stephen J. Payne
PUC 2006
Can we do without GUIs? Gesture and speech interaction with a patient information system
We have developed a gesture input system that provides a common interaction technique across mobile, wearable, and ubiquitous computing devices of diverse form factors. In this pap...
Eamonn O'Neill, Manasawee Kaenampornpan, Vassilis ...
ICASSP 2011 (IEEE)
Continuous F0 in the source-excitation generation for HMM-based TTS: Do we need voiced/unvoiced classification?
Most HMM-based TTS systems use a hard voiced/unvoiced classification to produce a discontinuous F0 signal, which is then used to generate the source excitation. When a mixed ...
Javier Latorre, Mark J. F. Gales, Sabine Buchholz,...
CHI 2008 (ACM)
What to do when search fails: finding information by association
Sometimes people cannot remember the names or locations of things on their computer, but they can remember what other things are associated with them. We created Feldspar, the fir...
Duen Horng Chau, Brad A. Myers, Andrew Faulring