Sciweavers

551 search results for "Multimodal Speech Synthesis" (page 67 of 111)
CHI 2009, ACM
A biologically inspired approach to learning multimodal commands and feedback for human-robot interaction
In this paper we describe a method that enables a robot to learn how a user gives it commands and feedback through speech, prosody, and touch. We propose a biologically inspired approach...
Anja Austermann, Seiji Yamada
PDC 2006, ACM
A participatory design agenda for ubiquitous computing and multimodal interaction: a case study of dental practice
This paper reflects on our attempts to apply a participatory design approach to research on interfaces that better support dental practice. The project brought together...
Tim Cederman-Haysom, Margot Brereton
CHI 2005, ACM
Evaluation of multimodal input for entering mathematical equations on the computer
Current standard interfaces for entering mathematical equations on computers are arguably limited and cumbersome. Mathematical notation has evolved to aid visual thinking, and yet...
Lisa Anthony, Jie Yang, Kenneth R. Koedinger
LREC 2010
The PlayMancer Database: A Multimodal Affect Database in Support of Research and Development Activities in Serious Game Environments
The present paper reports on a recent effort that established a unique multimodal affect database, referred to as the PlayMancer database. This database was c...
Theodoros Kostoulas, Otilia Kocsis, Todor Ganchev,...
ROMAN 2007, IEEE
Understanding Rules in Human-Robot Instructions
This paper presents an overview of the systematic creation of a human-robot instruction system from a multimodal corpus. The corpus was collected from human-to-human card...
Joerg C. Wolf, Guido Bugmann