Sciweavers

112 search results - page 9 / 23
Search: Multimodal interactive machine translation
ICMI 2005 (Springer)
A user interface framework for multimodal VR interactions
This article presents a User Interface (UI) framework for multimodal interactions targeted at immersive virtual environments. Its configurable input and gesture processing compon...
Marc Erich Latoschik
HCI 2009
Did I Get It Right: Head Gestures Analysis for Human-Machine Interactions
This paper presents a system that adds a further input modality to a multimodal human-machine interaction scenario. In addition to other common input modalities, e.g. speech, we extract he...
Jürgen Gast, Alexander Bannat, Tobias Rehrl, ...
TACAS 2007 (Springer)
Planned and Traversable Play-Out: A Flexible Method for Executing Scenario-Based Programs
We introduce a novel approach to the smart execution of scenario-based models of reactive systems, such as those resulting from the multi-modal inter-object language of live sequen...
David Harel, Itai Segall
ACL 2006
An End-to-End Discriminative Approach to Machine Translation
We present a perceptron-style discriminative approach to machine translation in which large feature sets can be exploited. Unlike discriminative reranking approaches, our system c...
Percy Liang, Alexandre Bouchard-Côté,...
CHI 2009 (ACM)
A biologically inspired approach to learning multimodal commands and feedback for human-robot interaction
In this paper we describe a method that enables a robot to learn how a user gives it commands and feedback via speech, prosody, and touch. We propose a biologically inspired approac...
Anja Austermann, Seiji Yamada