Context aware, multimodal, and semantic rendering engine

Nowadays, several techniques exist to render digital content such as graphics, audio, and haptics. Unfortunately, they require different faculties that cannot always be relied upon: providing a picture to a blind person, for example, would be useless. In this paper, we present a new multimodal rendering engine built around a web-connected server linked to other devices to support ubiquitous computing. To take advantage of user capabilities, we defined an ontology populated with three kinds of elements: user, device, and information. With the help of this ontology, our system automatically selects and launches a suitable rendering application. Several test-case applications were implemented to render shape, text, and video information via audio, haptic, and sight channels. Validations demonstrate that our system is flexible, easily extensible, and shows promise.
CR Categories: H.5.2 [Information interfaces and presentation]: User interfaces—evaluation/methodology; Input devices and strategies; Interactio...
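The selection step described above (matching user abilities, device modalities, and information type to a rendering application) can be illustrated with a minimal sketch. This is not the paper's implementation or ontology format; the function name, the modality labels, and the preference table below are all hypothetical, chosen only to mirror the user/device/information triple the abstract mentions.

```python
# Hypothetical sketch of ontology-driven renderer selection.
# All names here are illustrative, not taken from the paper.

def select_modality(user_abilities, device_modalities, info_type):
    """Pick the first modality the user can perceive and the device supports.

    user_abilities / device_modalities: sets like {"sight", "audio", "haptic"}.
    info_type: one of "shape", "text", "video" (the three types tested
    in the paper). Returns a modality string, or None if nothing matches.
    """
    # Illustrative preference order per information type (an assumption,
    # standing in for what the ontology would encode).
    candidates = {
        "shape": ["sight", "haptic", "audio"],
        "text": ["sight", "audio", "haptic"],
        "video": ["sight", "audio"],
    }
    for modality in candidates.get(info_type, []):
        if modality in user_abilities and modality in device_modalities:
            return modality
    return None

# Example: a blind user on an audio/haptic device receiving text
# falls through "sight" and gets the "audio" channel instead.
```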
Added 28 May 2010
Updated 28 May 2010
Type Conference
Year 2009
Authors Patrick Salamin, Daniel Thalmann, Frédéric Vexo