Sciweavers

360 search results - page 6 / 72
» Combining Learned Discrete and Continuous Action Models
IROS 2007, IEEE
Autonomous blimp control using model-free reinforcement learning in a continuous state and action space
In this paper, we present an approach that applies the reinforcement learning principle to the problem of learning height control policies for aerial blimps. In contrast to pre...
Axel Rottmann, Christian Plagemann, Peter Hilgers,...
ICRA 2007, IEEE
Context Estimation and Learning Control through Latent Variable Extraction: From discrete to continuous contexts
Recent advances in machine learning and adaptive motor control have enabled efficient techniques for online learning of stationary plant dynamics and its use for robust pre...
Georgios Petkos, Sethu Vijayakumar
UAI 2004
Solving Factored MDPs with Continuous and Discrete Variables
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods ...
Carlos Guestrin, Milos Hauskrecht, Branislav Kveto...
ICML 2006, IEEE
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Inference in Markov Decision Processes has recently received interest as a means of inferring the goals of observed actions, of recognizing policies, and as a tool for computing policies. ...
Marc Toussaint, Amos J. Storkey
ICIP 2001, IEEE
A comparison of discrete and continuous output modeling techniques for a pseudo-2D hidden Markov model face recognition system
Face recognition has become an important topic within the field of pattern recognition and computer vision. In this field a number of different approaches to feature extraction, m...
Frank Wallhoff, Stefan Eickeler, Gerhard Rigoll