Sciweavers

129 search results for: Learning action models from plan examples using weighted MAX...
ECML 2005, Springer
Using Rewards for Belief State Updates in Partially Observable Markov Decision Processes
Partially Observable Markov Decision Processes (POMDPs) provide a standard framework for sequential decision making in stochastic environments. In this setting, an agent takes actio...
Masoumeh T. Izadi, Doina Precup
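
As background for this entry, here is a minimal sketch of the classical Bayes-filter belief update that POMDP agents perform after each action and observation: the belief over hidden states is predicted through the transition model and corrected by the observation likelihood. The paper's contribution is a reward-based variant of this update, which is not reproduced here; the arrays below are made-up examples.

import numpy as np

def belief_update(b, a, o, T, O):
    # b: current belief over states, shape (S,)
    # T: transition model, T[a, s, s'] = P(s' | s, a)
    # O: observation model, O[a, s', o] = P(o | s', a)
    b_pred = b @ T[a]              # predict: P(s') = sum_s P(s' | s, a) b(s)
    b_new = O[a, :, o] * b_pred    # correct with the observation likelihood
    return b_new / b_new.sum()     # renormalize

# Tiny two-state, two-action, two-observation example (illustrative numbers).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.4, 0.6]]])
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, O=O))   # -> [0.259 0.741] (approx.)
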
ICRA 2005, IEEE
Using Hierarchical EM to Extract Planes from 3D Range Scans
Recently, the acquisition of three-dimensional maps has become increasingly popular. This is motivated by the fact that robots act in the three-dimensional world and several t...
Rudolph Triebel, Wolfram Burgard, Frank Dellaert
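
To make the technique concrete, below is a rough sketch of plain (non-hierarchical) EM for fitting a mixture of planes to 3D points: the E-step assigns responsibilities from Gaussian point-to-plane distances, and the M-step refits each plane by weighted PCA (the normal is the smallest eigenvector of the weighted covariance). The paper's hierarchical variant is not reproduced, and all parameters are illustrative.

import numpy as np

def em_planes(points, K=2, sigma=0.2, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    N = len(points)
    R = rng.random((N, K))                 # random initial responsibilities
    R /= R.sum(axis=1, keepdims=True)
    normals, offsets = np.zeros((K, 3)), np.zeros(K)
    for _ in range(iters):
        # M-step: weighted least-squares plane for each component.
        for k in range(K):
            w = R[:, k]
            mean = (w[:, None] * points).sum(axis=0) / w.sum()
            X = points - mean
            cov = (w[:, None] * X).T @ X / w.sum()
            evals, evecs = np.linalg.eigh(cov)
            normals[k] = evecs[:, 0]       # smallest-eigenvalue direction
            offsets[k] = normals[k] @ mean
        # E-step: responsibilities from Gaussian point-to-plane distances.
        dist = points @ normals.T - offsets            # (N, K) signed distances
        lik = np.exp(-0.5 * (dist / sigma) ** 2) + 1e-12
        R = lik / lik.sum(axis=1, keepdims=True)
    return normals, offsets

# Toy data: two horizontal planes, z = 0 and z = 1 (convergence depends on init).
pts = np.vstack([np.c_[np.random.rand(50, 2) * 10, np.zeros(50)],
                 np.c_[np.random.rand(50, 2) * 10, np.ones(50)]])
print(em_planes(pts))
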
KDD 2003, ACM
Mining concept-drifting data streams using ensemble classifiers
Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud...
Haixun Wang, Wei Fan, Philip S. Yu, Jiawei Han
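
A simplified sketch of the chunk-based weighted-ensemble idea the abstract describes: train one classifier per incoming chunk, weight every member by its performance on the newest chunk, and keep only the top K. The paper's exact MSE-based weighting is not reproduced; the scikit-learn classifier and all parameters are illustrative choices.

from sklearn.tree import DecisionTreeClassifier

class ChunkEnsemble:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.members = []          # list of (classifier, weight) pairs

    def update(self, X_chunk, y_chunk):
        # Train a new member on the incoming chunk. (Scoring it on its own
        # training data is optimistic; the paper uses a more careful estimate.)
        new_clf = DecisionTreeClassifier(max_depth=5).fit(X_chunk, y_chunk)
        candidates = [c for c, _ in self.members] + [new_clf]
        scored = [(c, c.score(X_chunk, y_chunk)) for c in candidates]
        scored.sort(key=lambda cw: cw[1], reverse=True)
        self.members = scored[: self.capacity]

    def predict(self, X):
        # Weighted soft vote; assumes labels 0..C-1 appear in every chunk.
        votes = sum(w * c.predict_proba(X) for c, w in self.members)
        return votes.argmax(axis=1)
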
PRICAI 2000, Springer
Generating Hierarchical Structure in Reinforcement Learning from State Variables
This paper presents the CQ algorithm, which decomposes and solves a Markov Decision Process (MDP) by automatically generating a hierarchy of smaller MDPs using state variables. The ...
Bernhard Hengst
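
The CQ algorithm itself is not reproduced here, but one ingredient of this style of variable-based decomposition is ordering state variables by how frequently they change, so that fast-changing variables end up in the lower levels of the hierarchy. Below is a sketch of only that ordering step, measured under a random walk; ToyEnv and its reset()/step() interface are hypothetical.

import random
from collections import Counter

def variable_change_frequencies(env, steps=10_000, seed=0):
    # Count how often each component of the state tuple changes.
    random.seed(seed)
    changes = Counter()
    state = env.reset()
    for _ in range(steps):
        nxt = env.step(random.choice(env.actions))
        for i, (a, b) in enumerate(zip(state, nxt)):
            if a != b:
                changes[i] += 1
        state = nxt
    return sorted(changes, key=changes.get, reverse=True)   # fastest first

class ToyEnv:
    # Made-up example: variable 0 (position) changes every step,
    # variable 1 (flag) changes only occasionally.
    actions = [0, 1]
    def reset(self):
        self.pos, self.flag = 0, 0
        return (self.pos, self.flag)
    def step(self, action):
        self.pos = (self.pos + 1) % 5
        if action == 1 and self.pos == 0:
            self.flag = 1 - self.flag
        return (self.pos, self.flag)

print(variable_change_frequencies(ToyEnv()))   # -> [0, 1]
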
AAAI 2006
Preference Elicitation and Generalized Additive Utility
Any automated decision support software must tailor its actions or recommendations to the preferences of different users. Thus it requires some representation of user preferences ...
Darius Braziunas, Craig Boutilier
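
For concreteness, a minimal sketch of evaluating a generalized additive (GAI) utility, in which the overall utility is a sum of subutilities over possibly overlapping subsets of attributes. The attributes, factor tables, and values below are made-up examples, not from the paper.

def gai_utility(outcome, factors):
    # outcome: dict mapping attribute name -> value
    # factors: list of (attribute_tuple, table), where table maps a tuple of
    # attribute values to a subutility.
    return sum(table[tuple(outcome[a] for a in attrs)]
               for attrs, table in factors)

# Two overlapping factors over attributes {price, brand, warranty}.
factors = [
    (("price", "brand"), {("low", "A"): 0.8, ("low", "B"): 0.6,
                          ("high", "A"): 0.4, ("high", "B"): 0.1}),
    (("brand", "warranty"), {("A", "yes"): 0.3, ("A", "no"): 0.0,
                             ("B", "yes"): 0.25, ("B", "no"): 0.05}),
]
print(gai_utility({"price": "low", "brand": "A", "warranty": "yes"}, factors))
# -> 1.1
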