Sciweavers

1138 search results for "Feature Markov Decision Processes" - page 37 / 228
JAIR 2008
Planning with Durative Actions in Stochastic Domains
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
Mausam, Daniel S. Weld
ATAL 2004, Springer
Interactive POMDPs: Properties and Preliminary Results
This paper presents properties and results of a new framework for sequential decision-making in multiagent settings called interactive partially observable Markov decision process...
Piotr J. Gmytrasiewicz, Prashant Doshi
CORR 2010
Efficient Approximation of Optimal Control for Markov Games
The success of probabilistic model checking for discrete-time Markov decision processes and continuous-time Markov chains has led to rich academic and industrial applications. The ...
Markus Rabe, Sven Schewe, Lijun Zhang
ICASSP 2009, IEEE
Experimenting with a global decision tree for state clustering in automatic speech recognition systems
In modern automatic speech recognition systems, it is standard practice to cluster several logical hidden Markov model states into one physical, clustered state. Typically, the cl...
Jasha Droppo, Alex Acero
ICTAI 1996, IEEE
Incremental Markov-Model Planning
This paper presents an approach to building plans using partially observable Markov decision processes. The approach begins with a base solution that assumes full observability. T...
Richard Washington