Sciweavers

81 search results (page 7 of 17) for "Optimization of Chemical Processes Under Uncertainty"
AAAI
1996
Computing Optimal Policies for Partially Observable Decision Processes Using Compact Representations
Partially-observable Markov decision processes provide a very general model for decision-theoretic planning problems, allowing the trade-offs between various courses of action t...
Craig Boutilier, David Poole
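The core operation behind any POMDP solver, compact or not, is the Bayesian belief update performed after each action and observation. The sketch below is a minimal Python illustration of that update; the function name and array layouts are assumptions chosen for clarity, not the compact representation used by Boutilier and Poole.

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """Bayesian belief update for a finite POMDP.
    belief: current distribution over states, shape (S,).
    T: transitions, shape (A, S, S) with T[a, s, s'] = Pr(s' | s, a).
    O: observation model, shape (A, S, Z) with O[a, s', z] = Pr(z | s', a)."""
    predicted = belief @ T[action]               # Pr(s' | b, a)
    unnormalized = predicted * O[action][:, observation]
    return unnormalized / unnormalized.sum()     # Pr(s' | b, a, z)

# Example (hypothetical numbers): two states, two actions, two observations.
T = np.array([[[0.7, 0.3], [0.2, 0.8]],
              [[0.9, 0.1], [0.4, 0.6]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.5, 0.5]]])
print(belief_update(np.array([0.5, 0.5]), action=0, observation=1, T=T, O=O))
```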
AUTOMATICA
2006
Simulation-based optimization of process control policies for inventory management in supply chains
A simulation-based optimization framework involving simultaneous perturbation stochastic approximation (SPSA) is presented as a means for optimally specifying parameters of intern...
Jay D. Schwartz, Wenlin Wang, Daniel E. Rivera
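SPSA estimates a gradient from only two noisy evaluations of the simulation objective per iteration, independent of the number of tuned parameters, which is what makes it attractive for simulation-based tuning. The following is a minimal sketch of the basic SPSA recursion in Python, assuming standard gain sequences; the function and the toy cost are illustrative and are not the supply-chain model from the paper.

```python
import numpy as np

def spsa_minimize(loss, theta0, a=0.1, c=0.1, alpha=0.602, gamma=0.101, iters=200, seed=0):
    """Basic SPSA: approximate the gradient from two loss evaluations per
    iteration using a random simultaneous perturbation of all parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k ** alpha                                  # step-size gain
        ck = c / k ** gamma                                  # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)    # Rademacher perturbation
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

# Example: tune two controller gains against a noisy simulated cost (hypothetical).
def noisy_cost(gains):
    return (gains[0] - 1.5) ** 2 + (gains[1] - 0.3) ** 2 + 0.01 * np.random.randn()

print(spsa_minimize(noisy_cost, theta0=[0.0, 0.0]))
```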
IROS
2006
IEEE
Reliability-Based Design Optimization of Robotic System Dynamic Performance
In this investigation a robotic system’s dynamic performance is optimized for high reliability under uncertainty. The dynamic capability equations allow designers to predict the...
Alan P. Bowling, John E. Renaud, Jeremy T. Newkirk...
ICTAI
2000
IEEE
Building efficient partial plans using Markov decision processes
Markov Decision Processes (MDPs) have been widely used as a framework for planning under uncertainty. They make it possible to compute optimal sequences of actions in order to achieve a given...
Pierre Laroche
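For a finite MDP with known transitions and rewards, optimal policies can be computed by dynamic programming. The sketch below shows standard value iteration in Python as a baseline; the array shapes and function name are assumptions, and the paper's partial-plan construction is not reproduced here.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Value iteration for a finite MDP.
    P: transitions, shape (A, S, S) with P[a, s, s'] = Pr(s' | s, a).
    R: rewards, shape (S, A)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("ast,t->sa", P, V)   # Q(s, a)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)             # value function and greedy policy
        V = V_new

# Example (hypothetical numbers): a 2-state, 2-action MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.1, 0.9]]])   # action 1
R = np.array([[1.0, 0.0], [0.0, 2.0]])     # R[s, a]
V, policy = value_iteration(P, R)
print(V, policy)
```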
ECAI
2010
Springer
On Finding Compromise Solutions in Multiobjective Markov Decision Processes
A Markov Decision Process (MDP) is a general model for solving planning problems under uncertainty. It has been extended to multiobjective MDP to address multicriteria or multiagen...
Patrice Perny, Paul Weng
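One common way to select a compromise among conflicting objectives is to minimize the weighted Chebyshev distance to the ideal point. The sketch below applies that rule to the value vectors of a set of candidate policies; this scalarization is an illustrative assumption, not necessarily the criterion studied by Perny and Weng.

```python
import numpy as np

def chebyshev_compromise(value_vectors, weights=None):
    """Pick the policy whose multi-objective value vector minimizes the
    weighted Chebyshev distance to the ideal point (per-objective maxima).
    value_vectors: array of shape (n_policies, n_objectives), larger is better."""
    V = np.asarray(value_vectors, dtype=float)
    w = np.ones(V.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    ideal = V.max(axis=0)            # best achievable value on each objective
    regret = w * (ideal - V)         # weighted shortfall per objective
    scores = regret.max(axis=1)      # worst-case shortfall for each policy
    return int(scores.argmin())      # index of the compromise policy

# Example: three candidate policies evaluated on two objectives.
print(chebyshev_compromise([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0]]))  # -> 1
```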