Sciweavers

1084 search results - page 150 / 217
» Hidden Markov Models with Multiple Observation Processes
AAAI
2000
Back to the Future for Consistency-Based Trajectory Tracking
Given a model of a physical process and a sequence of commands and observations received over time, the task of an autonomous controller is to determine the likely states of the p...
James Kurien, P. Pandurang Nayak

Publication
Cerebrovascular Segmentation from TOF Using Stochastic Models
In this paper, we present an automatic statistical approach for extracting 3D blood vessels from time-of-flight (TOF) magnetic resonance angiography (MRA) data. The voxels of the d...
M. Sabry Hassouna, Aly A. Farag, Stephen Hushek, T...
IAT
2005
IEEE
Decomposing Large-Scale POMDP Via Belief State Analysis
A partially observable Markov decision process (POMDP) is commonly used to model a stochastic environment with unobservable states in support of optimal decision making. Computing ...
Xin Li, William K. Cheung, Jiming Liu
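The belief-state analysis the entry above refers to rests on the standard POMDP belief update, which can be sketched as a Bayes filter; the tiny transition and observation matrices below are made-up toy values, not from the paper.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Update belief b after taking action a and observing o.

    b: belief over states, shape (S,)
    T: transition probabilities, T[a, s, s'] = P(s' | s, a)
    O: observation probabilities, O[a, s', o] = P(o | s', a)
    """
    predicted = b @ T[a]              # predict: P(s') = sum_s b(s) T[a, s, s']
    updated = predicted * O[a, :, o]  # correct: weight by observation likelihood
    return updated / updated.sum()    # normalise to a probability distribution

# Hypothetical 2-state, 1-action, 2-observation POMDP
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])
O = np.array([[[0.7, 0.3],
               [0.4, 0.6]]])
b0 = np.array([0.5, 0.5])
b1 = belief_update(b0, a=0, o=1, T=T, O=O)
```

The belief vector `b1` is itself the (continuous) state that belief-state analysis decomposes in large POMDPs.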
SOCIALCOM
2010
A Decision Theoretic Approach to Data Leakage Prevention
In both the commercial and defense sectors, a compelling need is emerging for rapid, yet secure, dissemination of information. In this paper we address the threat of infor...
Janusz Marecki, Mudhakar Srivatsa, Pradeep Varakan...
CSL
2012
Springer
Reinforcement learning for parameter estimation in statistical spoken dialogue systems
Reinforcement learning techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estim...
Filip Jurcícek, Blaise Thomson, Steve Young
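The expected-cumulative-reward objective mentioned in the entry above can be illustrated with tabular Q-learning on a toy two-state task; the dynamics and parameters below are hypothetical and only show the shape of the update, not the paper's method.

```python
import random

random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(s, a):
    # Toy dynamics: action 1 in state 1 ends the episode with reward 1;
    # every other transition flips the state with zero reward.
    if s == 1 and a == 1:
        return None, 1.0
    return 1 - s, 0.0

for _ in range(2000):
    s = 0
    while s is not None:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Q-learning target: immediate reward plus discounted best future value
        target = r if s2 is None else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2
```

After training, the learned values reflect the expected cumulative reward of each action, with the rewarding action in state 1 dominating.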