Sciweavers

108 search results - page 1 / 22
ICML 1999 (IEEE)
Approximation Via Value Unification
Numerical function approximation over a Boolean domain is a classical problem with wide application to data modeling tasks and various forms of learning. A great many function ap...
Paul E. Utgoff, David J. Stracuzzi
JAIR 2000
Value-Function Approximations for Partially Observable Markov Decision Processes
Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in whic...
Milos Hauskrecht
ALIFE 2006
Axiomatic Scalable Neurocontroller Analysis via the Shapley Value
One of the major challenges in the field of neurally driven evolved autonomous agents is deciphering the neural mechanisms underlying their behavior. Aiming at this goal, we have d...
Alon Keinan, Ben Sandbank, Claus C. Hilgetag, Isaa...
DIALM 2008 (ACM)
Approximating maximum integral flows in wireless sensor networks via weighted-degree constrained k-flows
We consider the Maximum Integral Flow with Energy Constraints problem: given a directed graph G = (V, E) with edge-weights {w(e) : e ∈ E} and node battery capacities {b(v) : v ∈ V}...
Zeev Nutov
ATAL 2005 (Springer)
Improving reinforcement learning function approximators via neuroevolution
Reinforcement learning problems are commonly tackled with temporal difference methods, which use dynamic programming and statistical sampling to estimate the long-term value of ta...
Shimon Whiteson