ILP 2004, Springer

Learning an Approximation to Inductive Logic Programming Clause Evaluation

One challenge faced by many Inductive Logic Programming (ILP) systems is poor scalability to problems with large search spaces and many examples. Randomized search methods such as stochastic clause selection (SCS) and rapid random restarts (RRR) have proven somewhat successful at addressing this weakness. However, on datasets where hypothesis evaluation is computationally expensive, even these algorithms may take unreasonably long to discover a good solution. We attempt to improve the performance of these algorithms on such datasets by learning an approximation to ILP hypothesis evaluation. We generate a small set of hypotheses, uniformly sampled from the space of candidate hypotheses, and evaluate this set on actual data. These hypotheses and their corresponding evaluation scores serve as training data for learning an approximate hypothesis evaluator. We outline three techniques that make use of the trained evaluation-function approximator in order to reduce the computation required during...
Frank DiMaio, Jude W. Shavlik
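A minimal sketch of the surrogate-evaluation idea described in the abstract: sample candidate clauses, score a small subset with the expensive evaluation, fit a cheap approximator, and use it to pre-screen candidates during search. This is not the authors' implementation; the fixed-length feature encoding of clauses, the least-squares approximator, and all names below (true_evaluation, approx_evaluation, etc.) are illustrative assumptions.

```python
# Sketch only: learn a cheap approximation to an expensive clause-evaluation
# function, then use it to pre-screen candidates during randomized search.
import numpy as np

rng = np.random.default_rng(0)

def true_evaluation(features):
    """Stand-in for expensive evaluation of a clause against real data
    (e.g. a coverage-based score); assumed costly to compute."""
    w = np.array([2.0, -1.0, 0.5, 3.0])  # hypothetical "true" weights
    return features @ w + 0.1 * rng.normal(size=len(features))

# 1. Uniformly sample a small set of candidate clauses, encoded here as
#    fixed-length feature vectors (e.g. clause length, literal counts).
train_features = rng.uniform(0.0, 1.0, size=(200, 4))
train_scores = true_evaluation(train_features)  # expensive step, done once

# 2. Fit an approximate evaluator (here: ordinary least squares).
coef, *_ = np.linalg.lstsq(train_features, train_scores, rcond=None)

def approx_evaluation(features):
    """Cheap surrogate used in place of the full evaluation."""
    return features @ coef

# 3. During search, score many candidates cheaply and run the expensive
#    evaluation only on the most promising ones.
candidates = rng.uniform(0.0, 1.0, size=(10_000, 4))
top = candidates[np.argsort(approx_evaluation(candidates))[-10:]]
print("Top surrogate-ranked candidates, re-scored exactly:",
      np.round(true_evaluation(top), 2))
```

The point of the surrogate is that the expensive evaluation is invoked only on the small training sample and on the shortlisted candidates, rather than on every clause visited by the randomized search.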
Type Conference
Year 2004
Where ILP
Authors Frank DiMaio, Jude W. Shavlik