Exploiting Determinism to Scale Relational Inference

One key challenge in statistical relational learning (SRL) is scalable inference. Unfortunately, most real-world problems in SRL have expressive models that translate into large grounded networks, which form a bottleneck for any inference method and limit its scalability. In this paper we introduce Preference Relaxation (PR), a two-stage strategy that uses the determinism present in the underlying model to improve the scalability of relational inference. The basic idea of PR is that if the underlying model involves mandatory (i.e., hard) constraints as well as preferences (i.e., soft constraints), then it is potentially wasteful to allocate memory for all constraints in advance when performing inference. To avoid this, PR starts by relaxing preferences and performing inference with hard constraints only. It then removes variables that violate hard constraints, thereby avoiding irrelevant computations involving preferences. In addition, it uses the removed variables to enlarge the evidence ...
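The abstract describes PR's two stages only at a high level. The minimal Python sketch below illustrates that control flow under strong simplifying assumptions: the clause encoding, the unit_propagate helper, the toy model, and the brute-force scoring step are all hypothetical stand-ins (the paper's actual method operates on grounded relational networks with a real inference engine), and conflict detection is omitted for brevity.

from itertools import product

# Clauses are tuples of literals; a literal is (variable, required_sign).
# Hypothetical toy model: A is mandatory, A implies B (hard); B -> C preferred.
hard_clauses = [
    (("A", True),),                       # A must be true
    (("A", False), ("B", True)),          # A -> B  (i.e., not-A or B)
]
soft_clauses = [
    (1.5, (("B", False), ("C", True))),   # preference: B -> C, weight 1.5
    (0.7, (("C", False),)),               # weak preference for not-C
]
variables = {"A", "B", "C"}
evidence = {}                             # observed values, initially none

def unit_propagate(clauses, fixed):
    """Stage 1 helper: repeatedly fix variables forced by unit hard clauses.
    (Conflict detection is omitted to keep the sketch short.)"""
    fixed = dict(fixed)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(fixed.get(v) == s for v, s in clause):
                continue                  # clause already satisfied
            open_lits = [(v, s) for v, s in clause if v not in fixed]
            if len(open_lits) == 1:       # unit clause: its literal is forced
                v, s = open_lits[0]
                fixed[v] = s
                changed = True
    return fixed

# Stage 1: relax the preferences and reason with hard constraints alone,
# then treat every variable they determine as additional evidence.
evidence = unit_propagate(hard_clauses, evidence)   # fixes A=True, B=True
free_vars = sorted(variables - evidence.keys())     # only C remains free

# Stage 2: score the preferences only over the reduced set of variables,
# with the enlarged evidence. Brute-force MAP stands in for a real engine.
def total_weight(assign):
    return sum(w for w, clause in soft_clauses
               if any(assign.get(v) == s for v, s in clause))

best = max(
    ({**evidence, **dict(zip(free_vars, vals))}
     for vals in product([False, True], repeat=len(free_vars))),
    key=total_weight,
)
print(best)   # {'A': True, 'B': True, 'C': True}

In this toy run the hard constraints alone determine A and B, so the preference stage only ever touches C; on large grounded networks this pruning of variables before the soft constraints are materialized is where the memory and time savings the abstract claims would come from.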
Type Conference
Year 2015
Where AAAI
Authors Mohamed Hamza Ibrahim, Christopher J. Pal, Gilles Pesant