Abstract
This article presents a class of approximation algorithms that extend the idea of bounded-complexity
inference, inspired by successful constraint propagation algorithms, to probabilistic inference
and combinatorial optimization. The idea is to bound the dimensionality of dependencies
created by inference algorithms. This yields a parameterized scheme, called mini-buckets, that offers an
adjustable trade-off between accuracy and efficiency. The mini-bucket approach to optimization problems,
such as finding the most probable explanation (MPE) in Bayesian networks, generates both an
approximate solution and bounds on the solution quality. We present empirical results demonstrating
successful performance of the proposed approximation scheme for the MPE task, both on randomly
generated problems and on realistic domains such as medical diagnosis and probabilistic decoding.
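As a concrete illustration of the bounded-dimensionality idea, the sketch below shows mini-bucket elimination computing an upper bound on the MPE value. This is a minimal sketch under stated assumptions, not the paper's implementation: binary variables, dense table factors, and a greedy mini-bucket partition are assumed, and names such as `Factor`, `mini_bucket_mpe_bound`, and `i_bound` are illustrative.

```python
from itertools import product

class Factor:
    """A table over a tuple of binary variables (domains {0, 1} assumed)."""
    def __init__(self, variables, table):
        self.variables = tuple(variables)  # e.g. ('A', 'B')
        self.table = table                 # assignment tuple -> value

    def value(self, assignment):
        # assignment: dict mapping variable name -> value
        return self.table[tuple(assignment[v] for v in self.variables)]

def combine(factors):
    """Product of factors over the union of their scopes."""
    scope = sorted({v for f in factors for v in f.variables})
    table = {}
    for combo in product((0, 1), repeat=len(scope)):
        asg = dict(zip(scope, combo))
        val = 1.0
        for f in factors:
            val *= f.value(asg)
        table[combo] = val
    return Factor(scope, table)

def max_out(factor, var):
    """Maximize a factor over one variable (the MPE elimination step)."""
    rest = [v for v in factor.variables if v != var]
    table = {}
    for combo in product((0, 1), repeat=len(rest)):
        asg = dict(zip(rest, combo))
        table[combo] = max(factor.value({**asg, var: x}) for x in (0, 1))
    return Factor(rest, table)

def mini_bucket_mpe_bound(factors, order, i_bound):
    """Upper bound on the MPE value; `order` must cover every variable."""
    pool = list(factors)
    for var in order:
        bucket = [f for f in pool if var in f.variables]
        pool = [f for f in pool if var not in f.variables]
        # Greedily partition the bucket into mini-buckets whose combined
        # scope has at most i_bound variables -- the bounded-dimensionality
        # step that trades accuracy for efficiency.
        minis = []
        for f in bucket:
            for m in minis:
                scope = {v for g in m for v in g.variables} | set(f.variables)
                if len(scope) <= i_bound:
                    m.append(f)
                    break
            else:
                minis.append([f])
        # Eliminating var in each mini-bucket separately yields an upper
        # bound, since max over a product <= product of maxes.
        for m in minis:
            pool.append(max_out(combine(m), var))
    # Remaining factors have empty scope; their product bounds the MPE value.
    bound = 1.0
    for f in pool:
        bound *= f.table[()]
    return bound
```

Setting `i_bound` to at least the induced width of the ordering recovers exact bucket elimination, while smaller values yield looser upper bounds with time and space exponential only in `i_bound` -- the adjustable trade-off described above.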