
Publications & Technical Reports
R74
Evaluating Constraint Processing Algorithms
Rina Dechter (dechter@ics.uci.edu) & Daniel Frost (frost@ics.uci.edu)

Abstract
Since for most artificial intelligence problems worst-case analysis does not necessarily reflect actual performance, and since informative performance guarantees are not always available, empirical evaluation of algorithms is necessary. To do so, we must address the question of problem distributions and benchmarks. Based on our study of CSP algorithms, we propose the use of multiple types of benchmarks and multiple forms of presenting the results. The benchmarks should include: (1) individual problem instances representing domains of interest, (2) parameterized random problems, and (3) application-based parameterized random problems. Results should be presented using (1) averages and variances of the data, (2) frequency and distribution graphs, and (3) scatter diagrams.

The goal is to identify a small number of algorithms (not just one) that are dominant, namely proven superior on some class of problems. For each dominant algorithm, we then wish to identify the problem characteristics on which it is likely to perform well.
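To make the "parameterized random problems" benchmark concrete, the following is a minimal sketch (in Python, not taken from the report) of one common instantiation: the four-parameter random binary CSP model <N, D, C, T>, with N variables of domain size D and C binary constraints, each ruling out T of the D x D value pairs. The parameter names, the chronological backtracking solver, and the use of consistency checks as the cost measure are illustrative assumptions, not the report's code.

    # Sketch: generate random binary CSPs from the <N, D, C, T> model and
    # collect per-instance solving costs for empirical evaluation.
    import itertools
    import random
    import statistics

    def random_csp(n_vars, dom_size, n_constraints, tightness, rng):
        """Generate one random binary CSP instance.

        Returns a dict mapping a variable pair (i, j), i < j, to the set
        of *allowed* value pairs; `tightness` pairs are forbidden.
        """
        pairs = rng.sample(list(itertools.combinations(range(n_vars), 2)),
                           n_constraints)
        all_tuples = list(itertools.product(range(dom_size), repeat=2))
        csp = {}
        for (i, j) in pairs:
            forbidden = rng.sample(all_tuples, tightness)
            csp[(i, j)] = set(all_tuples) - set(forbidden)
        return csp

    def backtrack(csp, n_vars, dom_size, assignment=None, var=0, counter=None):
        """Chronological backtracking; counts consistency checks as cost."""
        if assignment is None:
            assignment, counter = {}, [0]
        if var == n_vars:
            return True, counter[0]
        for val in range(dom_size):
            consistent = True
            for prev in range(var):
                allowed = csp.get((prev, var))
                if allowed is not None:
                    counter[0] += 1
                    if (assignment[prev], val) not in allowed:
                        consistent = False
                        break
            if consistent:
                assignment[var] = val
                solved, _ = backtrack(csp, n_vars, dom_size,
                                      assignment, var + 1, counter)
                if solved:
                    return True, counter[0]
                del assignment[var]
        return False, counter[0]

    if __name__ == "__main__":
        rng = random.Random(0)
        costs = []
        for _ in range(100):                      # 100 instances per parameter point
            csp = random_csp(10, 6, 25, 12, rng)  # <N=10, D=6, C=25, T=12>
            _, checks = backtrack(csp, 10, 6)
            costs.append(checks)
        print("mean checks:", statistics.mean(costs))
        print("variance:   ", statistics.variance(costs))

From the list of per-instance costs one can compute the averages and variances the abstract calls for, plot frequency and distribution graphs, or build scatter diagrams by pairing the costs of two algorithms on the same instances.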


[ps] [pdf]