How to bypass the NP-hardness of estimating the best explanation of given data (instantiated as MAP, i.e., Maximum A Posteriori, not maximum likelihood) in discrete Bayesian networks by distinguishing relevant from irrelevant variables

Johan Kwisthout, Most frugal explanations in Bayesian networks, Artificial Intelligence, Volume 218, January 2015, Pages 56-73, ISSN 0004-3702, DOI: 10.1016/j.artint.2014.10.001

Inferring the most probable explanation to a set of variables, given a partial observation of the remaining variables, is one of the canonical computational problems in Bayesian networks, with widespread applications in AI and beyond. This problem, known as MAP, is computationally intractable (NP-hard) and remains so even when only an approximate solution is sought. We propose a heuristic formulation of the MAP problem, denoted as Inference to the Most Frugal Explanation (MFE), based on the observation that many intermediate variables (that are neither observed nor to be explained) are irrelevant with respect to the outcome of the explanatory process. An explanation based on few samples (often even a singleton sample) from these irrelevant variables is typically almost as good as an explanation based on (the computationally costly) marginalization over these variables. We show that while MFE is computationally intractable in general (as is MAP), it can be tractably approximated under plausible situational constraints, and its inferences are fairly robust with respect to which intermediate variables are considered to be relevant.
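The sketch below illustrates the core idea of the abstract on a toy discrete network: exact MAP marginalizes over all intermediate variables, whereas a frugal heuristic samples the "irrelevant" intermediates (here just one, possibly with very few samples) and marginalizes only over the "relevant" ones, returning the explanation that wins most often. The network, its probability tables, the relevant/irrelevant split, and all names are illustrative assumptions, not the paper's algorithm or experimental setup.

```python
# Minimal sketch (assumed toy example): exact MAP vs. a frugal, sampling-based
# heuristic in the spirit of Most Frugal Explanations.
import itertools
import random
from collections import Counter

# Toy network: H -> R -> E and H -> U
# H: hypothesis to explain, R: relevant intermediate,
# U: irrelevant intermediate, E: observed evidence.
P_H = {0: 0.6, 1: 0.4}
P_R_given_H = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}    # P(R | H)
P_U_given_H = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}    # P(U | H), nearly uninformative
P_E_given_R = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.25, 1: 0.75}}  # P(E | R)

def joint(h, r, u, e):
    """Joint probability of a full assignment under the toy network."""
    return P_H[h] * P_R_given_H[h][r] * P_U_given_H[h][u] * P_E_given_R[r][e]

def exact_map(evidence_e):
    """Exact MAP over H: marginalize over *all* intermediate variables (R and U)."""
    scores = {h: sum(joint(h, r, u, evidence_e)
                     for r, u in itertools.product((0, 1), repeat=2))
              for h in (0, 1)}
    return max(scores, key=scores.get)

def most_frugal_explanation(evidence_e, n_samples=5, rng=random):
    """Frugal heuristic: sample the irrelevant variable U instead of summing it
    out, marginalize only over the relevant variable R, and return the
    explanation that is best most often across the samples."""
    votes = Counter()
    for _ in range(n_samples):
        u = rng.choice((0, 1))  # cheap stand-in for sampling U's distribution
        scores = {h: sum(joint(h, r, u, evidence_e) for r in (0, 1))
                  for h in (0, 1)}
        votes[max(scores, key=scores.get)] += 1
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    e = 1
    print("exact MAP explanation:", exact_map(e))
    print("frugal explanation:   ", most_frugal_explanation(e))
```

Because U barely influences the evidence in this toy setup, even a single sample of U usually yields the same explanation as the full marginalization, which is the intuition the abstract appeals to.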
