Approximate & Exact Inference in Bayesian Networks

Nikhil
Jul 4, 2022


Introduction

Bayesian networks are a type of probabilistic graphical model that represents a set of variables and their conditional dependencies using a Directed Acyclic Graph (DAG), and they use Bayesian inference for probability computations. Bayesian networks model conditional dependence, which in many applications is given a causal interpretation, by representing each conditional dependency as an edge in a directed graph. Through these relationships, one can efficiently conduct inference on the random variables in the graph through the use of factors.

The Bayesian Network

Using the relationships specified by our Bayesian network, we can obtain a compact, factorized representation of the joint probability distribution by taking advantage of conditional independence.

A Bayesian network is a directed acyclic graph in which each edge corresponds to a conditional dependency, and each node corresponds to a unique random variable. Formally, if an edge (A, B) exists in the graph connecting random variables A and B, it means that P(B|A) is a factor in the joint probability distribution, so we must know P(B|A) for all values of A and B in order to conduct inference. As a running example, consider the classic network over Cloudy, Sprinkler, Rain, and WetGrass, where Cloudy is a parent of both Sprinkler and Rain, and Sprinkler and Rain are both parents of WetGrass. Since Rain (together with Sprinkler) has an edge going into WetGrass, P(WetGrass|Sprinkler, Rain) will be a factor, whose probability values are specified in the conditional probability table attached to the WetGrass node.

Bayesian networks satisfy the local Markov property, which states that a node is conditionally independent of its non-descendants given its parents. In the example network, this means that P(Sprinkler|Cloudy, Rain) = P(Sprinkler|Cloudy), since Sprinkler is conditionally independent of its non-descendant, Rain, given its parent, Cloudy. This property allows us to simplify the joint distribution, obtained using the chain rule, to a much smaller form. After simplification, the joint distribution for a Bayesian network is equal to the product of P(node|parents(node)) over all nodes, stated below:

P(X1, X2, …, Xn) = P(X1|Parents(X1)) × P(X2|Parents(X2)) × … × P(Xn|Parents(Xn))

In larger networks, this property allows us to greatly reduce the amount of required computation, since generally, most nodes will have few parents relative to the overall size of the network.
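To make the factorization concrete, here is a minimal Python sketch of the Cloudy/Sprinkler/Rain/WetGrass example. The CPT numbers below are illustrative assumptions, not values from this article; the point is only that the joint probability is the product of P(node|parents(node)).

    # Illustrative CPTs (assumed values) for the Cloudy/Sprinkler/Rain/WetGrass network.
    P_C = 0.5                                  # P(Cloudy = True)
    P_S = {True: 0.1, False: 0.5}              # P(Sprinkler = True | Cloudy)
    P_R = {True: 0.8, False: 0.2}              # P(Rain = True | Cloudy)
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}  # P(WetGrass = True | Sprinkler, Rain)

    def joint(c, s, r, w):
        """P(Cloudy=c, Sprinkler=s, Rain=r, WetGrass=w) as a product of P(node | parents)."""
        pc = P_C if c else 1 - P_C
        ps = P_S[c] if s else 1 - P_S[c]
        pr = P_R[c] if r else 1 - P_R[c]
        pw = P_W[(s, r)] if w else 1 - P_W[(s, r)]
        return pc * ps * pr * pw

    print(joint(True, False, True, True))      # P(Cloudy, ¬Sprinkler, Rain, WetGrass)

Only four small tables are needed here, instead of one table over all 2⁴ joint assignments.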

Inference

Inference is one of the key tasks in a Bayesian network (BN): estimating the posterior distributions of state variables given evidence (observations).

- Exact Inference

Inference over a Bayesian network can come in two forms.

The first is simply evaluating the joint probability of a particular assignment of values to every variable (or a subset) in the network. We already have a factorized form of the joint distribution, so we just evaluate that product using the provided conditional probabilities. If we only care about a subset of the variables, we marginalize out the ones we are not interested in. With many variables, the product of many small probabilities can underflow, so it is common to work with the logarithm of the product instead, which is equivalent to adding up the logarithms of the individual factors.
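As a tiny sketch of the log trick (the factor values below are made up for illustration):

    import math

    # Each entry stands for one conditional probability P(node | parents) in the product.
    factors = [0.5, 0.1, 0.8, 0.99]                 # illustrative values only

    log_joint = sum(math.log(p) for p in factors)   # log P(x) = sum of the log-factors
    print(log_joint, math.exp(log_joint))           # exponentiate only if/when needed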

The second, more interesting, inference task is to find P(x|e): the probability of some assignment to a subset of the variables (x) given assignments to other variables (our evidence, e). In the example network, this could be finding P(Sprinkler, WetGrass | Cloudy), where {Sprinkler, WetGrass} is our x and {Cloudy} is our e. To calculate this, we use the fact that P(x|e) = P(x, e) / P(e) = αP(x, e), where α is a normalization constant calculated at the end so that the resulting probabilities sum to 1 (for a Boolean query, P(x|e) + P(¬x|e) = 1). To calculate P(x, e), we marginalize the joint probability distribution over the variables that appear in neither x nor e, which we denote Y.
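Here is a rough sketch of this inference-by-enumeration calculation, reusing the illustrative CPTs from the earlier sketch (again, the numbers are assumptions, not values from the article). The query is P(Sprinkler, WetGrass | Cloudy=True), so Y = {Rain} is summed out and α normalizes the result.

    from itertools import product

    # Illustrative CPTs (assumed values), as in the earlier sketch.
    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}

    def joint(c, s, r, w):
        pc = P_C if c else 1 - P_C
        ps = P_S[c] if s else 1 - P_S[c]
        pr = P_R[c] if r else 1 - P_R[c]
        pw = P_W[(s, r)] if w else 1 - P_W[(s, r)]
        return pc * ps * pr * pw

    def query(c=True):
        """P(Sprinkler, WetGrass | Cloudy=c) by enumeration."""
        unnorm = {}
        for s, w in product([True, False], repeat=2):      # assignments of x
            unnorm[(s, w)] = sum(joint(c, s, r, w)          # marginalize over Y = {Rain}
                                 for r in (True, False))
        alpha = 1 / sum(unnorm.values())                    # so the results sum to 1
        return {sw: alpha * p for sw, p in unnorm.items()}

    print(query(True))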

Note that in larger networks, Y will most likely be quite large, since most inference tasks only directly involve a small subset of the variables. In such cases, exact inference as shown above is very computationally intensive, so techniques are needed to reduce the amount of computation. One more efficient method of exact inference is variable elimination, which takes advantage of the fact that each factor only involves a small number of variables: the summations can be rearranged so that only the factors involving a given variable are used when marginalizing out that variable. Many networks are too large even for this method, so approximate inference methods such as MCMC are used instead; these provide probability estimates that require significantly less computation than exact inference.
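As a small illustration of how variable elimination rearranges the computation, consider finding P(w), the probability that the grass is wet in the example network, writing lowercase letters for assignments to Cloudy, Sprinkler, and Rain:

P(w) = Σ_c Σ_s Σ_r P(c) P(s|c) P(r|c) P(w|s, r)
     = Σ_c P(c) Σ_s P(s|c) Σ_r P(r|c) P(w|s, r)

In the first form, the full four-factor product is recomputed for every combination of c, s, and r. In the second, each factor sits only inside the sums over the variables it actually mentions, so the innermost sum over r is computed once per (c, s) pair and reused; the savings compound in larger networks.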

- Approximate Inference

What is approximate inference?

  • A family of methods for estimating probabilities in Bayesian networks by random sampling, often called ‘Monte Carlo’ algorithms.
  • We will discuss two types of algorithms: Direct sampling and Markov chain sampling.

Why use approximate inference?

  • Exact inference becomes intractable for large multiply-connected networks
  • Variable elimination can have exponential time and space complexity
  • Exact inference is strictly HARDER than NP-complete problems (#P-hard)

Direct Sampling

In direct sampling, we generate samples of complete events from the network, drawing each variable in topological order given its already-sampled parents. As the number of samples grows, we expect the frequency of an event to converge on its probability.
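A minimal sketch of direct (prior) sampling for the example network, again with assumed CPT values:

    import random

    # Illustrative CPTs (assumed values) for the Cloudy/Sprinkler/Rain/WetGrass network.
    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}

    def sample_prior():
        """Draw one complete event by sampling each node given its parents, in topological order."""
        c = random.random() < P_C
        s = random.random() < P_S[c]
        r = random.random() < P_R[c]
        w = random.random() < P_W[(s, r)]
        return c, s, r, w

    # The frequency of an event approaches its probability as the sample count grows.
    N = 100_000
    hits = sum(sample_prior()[3] for _ in range(N))   # how often WetGrass = True
    print("P(WetGrass) ~", hits / N)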

Rejection Sampling —

  • Used to compute conditional probabilities P(X|e)
  • Generate samples as before
  • Reject samples that do not match evidence
  • Estimate P(X|e) by counting how often X occurs in the remaining samples (see the sketch below)
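A rough sketch of rejection sampling under the same assumed CPTs, estimating P(Rain | WetGrass=True): samples whose WetGrass value contradicts the evidence are simply thrown away.

    import random

    # Illustrative CPTs (assumed values), as in the earlier sketches.
    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}

    def sample_prior():
        c = random.random() < P_C
        s = random.random() < P_S[c]
        r = random.random() < P_R[c]
        w = random.random() < P_W[(s, r)]
        return c, s, r, w

    def rejection_sample(n=100_000):
        """Estimate P(Rain=True | WetGrass=True) by discarding samples that contradict e."""
        kept = rain = 0
        for _ in range(n):
            c, s, r, w = sample_prior()
            if not w:                  # does not match the evidence WetGrass=True: reject
                continue
            kept += 1
            rain += r
        return rain / kept if kept else float("nan")

    print(rejection_sample())

The weakness is visible here: every rejected sample is wasted work, and the rejection rate grows as the evidence becomes less likely.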

Likelihood Weighting —

  • Avoids the inefficiency of rejection sampling, which wastes every sample that does not match the evidence
  • Fix values for evidence variables and only sample the remaining variables
  • Weight each sample by how likely the evidence is given the sampled values (see the sketch below)
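A sketch of likelihood weighting for the same query, P(Rain | WetGrass=True), with the same assumed CPTs: the evidence variable is never sampled; instead each sample carries a weight equal to the probability of the evidence given the sampled parent values.

    import random

    # Illustrative CPTs (assumed values), as in the earlier sketches.
    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}

    def likelihood_weighting(n=100_000):
        """Estimate P(Rain=True | WetGrass=True) without rejecting any samples."""
        total_weight = rain_weight = 0.0
        for _ in range(n):
            c = random.random() < P_C      # non-evidence variables are sampled as usual
            s = random.random() < P_S[c]
            r = random.random() < P_R[c]
            weight = P_W[(s, r)]           # WetGrass is fixed to True; weight = P(e | parents)
            total_weight += weight
            if r:
                rain_weight += weight
        return rain_weight / total_weight

    print(likelihood_weighting())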

Markov Chain Sampling

  • Generate events by making a random change to the preceding event
  • The change is made by resampling a variable conditioned on its Markov Blanket
  • Markov Blanket = parents, children, children’s parents
  • Tally and normalize the results (see the sketch below)
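The procedure described above is essentially Gibbs sampling. Below is a rough sketch for P(Rain | WetGrass=True) with the same assumed CPTs. For brevity it computes each variable's conditional from the full joint; in this tiny network that is equivalent to using only the Markov-blanket factors, since factors outside the blanket cancel in the ratio.

    import random

    # Illustrative CPTs (assumed values), as in the earlier sketches.
    P_C = 0.5
    P_S = {True: 0.1, False: 0.5}
    P_R = {True: 0.8, False: 0.2}
    P_W = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}

    def joint(c, s, r, w):
        pc = P_C if c else 1 - P_C
        ps = P_S[c] if s else 1 - P_S[c]
        pr = P_R[c] if r else 1 - P_R[c]
        pw = P_W[(s, r)] if w else 1 - P_W[(s, r)]
        return pc * ps * pr * pw

    def gibbs(n=50_000):
        """Estimate P(Rain=True | WetGrass=True) by repeatedly resampling one
        non-evidence variable at a time, conditioned on all the others."""
        c, s, r, w = True, False, True, True   # arbitrary start; WetGrass fixed to its evidence value
        rain = 0
        for _ in range(n):
            # Resample each non-evidence variable given the current values of the rest;
            # only Markov-blanket factors differ between the two joints, so this ratio
            # equals the blanket-based conditional.
            pt, pf = joint(True, s, r, w), joint(False, s, r, w)
            c = random.random() < pt / (pt + pf)
            pt, pf = joint(c, True, r, w), joint(c, False, r, w)
            s = random.random() < pt / (pt + pf)
            pt, pf = joint(c, s, True, w), joint(c, s, False, w)
            r = random.random() < pt / (pt + pf)
            rain += r
        return rain / n                        # tally and normalize

    print(gibbs())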
