Marginal Beliefs of Random MRFs

A pairwise Markov Random Field is a way of defining a probability distribution over a vector {\bf x}. One way to write it is

p({\bf x}) \propto \exp( \sum_i \phi_i(x_i) + \sum_{(i,j)} \psi(x_i,x_j) ),

where the first sum is over all the variables and the second is over neighboring pairs. Here, I generated some random distributions over binary-valued variables. For each i, I set \phi_i(x_i=0)=0 and \phi_i(x_i=1)=r_i, where r_i is a value randomly drawn from a standard Gaussian. For the pairwise terms, I used \psi(x_i,x_j) = 0.75 \cdot I(x_i=x_j). (That is, \psi(x_i,x_j) is 0.75 when its arguments are equal and zero otherwise.) This is an “attractive” network, where neighboring variables want to take the same value.
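
The post doesn’t include code, but here is a minimal sketch of this construction in Python/NumPy, assuming a 2D grid topology (which the “checkerboard” patterns below suggest); the function name and parameters are my own:

import numpy as np

def random_grid_mrf(n=8, coupling=0.75, seed=0):
    """Random unary and pairwise log-potentials for an n x n grid of binary variables."""
    rng = np.random.default_rng(seed)
    # Unary log-potentials: phi_i(0) = 0, phi_i(1) = r_i with r_i ~ N(0, 1).
    r = rng.standard_normal((n, n))
    phi = np.stack([np.zeros((n, n)), r], axis=-1)  # phi[i, j, x]
    # Pairwise log-potential psi(x_i, x_j) = coupling * I(x_i == x_j),
    # shared by every edge of the grid.
    psi = coupling * np.eye(2)
    return phi, psi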

Computing the marginals p(x_i) exactly is hard in graphs that are not tree-like. Here, I approximate them using a nonlinear minimization of a “free energy” similar to the one that loopy belief propagation implicitly minimizes.
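
The post doesn’t spell out the minimization. As a rough stand-in, here is a naive mean-field sketch, which minimizes a simpler free energy than the Bethe one by damped fixed-point updates; it is not the author’s exact method, and the names, the grid assumption, and the damping are all my own:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mean_field_marginals(r, coupling=0.75, iters=500, damping=0.5):
    """Naive mean-field estimate of the beliefs p(x_i = 1) on an n x n grid.

    r[i, j] is the unary bias (phi_i(1) = r_ij, phi_i(0) = 0), and every
    grid edge carries psi(x_i, x_j) = coupling * I(x_i == x_j).
    """
    q = np.full(r.shape, 0.5)  # q[i, j] approximates p(x_ij = 1)
    for _ in range(iters):
        m = 2.0 * q - 1.0  # E[x] under q, in {-1, +1} coding
        # Sum the neighbor influences (2 q_j - 1) over the 4-neighborhood.
        nbrs = np.zeros_like(q)
        nbrs[1:, :] += m[:-1, :]
        nbrs[:-1, :] += m[1:, :]
        nbrs[:, 1:] += m[:, :-1]
        nbrs[:, :-1] += m[:, 1:]
        # Damped synchronous update of the mean-field fixed-point equations.
        q = (1 - damping) * q + damping * sigmoid(r + coupling * nbrs)
    return q

# Example: beliefs for the attractive network above.
# phi, psi = random_grid_mrf(n=32); q = mean_field_marginals(phi[..., 1])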

Here, I show the random single-variable biases r_i and the resulting beliefs. What we see are constant-valued regions (encouraged by \psi), interrupted where \phi is very strong.

Now, with more variables.

Now, a “repellent” network. I repeated the procedure above, but changed the pairwise interactions to \psi(x_i,x_j) = -0.75 \cdot I(x_i=x_j), so that neighboring variables want to take different values. Notice that this is the opposite of the above behavior: regions of “checkerboard” pattern, interrupted where \phi outvotes \psi.

Now, the repellent network with more variables.
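
In terms of the hypothetical sketches above, the repellent runs only flip the sign of the coupling:

# Repellent network: same construction, but the negative coupling makes
# psi(x_i, x_j) = -0.75 * I(x_i == x_j) penalize agreeing neighbors.
phi, psi = random_grid_mrf(n=32, coupling=-0.75, seed=1)
q = mean_field_marginals(phi[..., 1], coupling=-0.75)

With a negative coupling, undamped synchronous updates tend to oscillate between the two checkerboard phases, which is why the sketch above damps its updates.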
