CRF Toolbox Updated

I updated the code for my Graphical Models / Conditional Random Fields toolbox. This is a Matlab toolbox, though almost all the real work is done in compiled C++ for efficiency. The main improvements are:

  • Lots of bugfixes.
  • Various small improvements in speed.
  • A unified CRF training interface to make things easier for those not training on images.
  • Binaries are now provided for Linux as well as OS X.
  • The code for inference and learning using TRW is now multithreaded, using OpenMP.
  • Switched to a newer version of Eigen.

There are also far more detailed examples, including a full tutorial on how to train a CRF to do “semantic segmentation” on the Stanford Backgrounds dataset. Just using simple color, position, and Histogram of Gradient features, the error rate is 23%, which appears to be state of the art (and better than previous CRF-based approaches). It takes about 90 minutes to train on my 8-core machine, and processes new frames in a little over a second each.

For fun, I also ran this model on a video of someone driving from Alexandria into Georgetown. You can see that the results are far from perfect but are reasonably good. (Notice that it successfully distinguishes trees from grass at 0:12.)

I’m keen to have others use the code (what with the hundreds of hours spent writing it), so please send email if you have any issues.


Personal opinions about graphical models 1: The surrogate likelihood exists and you should use it.

When talking about graphical models with people  (particularly computer vision folks) I find myself advancing a few opinions over and over again.  So, in an effort to stop bothering people at conferences, I thought I’d write a few entries here.

The first thing I’d like to discuss is “surrogate likelihood” training.  (So far as I know, Martin Wainwright was the first person to give a name to this method.)

Background

Suppose we want to fit a Markov random field (MRF).  I’m writing this as a generative model with an MRF for simplicity; pretty much the same story holds with a conditional random field in the discriminative setting.

p({\bf x}) = \frac{1}{Z} \prod_{c} \psi({\bf x}_c) \prod_i \psi(x_i)

Here, the first product is over all cliques/factors in the graph, and the second is over all single variables.  Now, it is convenient to note that MRFs can be seen as members of the exponential family

p({\bf x};{\boldsymbol \theta}) = \exp( {\boldsymbol \theta} \cdot {\bf f}({\bf x}) - A({\boldsymbol \theta}) ),

where

{\bf f}({\bf X})=\{I[{\bf X}_{c}={\bf x}_{c}]|\forall c,{\bf x}_{c}\}\cup\{I[X_{i}=x_{i}]|\forall i,x_{i} \}

is a function consisting of indicator functions for each possible configuration of each clique and variable, and the log-partition function

A(\boldsymbol{\theta})=\log\sum_{{\bf x}}\exp\left(\boldsymbol{\theta}\cdot{\bf f}({\bf x})\right)

ensures normalization.

Now, the log-partition function has the very important (and easy to show) property that the gradient is the expected value of \bf f.

\displaystyle \frac{dA}{d{\boldsymbol \theta}} = \sum_{\bf x} p({\bf x};{\boldsymbol \theta}) {\bf f}({\bf x})
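
To see this, differentiate the definition of A directly; the normalized exponential that appears is exactly p({\bf x};{\boldsymbol \theta}):

\displaystyle \frac{dA}{d{\boldsymbol \theta}} = \sum_{{\bf x}}\frac{\exp({\boldsymbol \theta}\cdot{\bf f}({\bf x}))}{\sum_{{\bf x}'}\exp({\boldsymbol \theta}\cdot{\bf f}({\bf x}'))}\,{\bf f}({\bf x}).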

With a graphical model, what does this mean?  Well, notice that the expected value of, say, I[X_i=x_i] will be exactly p(x_i;{\boldsymbol \theta}). Thus, the expected value of {\bf f} will be a vector containing all univariate and clique-wise marginals.  If we write this as {\boldsymbol \mu}({\boldsymbol \theta}), then we have

\displaystyle \frac{dA}{d{\boldsymbol \theta}} = {\boldsymbol \mu}({\boldsymbol \theta}).

The usual story

Suppose we want to do maximum likelihood learning.  This means we want to set {\boldsymbol \theta} to maximize

L( {\boldsymbol \theta} ) = \frac{1}{N}\sum_{\hat{{\bf x}}}\log p(\hat{{\bf x}};{\boldsymbol \theta})={\boldsymbol \theta}\cdot\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}})-A({\boldsymbol \theta}).

If we want to use gradient ascent, we would just take a small step along the gradient.  This has a very intuitive form: it is the difference between the expected value of \bf f under the data and the expected value of \bf f under the current model.

\displaystyle \frac{dL}{d{\boldsymbol \theta}} = \frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - \sum_{\bf x} p({\bf x};{\boldsymbol \theta}) {\bf f}({\bf x}).

\displaystyle \frac{dL}{d{\boldsymbol \theta}} = \frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - {\boldsymbol \mu}({\boldsymbol \theta}).

Note the lovely property of moment matching here. If we have found a solution, then dL/d{\boldsymbol \theta}=0 and so the expected value of \bf f under the current distribution will be exactly equal to that under the data.

Unfortunately, in a high-treewidth setting, we can’t compute the marginals.  That’s too bad.  However, we have all these lovely approximate inference algorithms (loopy belief propagation, tree-reweighted belief propagation, mean field, etc.).  Suppose we write the resulting approximate marginals as \tilde{{\boldsymbol \mu}}({\boldsymbol \theta}).  Then, instead of taking the above gradient step, why not instead just use

\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - \tilde{{\boldsymbol \mu}}({\boldsymbol \theta})?

That’s all fine!  However, I often see people say/imply/write some or all of the following:

  1. This is not guaranteed to converge.
  2. There is no longer any well-defined objective function being maximized.
  3. We can’t use line searches.
  4. We have to use (possibly stochastic) gradient ascent.
  5. This whole procedure is frightening and shouldn’t be mentioned in polite company.

I agree that we should view this procedure with some suspicion, but it gets far more suspicion than it deserves! The first four points, in my view, are simply wrong.

What’s missing

The critical thing that is missing from the above story is this: Approximate marginals come together with an approximate partition function!

That is, if you are computing approximate marginals \tilde{{\boldsymbol \mu}}({\boldsymbol \theta}) using loopy belief propagation, mean-field, or tree-reweighted belief propagation, there is a well-defined approximate log-partition function \tilde{A}({\boldsymbol \theta}) such that

\displaystyle \tilde{{\boldsymbol \mu}}({\boldsymbol \theta}) = \frac{d\tilde{A}}{d{\boldsymbol \theta}}.

What this means is that you should think, not of approximating the likelihood gradient, but of approximating the likelihood itself. Specifically, what the above is really doing is optimizing the “surrogate likelihood”

\tilde{L}({\boldsymbol \theta}) = {\boldsymbol \theta}\cdot\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}})-\tilde{A}({\boldsymbol \theta}).

What’s the gradient of this? It is

\frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}) - \tilde{{\boldsymbol \mu}}({\boldsymbol \theta}),

or exactly the gradient that was being used above. The advantage of doing things this way is that it is a normal optimization.  There is a well-defined objective. It can be plugged into a standard optimization routine, such as BFGS, which will probably be faster than gradient ascent.  Line searches guarantee convergence. \tilde{A} is perfectly tractable to compute. In fact, if you have already computed approximate marginals, \tilde{A} has almost no cost. Life is good.
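
To make this concrete, here is a minimal Matlab sketch of surrogate likelihood training. The names here are hypothetical stand-ins, not any particular toolbox’s interface: approximate_inference is assumed to return approximate marginals and the approximate log-partition function, and fbar is the empirical feature average \frac{1}{N}\sum_{\hat{{\bf x}}}{\bf f}(\hat{{\bf x}}).

% Negative surrogate likelihood and its gradient, suitable for a
% standard minimizer. Hypothetical helper:
%   [mu, logZ] = approximate_inference(theta)
% returns approximate marginals mu and the approximate log-partition
% function Atilde(theta), e.g. from loopy BP, TRW, or mean field.
function [nll, grad] = neg_surrogate(theta, fbar)
    [mu, logZ] = approximate_inference(theta);
    nll  = -(theta'*fbar - logZ);   % -Ltilde(theta)
    grad = -(fbar - mu);            % -dLtilde/dtheta
end

% Any standard quasi-Newton routine with line searches now applies:
theta = fminunc(@(th) neg_surrogate(th, fbar), theta0, optimset('GradObj','on'));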

The only counterargument I can think of is that mean-field and loopy BP can have different local optima, which might mean that a no-line-search-refuse-to-look-at-the-objective-function-just-follow-the-gradient-and-pray style optimization could be more robust, though I’d like to see that argument made…

I’m not sure of the history, but I think part of the reason this procedure has such a bad reputation (even from people that use it!) might be that it predates the “modern” understanding of inference procedures as producing approximate partition functions as well as approximate marginals.

Functions

I was pleased the other day to have cause to discover that this is valid Matlab syntax:

make_energy_fun = @(x,f) @(y,w) energy(y,x,f,w);

My thoughts:

  • This is great!  I’m creating an anonymous function which takes two parameters and returns another anonymous function taking two parameters, which evaluates the energy with the original two parameters baked in. Then the (original) anonymous function is assigned to the variable make_energy_fun. (See the usage sketch after this list.)
  • But… in a civilized programming language wouldn’t this be, literally, an easy problem on a freshman problem set?
  • …Why is it that we don’t use civilized programming languages, again?
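
For concreteness, here is how the curried function might be used, assuming some function energy(y,x,f,w) and data x, f, y, w are already defined:

% Bake the data x and features f into a new two-argument function.
e_fun = make_energy_fun(x, f);
% Later calls only need the labels y and the weights w;
% e_fun(y, w) is equivalent to energy(y, x, f, w).
val = e_fun(y, w);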

Quotients

It seems to me that thinking of the quotient as a fundamental operation is usually painful and unnecessary when the objects are almost anything other than real (or rational) numbers. Instead, it is better to think of a quotient as a combination of the reciprocal and the product. A good example of this is complex numbers. Suppose that

z=a+bi
w=c+di.

Then, the usual rule for the quotient is that

\displaystyle{z/w = \frac{ac+bd}{c^2+d^2} + i\frac{bc-ad}{c^2+d^2}}.

This qualifies as non-memorizable. On the other hand, take the reciprocal of w

\displaystyle{1/w = \frac{c-di}{c^2+d^2}}.

This is simple enough (“the complex conjugate divided by the squared norm”), and we recover the rule for the quotient easily enough by multiplying with z.
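
Explicitly, multiplying the reciprocal by z gives back the rule above:

\displaystyle{z/w = (a+bi)\,\frac{c-di}{c^2+d^2} = \frac{(ac+bd)+i(bc-ad)}{c^2+d^2}}.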

The same thing holds true for derivatives. I’ve never been able to remember the quotient rule from high school, namely that if f(x)=g(x)/h(x), then

\displaystyle{f'(x) = \frac{h(x)g'(x)-h'(x)g(x)}{h(x)^2}}

Ick. Instead, it is better to note that if r(x) = 1/h(x), then

\displaystyle{r'(x) = \frac{-h'(x)}{h(x)^2}},

along with the standard rule for differentiating products, so that if f(x)=g(x)/h(x)=g(x)r(x), then

\displaystyle{f'(x) = g(x)r'(x)+g'(x)r(x)}.

Another case would be the “matrix quotient” B C^{-1}. Of course, everyone already thinks of the matrix multiplication and inverse as separate operations (to do otherwise would be horrible), but I think that just proves the point…

(Although, I assume that computing BC^{-1} as a single operation would be more numerically stable than first taking an explicit inverse. This might mean something to people who feel that mathematical notation ought to suggest an obvious stable implementation in IEEE floating point (if any).)
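
As a quick illustration in Matlab (a minimal sketch; the built-in mrdivide operator, written B/C, solves X C = B directly rather than forming the inverse first):

C  = hilb(8);       % an ill-conditioned test matrix
B  = randn(4, 8);
X1 = B / C;         % solves X*C = B without an explicit inverse
X2 = B * inv(C);    % forms the explicit inverse first
% X1 typically has a smaller residual norm(X*C - B) than X2.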

Example usage of JGMT

Here is an example of usage of the graphical models toolbox I’ve just released. I’ll use the terminology of “perturbation” to refer to computing loss gradients from the difference of two problems as in this paper, and “truncated fitting” to refer to backpropagating derivatives through the TRW inference process for a fixed number of iterations, as in this paper.

This is basically the simplest possible vision problem. We will train a conditional random field (CRF) to take some noisy image \bf x as input:

and produce marginals that well predict a binary image \bf y as output:

The noisy image \bf x is produced by setting x_i = y_i(1-t_i^n) + (1-y_i)t_i^n, where t_i is random on [0,1] and n is a noise level, as described in this paper. For now, we set n=1.25, which is a pretty challenging amount of noise, as you can see above.
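
In Matlab, this noise model is a one-liner (a sketch assuming y is a binary image with values in {0,1}):

n = 1.25;                           % noise level
t = rand(size(y));                  % t_i uniform on [0,1]
x = y.*(1 - t.^n) + (1 - y).*t.^n;  % noisy input image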

The main file, which does the learning described here, can be found in the toolbox in examples/train_binary_denoisers.m.

First off, we will train a model using perturbation, with the univariate likelihood loss function, based on TRW inference, with a convergence threshold of 1e-5. We do this by typing:

>> train_binary_denoisers('pert_ul_trw_1e-5')
                                                        First-order 
 Iteration  Func-count       f(x)        Step-size       optimality
     0           1         0.692759                         0.096
     1           3         0.686739       0.432616         0.0225  
     2           5         0.682054             10         0.0182  
     3           6         0.670785              1         0.0317  
     4          10         0.542285        48.8796          0.932  
     5          12         0.515509            0.1          0.965  
     6          13         0.439039              1          0.355  
     7          14         0.302082              1          0.279  
     8          15         0.228832              1          0.471  
     9          17         0.223659       0.344464         0.0159  
    10          18         0.223422              1        0.00417  
    11          19         0.223231              1        0.00269  
    12          20         0.223227              1        0.00122  
    13          22         0.223221        4.33583       0.000857  
    14          23         0.223201              1        0.00121  
    15          24         0.223138              1        0.00306  
    16          25         0.223035              1        0.00509  
    17          26         0.222903              1        0.00564  
    18          27         0.222824              1         0.0035  
    19          28         0.222806              1       0.000945  
    20          29         0.222803              1       0.000798  
    21          30         0.222802              1        0.00079  
    22          31         0.222798              1        0.00111  
    23          32         0.222788              1        0.00168  
    24          33         0.222763              1        0.00255  
    25          34         0.222707              1        0.00364  
    26          35         0.222603              1        0.00435  
    27          36         0.222479              1        0.00339  
    28          37         0.222408              1        0.00117  
    29          38         0.222394              1       9.64e-05  
    30          39         0.222393              1       2.05e-05  
    31          40         0.222393              1       4.06e-06  
    32          41         0.222393              1       2.86e-07  

The final model trained has an error rate of 0.096. We can visualize the marginals produced by making an image where each pixel has an intensity proportional to the predicted probability that that pixel takes label “1”.

On the other hand, we might train a model using truncated fitting, with the univariate likelihood, and 5 iterations of TRW.

>> train_binary_denoisers('trunc_ul_trw_5')

Sparing you the details of the optimization, this yields a total error rate of .0984 and the marginals:

Thus, restricting to only 5 iterations pays only a small accuracy penalty compared to a tight convergence threshold.

Or, we could fit using the surrogate conditional likelihood. (Here called E.M., though we don’t happen to have any hidden variables.)

>> train_binary_denoisers('em_trw_1e-5')

This yields an error rate of .1016, and the marginals:

There are many permutations of loss functions, inference algorithms, etc. (Some of which have not yet been published.) Rather than exhaust all the possibilities, I’ll just list a bunch of examples:

'pert_ul_trw_1e-5' (Perturbation + univariate likelihood + TRW with 1e-5 threshold)

'trunc_cl_trw_5' (Truncated Fitting + clique likelihood + TRW with 5 iterations)

'trunc_cl_mnf_5' (Truncated Fitting + clique likelihood + Mean Field with 5 iterations)

'trunc_em_trw_5' (Truncated EM, with TRW used to compute both upper and lower bounds on partition function + 5 iterations)

'em_trw_1e-5' (Regular EM, with TRW used to compute both upper and lower bounds on partition function + 1e-5 threshold)

'em_trw/mnf_1e-5' (Regular EM, with TRW used for upper bound and meanfield used for lower bound + 1e-5 threshold)

'pseudo' (Pseudolikelihood)

As for the pseudolikelihood, let’s try it.

>> train_binary_denoisers('pseudo')

This yields a near-chance error rate of .44, and the marginals (produced by TRW):

Which is why you probably shouldn’t use the pseudolikelihood…

Graphical Models Toolbox

I’m releasing a “toolbox” of code for learning and inference with graphical models. It is focused on parameter learning using marginalization in the high-treewidth setting. Though the code is, in principle, domain independent, I’ve developed it with vision problems in mind. This means that the code A) is efficient (all the inference algorithms are implemented in C++) and B) can handle arbitrary graph structures.

There are, at present, a bunch of limitations:

  • All the inference algorithms are for marginal inference. No MAP inference, at all.
  • The code handles pairwise graphs only.
  • All variables must have the same number of possible values.
  • For tree-reweighted belief propagation, a single edge appearance probability must be used for all edges.

For vision, these are usually no big deal. (Except if you are a MAP inference person. But that is not advisable.) In other domains, though, these might be showstoppers.

The code can be used in a bunch of different ways, depending on if you are looking for a specific tool to use, or a large framework.

  • Just use the low-level [Inference] algorithms, namely A) Tree-Reweighted Belief propagation + variants (Loopy BP, TRW-S) or B) Mean-field. Take care of everything else yourself.
  • Use the [Differentiation] methods (back-TRW or implicit differentiation) to calculate parameter gradients by providing your own loss functions. Do everything else on your own.
  • Use the [Loss] methods (em, implicit_loss) to calculate parameter gradients by providing a true vector x and a loss name (univariate likelihood, clique likelihood, etc.). Unlike the above usages, these methods explicitly consider the conditional learning setting, where one has an input and an output.
  • Use the [CRF] methods to calculate almost everything (deal with parameter ties for a specific type of model, etc.). These methods consider specific classes of CRFs and, given an input, output, loss function, inference method, etc., give the parameter gradient. Employing this gradient in a learning framework is quite straightforward (see the sketch after this list).
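
As a minimal, hypothetical sketch of that last usage (crf_gradient is a stand-in name, not the toolbox’s actual interface; see the included examples for the real calling conventions), plain gradient descent would look something like:

% Hypothetical: g = crf_gradient(theta, x, y, loss, inference) returns
% the loss gradient for input x, output y, a named loss, and a named
% inference method. Any gradient-based optimizer can then be used.
stepsize = 0.1;
for iter = 1:100
    g     = crf_gradient(theta, x, y, 'ul', 'trw');
    theta = theta - stepsize*g;   % simple gradient descent step
end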

Matrix Calculus

Based on a lot of requests from students, I did a lecture on matrix calculus in my machine learning class today. This was based on Minka’s Old and New Matrix Algebra Useful for Statistics and Magnus and Neudecker’s Matrix Differential Calculus with Applications in Statistics and Econometrics.

In making the notes, I used a couple of innovations, which I am still debating the wisdom of. The first is the rule for calculating derivatives of scalar-valued functions of a matrix input f(X). Traditionally, this is written like so:

if dy = \text{tr}(A^T dX) then \frac{dy}{dX} = A.

I initially found the presence of the trace here baffling. However, there is the simple rule that

\text{tr}(A^T B) = A \cdot B

where \cdot is the matrix inner product. This puts the rule in the much more intuitive (to me!) form:

if dy = A \cdot dX then \frac{dy}{dX} = A.

This seems more straightforward, but it comes at a cost. When working with the rule in the trace form, one often needs to do quite a bit of shuffling around of matrices. This is easy to do using the standard trace identities like

\text{tr}(ABC)=\text{tr}(CAB).

If we are to work with inner-products, we will require a similar set of rules. It isn’t too hard to show that there are “dual” identities like

A \cdot (BC) = B \cdot (AC^T) = C \cdot (B^T A)

which allow a similar shuffling with dot products. Still, these are certainly less easy to remember.
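
As a worked example of the inner-product form, take y = {\bf a}^T X {\bf b}. Then

dy = {\bf a}^T \, dX \, {\bf b} = ({\bf a}{\bf b}^T)\cdot dX,

and so \frac{dy}{dX} = {\bf a}{\bf b}^T, with no trace manipulations required.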

There is also a set of other rules that seem to be needed in practice but aren’t included in standard texts. For example, if R is a function that is applied elementwise to a matrix or vector (e.g. \sin), then

d(R(F)) = R'(F) \odot dF

where \odot is the elementwise product. This then requires other (very simple) identities for getting rid of the elementwise product, such as

{\bf x}\odot{\bf y} = \text{diag}({\bf x}) {\bf y} = \text{diag}({\bf y}) {\bf x}.
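
For example, if y = {\bf 1}^T \sin({\bf x}), these two rules give

dy = {\bf 1}\cdot(\cos({\bf x})\odot d{\bf x}) = {\bf 1}^T \text{diag}(\cos({\bf x}))\,d{\bf x} = \cos({\bf x})\cdot d{\bf x},

so \frac{dy}{d{\bf x}} = \cos({\bf x}).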

Another issue with using dot products everywhere is the need to constantly convert between transposes and inner products. (This issue comes up because I prefer an “all vectors are column vectors” convention.) The never-ending debate over whether we should write

{\bf x} \cdot {\bf y}

or

{\bf x}^T {\bf y}

seems to have particular importance here, and I’m not sure of the best choice.