Example usage of JGMT

Here is an example of using the graphical models toolbox I’ve just released. I’ll use the term “perturbation” to refer to computing loss gradients from the difference of two problems, as in this paper, and “truncated fitting” to refer to backpropagating derivatives through the TRW inference process for a fixed number of iterations, as in this paper.

This is basically the simplest possible vision problem. We will train a conditional random field (CRF) to take some noisy image \bf x as input:

and produce marginals that accurately predict a binary image \bf y as output:

The noisy image \bf x is produced by setting x_i = y_i(1-t_i^n) + (1-y_i)t_i^n, where t_i is random on [0,1] and n is a noise level, as described in this paper. For now, we set n=1.25, which is a pretty challenging amount of noise, as you can see above.
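In case the noise model is unclear, here is a minimal numpy sketch of how such an \bf x could be generated from a binary image \bf y. (The toolbox itself is MATLAB; this snippet and the function name add_noise are just for illustration.)

    import numpy as np

    def add_noise(y, n=1.25):
        # x_i = y_i*(1 - t_i^n) + (1 - y_i)*t_i^n, with t_i random on [0, 1]
        t = np.random.rand(*y.shape)
        return y * (1 - t**n) + (1 - y) * t**n

    y = (np.random.rand(50, 50) > 0.5).astype(float)  # a random binary "image"
    x = add_noise(y)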

The main file, which does the learning described here, can be found in the toolbox in examples/train_binary_denoisers.m.

First off, we will train a model using perturbation, with the univariate likelihood loss function, based on TRW inference, with a convergence threshold of 1e-5. We do this by typing:

>> train_binary_denoisers('pert_ul_trw_1e-5')
                                                        First-order 
 Iteration  Func-count       f(x)        Step-size       optimality
     0           1         0.692759                         0.096
     1           3         0.686739       0.432616         0.0225  
     2           5         0.682054             10         0.0182  
     3           6         0.670785              1         0.0317  
     4          10         0.542285        48.8796          0.932  
     5          12         0.515509            0.1          0.965  
     6          13         0.439039              1          0.355  
     7          14         0.302082              1          0.279  
     8          15         0.228832              1          0.471  
     9          17         0.223659       0.344464         0.0159  
    10          18         0.223422              1        0.00417  
    11          19         0.223231              1        0.00269  
    12          20         0.223227              1        0.00122  
    13          22         0.223221        4.33583       0.000857  
    14          23         0.223201              1        0.00121  
    15          24         0.223138              1        0.00306  
    16          25         0.223035              1        0.00509  
    17          26         0.222903              1        0.00564  
    18          27         0.222824              1         0.0035  
    19          28         0.222806              1       0.000945  
    20          29         0.222803              1       0.000798  
    21          30         0.222802              1        0.00079  
    22          31         0.222798              1        0.00111  
    23          32         0.222788              1        0.00168  
    24          33         0.222763              1        0.00255  
    25          34         0.222707              1        0.00364  
    26          35         0.222603              1        0.00435  
    27          36         0.222479              1        0.00339  
    28          37         0.222408              1        0.00117  
    29          38         0.222394              1       9.64e-05  
    30          39         0.222393              1       2.05e-05  
    31          40         0.222393              1       4.06e-06  
    32          41         0.222393              1       2.86e-07  

The final trained model has an error rate of 0.096. We can visualize the marginals it produces by making an image where each pixel’s intensity is proportional to the predicted probability that the pixel takes label “1”.

On the other hand, we might train a model using truncated fitting, with the univariate likelihood, and 5 iterations of TRW.

>> train_binary_denoisers('trunc_ul_trw_5')

Sparing you the details of the optimization, this yields a total error rate of .0984 and the marginals:

Thus, restricting TRW to only 5 iterations pays only a small accuracy penalty compared to running it to a tight convergence threshold.

Or, we could fit using the surrogate conditional likelihood. (Here called E.M., though we don’t happen to have any hidden variables.)

>> train_binary_denoisers('em_trw_1e-5')

This yields an error rate of .1016, and the marginals:

There are many permutations of loss functions, inference algorithms, etc. (Some of which have not yet been published.) Rather than exhaust all the possibilities, I’ll just list a bunch of examples:

'pert_ul_trw_1e-5' (Perturbation + univariate likelihood + TRW with 1e-5 threshold)

'trunc_cl_trw_5' (Truncated Fitting + clique likelihood + TRW with 5 iterations)

'trunc_cl_mnf_5' (Truncated Fitting + clique likelihood + Mean Field with 5 iterations)

'trunc_em_trw_5' (Truncated EM, with TRW used to compute both upper and lower bounds on partition function + 5 iterations)

'em_trw_1e-5' (Regular EM, with TRW used to compute both upper and lower bounds on partition function + 1e-5 threshold)

'em_trw/mnf_1e-5' (Regular EM, with TRW used for upper bound and meanfield used for lower bound + 1e-5 threshold)

'pseudo' (Pseudolikelihood)

Speaking of the pseudolikelihood, let’s try it.

>> train_binary_denoisers('pseudo')

This yields a near-chance error rate of .44, and the marginals (produced by TRW):

Which is why you probably shouldn’t use the pseudolikelihood…


Graphical Models Toolbox

I’m releasing a “toolbox” of code for learning and inference with graphical models. It is focused on parameter learning using marginalization in the high-treewidth setting. Though the code is, in principle, domain independent, I’ve developed it with vision problems in mind. This means that the code is A) efficient (all the inference algorithms are implemented in C++) and B) able to handle arbitrary graph structures.

There are, at present, a bunch of limitations:

  • All the inference algorithms are for marginal inference. No MAP inference at all.
  • The code handles pairwise graphs only.
  • All variables must have the same number of possible values.
  • For tree-reweighted belief propagation, a single edge appearance probability must be used for all edges.

For vision, these are usually no big deal. (Except if you are a MAP inference person. But that is not advisable.) In other domains, though, these might be showstoppers.

The code can be used in a bunch of different ways, depending on if you are looking for a specific tool to use, or a large framework.

  • Just use the low-level [Inference] algorithms, namely A) Tree-Reweighted Belief propagation + variants (Loopy BP, TRW-S) or B) Mean-field. Take care of everything else yourself.
  • Use the [Differentiation] methods (back-TRW or implicit differentiation) to calculate parameter gradients by providing your own loss functions. Do everything else on your own.
  • Use the [Loss] methods (em, implicit_loss) to calculate parameter gradients by providing a true vector x and a loss name (univariate likelihood, clique likelihood, etc.). Unlike the above usages, these methods explicitly consider the conditional learning setting, where one has an input and an output.
  • Use the [CRF] methods to calculate almost everything (deal with parameter ties for a specific type of model, etc.). These methods consider specific classes of CRFs and, given an input, output, loss function, inference method, etc., give the parameter gradient. Employing this gradient in a learning framework is quite straightforward (see the sketch below).
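To illustrate that last point, here is a rough sketch of how such a gradient might be plugged into a plain gradient-descent loop. The function crf_loss_and_gradient is a hypothetical stand-in (here just a toy quadratic so the loop actually runs), not the toolbox’s real API:

    import numpy as np

    def crf_loss_and_gradient(theta, x, y):
        # Hypothetical stand-in for a [CRF] method: in the toolbox this would
        # run inference on the input x with parameters theta and return the
        # loss against the true labels y along with its gradient.  Here it is
        # just a toy quadratic so that the loop below is runnable.
        g = theta - y.mean()
        return 0.5 * np.dot(g, g), g

    def learn(data, theta0, step=0.1, iters=100):
        # Plain batch gradient descent on the average loss over the data.
        theta = theta0.copy()
        for _ in range(iters):
            grad = np.zeros_like(theta)
            for x, y in data:
                _, g = crf_loss_and_gradient(theta, x, y)
                grad += g / len(data)
            theta = theta - step * grad
        return theta

    data = [(np.random.rand(10), np.random.rand(10)) for _ in range(5)]
    theta = learn(data, np.zeros(3))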

Matrix Calculus

Based on a lot of requests from students, I did a lecture on matrix calculus in my machine learning class today. This was based on Minka’s Old and New Matrix Algebra Useful for Statistics and Magnus and Neudecker’s Matrix Differential Calculus with Applications in Statistics and Econometrics.

In making the notes, I used a couple of innovations, which I am still debating the wisdom of. The first is the rule for calculating derivatives of a scalar-valued function of a matrix input, f(X). Traditionally, this is written like so:

if dy = \text{tr}(A^T dX) then \frac{dy}{dX} = A.

I initially found the presence of the trace here baffling. However, there is the simple rule that

\text{tr}(A^T B) = A \cdot B

where \cdot is the matrix inner product. This puts the rule in the much more intuitive (to me!) form:

if dy = A \cdot dX then \frac{dy}{dX} = A.

This seems more straightforward, but it comes at a cost. When working with the rule in the trace form, one often needs to do quite a bit of shuffling around of matrices. This is easy to do using the standard trace identities like

\text{tr}(ABC)=\text{tr}(CAB).

If we are to work with inner-products, we will require a similar set of rules. It isn’t too hard to show that there are “dual” identities like

A \cdot (BC) = B \cdot (AC^T) = C \cdot (B^T A)

which allow a similar shuffling with dot products. Still, these are certainly less easy to remember.
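As a small worked example (my own, not taken from either text), consider y = {\bf a}^T X {\bf b}. Writing the differential as an inner product and then shuffling,

dy = {\bf a}^T \, dX \, {\bf b} = {\bf a} \cdot (dX \, {\bf b}) = dX \cdot ({\bf a} {\bf b}^T),

where the last step uses the first of the identities above with A = {\bf a}, B = dX, C = {\bf b}. The intuitive form of the rule then gives \frac{dy}{dX} = {\bf a} {\bf b}^T.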

There are also some other rules that seem to be needed in practice, but aren’t included in standard texts. For example, if R is a function that is applied elementwise to a matrix or vector (e.g. \sin), then

d(R(F)) = R'(F) \odot dF

where \odot is the elementwise product. This then requires other (very simple) identities for getting rid of the elementwise product, such as

{\bf x}\odot{\bf y} = \text{diag}({\bf x}) {\bf y} = \text{diag}({\bf y}) {\bf x}.
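For a quick example of the elementwise rule (again, my own), take y = \sum_{ij} \sin(X_{ij}). The rule gives d(\sin(X)) = \cos(X) \odot dX, and summing all the entries of \cos(X) \odot dX is exactly the inner product \cos(X) \cdot dX, so

\frac{dy}{dX} = \cos(X).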

Another issue with using dot products everywhere is the need to constantly convert between transposes and inner products. (This issue comes up because I prefer an “all vectors are column vectors” convention.) The never-ending debate over whether we should write

{\bf x} \cdot {\bf y}

or

{\bf x}^T {\bf y}

seems to have particular importance here, and I’m not sure of the best choice.


Automatic Differentiation Without Compromises

Automatic differentiation is a classic numerical method that takes a program, and (with minimal programmer effort) computes the derivatives of that program. This is very useful because, when optimizing complex functions, a lot of time tends to get spent manually deriving and then writing code for derivatives. Some systems like cvx do a great job of recognizing simple problem types (linear programs, cone programs, etc.), but can’t handle arbitrary functions.

At present, automatic differentiation involves a degree of pain. Typically, the user needs to write C or C++ or Fortran code and augment the code with “taping” commands to get it to work. There is also usually a significant computational overhead.

In a just world, there would be no pain, and there would be no computational overhead. In a just world, reverse-mode autodiff would work like this:

STEP 1 The user writes the function they want to optimize in a convenient high level language like Python.

This is easy, but slow, and provides no derivatives.

Consider a function that sums the elements in a matrix product. The user (apparently unaware that numpy provides matrix multiplication) writes:

    from numpy import dot

    def fun(A,B):
        rez = 0
        for i in range(A.shape[0]):
            for j in range(B.shape[1]):
                rez += dot(A[i,:],B[:,j])
        return rez

STEP 2 The user feeds their high level function into a library. This library uses operator overloading magic to build an expression graph representation of the function. This expression graph is then used to generate efficient machine code for both the function and its derivatives.
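To give a rough feel for the operator-overloading idea, here is a toy sketch of how symbolic inputs can record an expression graph as ordinary Python code runs. (This is just the general idea, not necessarily how etree implements it.)

    class Node:
        # A toy expression-graph node: records the operation and its parents
        # rather than computing a numerical value.
        def __init__(self, op, parents=(), value=None):
            self.op, self.parents, self.value = op, parents, value

        def __add__(self, other):
            return Node('add', (self, as_node(other)))

        def __mul__(self, other):
            return Node('mul', (self, as_node(other)))

    def as_node(x):
        return x if isinstance(x, Node) else Node('const', value=x)

    # Running ordinary-looking code on Node inputs records the graph; a code
    # generator (or a reverse-mode sweep) can then walk it.
    a, b = Node('input'), Node('input')
    expr = a * b + a   # a Node representing add(mul(a, b), a)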

The user should merely have to do something like

    A = numpy.random.rand(3,5)
    B = numpy.random.rand(5,4)
    dfun = compile_fun(fun,A,B)

They could then compute the function and derivatives (all in compiled code) using

    F,[dFdA,dFdB] = dfun(A,B)

Thus, the user gets everything, without having to compromise: maximum performance and maximum convenience.


Since this is how things should work, I spent some time this past summer writing a library that does just that. The above code is actually a working example, calling the compile_fun method– the only method a consumer of the library needs to know about. This library comes tantalizingly close to my goals.

Another simple example, mixing matrices and scalars:

    from numpy import random, sum

    def fun(A,B,a):
        return sum(sum((A*a-B)**2))

    a = 2.
    A = random.rand(15,22)
    B = random.rand(15,22)
    import etree
    dfun = etree.compile_fun(fun,A,B,a)
    F,[dFdA,dFdB,dFda] = dfun(A,B,a)

Consider the above matrix multiplication example, with 60×60 matrices. The library can generate the expression graph, transform that into a simple bytecode, then transform that bytecode into C code acceptably quickly– 11.97 seconds on my machine.  The code then runs very fast:  .0086 seconds as opposed to .0254 seconds for just running the original function or .0059 seconds for calling numpy’s matrix multiplication routine dot(A,B). (Which, of course, does not provide derivatives!)

There is, inevitably, one large problem:  horrific compilation times on large functions.  To take the C code and transform it into machine code, gcc takes 1116 seconds.  Why, you ask?  The reason is that gcc is trying to compile a single 36.5 MB function:

#include "math.h"
void rock(double* A, double* D){
A[12800]=A[0] * A[6400];
A[12801]=A[1] * A[6480];
A[12802]=A[12800] + A[12801];
A[12803]=A[2] * A[6560];
A[12804]=A[12802] + A[12803];
A[12805]=A[3] * A[6640];
// thousands of pages of same
D[12801] = D[12802];
D[1] += D[12801] * A[6480];
D[6480] += D[12801] * A[1];
D[0] += D[12800] * A[6400];
D[6400] += D[12800] * A[0];
}

Though this is very simple code, it seems that gcc still uses some super-linear algorithms, so it can’t handle large functions.

(To be clear, for small or medium sized problems, the code compiles very quickly.)

Now, clearly, I am doing something wrong. Frankly, I am writing this largely in the hope that someone who actually knows something about compilers and programming languages can enlighten me. The options I see are:

1) Use a better compiler.  I’ve tried this.  I can turn off gcc’s optimization, which decreases compilation times, though they remain very slow.  Alternatively, I could use a fast compiler.  tcc flies right through the code.  Unfortunately, the machine code that tcc generates is very slow– I think due to poor register allocation.

2) Don’t generate intermediate C code– directly generate machine code or assembly.  This would certainly work, but there is no way I am up to it. Maybe it would make sense to target the JVM?

3) Use one of the newfangled just in time virtual machine things, like LLVM or libJIT.  This seems promising, but after quite a bit of research I’m still really unsure about how well this is likely to work, or how to proceed.

4) Write a fast interpreter, and just pass the bytecode to this.  I’ve tried this as well.  This is basically the strategy used by some autodiff packages, e.g. cppad. This can get reasonable performance, but will never compete with native code. The reason is– I think– that the interpreter is basically a giant switch statement, and all the branching screws up the pipeline on the CPU.
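To make option 4 concrete, an interpreter of this sort boils down to a loop over instructions with a big branch on the opcode, something like this toy sketch (not cppad’s code or mine):

    # Toy bytecode interpreter: each instruction names an op, a destination
    # slot, and two argument slots.  The if/elif chain is the "giant switch"
    # whose unpredictable branching hurts the CPU pipeline.
    def interpret(bytecode, mem):
        for op, dst, a, b in bytecode:
            if op == 'add':
                mem[dst] = mem[a] + mem[b]
            elif op == 'mul':
                mem[dst] = mem[a] * mem[b]
        return mem

    mem = interpret([('mul', 't', 'x', 'y'), ('add', 'z', 't', 'x')],
                    {'x': 2.0, 'y': 3.0})   # computes z = x*y + x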

Anyway, the code is available as etree.py, and a tiny file mycode.h. (Let’s say it’s available under the MIT license, and the condition that you not laugh too much at my programming skills.) To try it out, just start python and do

import etree
etree.runtests()

At this point, the library is useful (I use it!) but only for relatively small problems, containing no more than, say, a few thousand variables.

If anyone has ideas for better compiler flags, or knows about the feasibility of targeting LLVM or libJIT instead of C code, please get in touch.

Disclaimer: There are a bunch of minor issues with the library. These are all fixable, but it isn’t really worth the effort unless the compilation times can be dealt with.

  • The function cannot branch. (No if statements).
  • If you want to multiply a matrix by a scalar, the scalar must come after the matrix.
  • I know it works on 32-bit Ubuntu with everything installed via the Synaptic package manager, and on 64-bit OS X, simply using Sage (with this patch to make scipy/weave work). If you want to play around with this, but aren’t a Python programmer, I would suggest using Sage.

Notation is evil

Exhibit A:

We have the symbol \propto, with the interpretation

x \propto y \leftrightarrow x = c y for some number c,

but there doesn’t appear to exist a symbol (Here, I use a boxed question mark: \boxed{?} to denote the symbol I claim doesn’t exist) with the interpretation

x \boxed{?} y \leftrightarrow x = y + c for some number c.

This pains me.  People sometimes have to resort to writing something like

y = f(x)+\text{const} (1)

= g(x) + \text{const} (2)

where the constants are (in general) different on lines (1) and (2).

Even worse (or maybe not?), sometimes people seem to leave exponents lying around when they otherwise wouldn’t, e.g. write

\exp(y) \propto \exp(f(x)) \propto \exp(g(x)).

Exhibit B:

We have no symbol meaning “normalized sum”.  How many thousands of times have you seen some variant of

y = \frac{1}{N}\sum_{x \in X} f(x)

where N=|X|?

Why do we need to define N?  Can’t we just use another mystery symbol to write

y = \boxed{?}_{x \in X} f(x)?

In some situations you could use \text{mean}, but that doesn’t always really work and is rarely done.


Fitting an inference algorithm instead of a model

One recent trend seems to be the realization that one can get better performance by tuning a CRF (Conditional Random Field) to a particular inference algorithm. Basically, forget about the distribution that the CRF represents, and instead care only about how accurate the results that pop out of inference are. An extreme example of this is the recent paper Learning Real-Time MRF Inference for Image Denoising by Adrian Barbu.

The basic idea is to fit a FoE (Field of Experts) image prior such that when one takes a very few gradient descent steps on a denoising posterior, the results are accurate. From the abstract:

We argue that through appropriate training, a MRF/CRF model can be trained to perform very well on a suboptimal inference algorithm. The model is trained together with a fast inference algorithm through an optimization of a loss function […] We apply the proposed method to an image denoising application […] with a 1-4 iteration gradient descent inference algorithm. […] the proposed training approach obtains an improved benchmark performance as well as a 1000-3000 times speedup compared to the Fields of Experts MRF.

The implausible-sounding 1000-fold speedup comes simply from using only 4 iterations of gradient descent rather than several thousand. (Incidentally, L-BFGS requires far fewer iterations for this problem.) The results are a bit better than those of the generative FoE model, which takes much more work for training and inference.

I have every confidence that this does work well, and similar strategies could probably be used to create fast inference models/algorithms for many different problems. My thesis was largely an attempt to do the same thing for marginal, rather than MAP inference.

The disturbing/embarrassing question, for me, is: does this really have anything to do with probabilistic modeling anymore? Formally speaking, a probability density is being fit, but I doubt it would transfer to, say, inpainting, or that samples from the density would look like natural images. The best interpretation of what is happening might be that one is simply fitting a big, nonlinear, black-box function approximator.

It seems that the more effort we expend to wring the best performance out of a probabilistic model, the less “probabilistic” it is.

P.S. Some of my friends have invited me to never mention autodiff again, ever, but this is one of the many papers where I think the learning optimization would be made much easier/faster by using it.
