You have some function $f(x)$. You have figured out how to compute its gradient, $g(x) = \nabla f(x)$. Now, however, you find that you are implementing some algorithm (like, say, Stochastic Meta Descent), and you need to compute the product of the Hessian $H(x)$ with certain vectors. You become very upset, because either A) you don't feel like deriving the Hessian (probable), or B) the Hessian has $N^2$ elements, where $N$ is the length of $x$, and that is too big to deal with (more probable). What to do? Behold:

Consider the Hessian-vector product we want to compute, $Hv$. For small $r$,

$$g(x + rv) \approx g(x) + r H v.$$

And so,

$$Hv \approx \frac{g(x + rv) - g(x)}{r}.$$
This trick has apparently been around forever. The approximation becomes exact in the limit $r \to 0$. Of course, for small $r$, numerical problems will also start to kill you. Pearlmutter's algorithm is a way to compute $Hv$ exactly, with the same complexity as a gradient evaluation, without suffering rounding errors. Unfortunately, Pearlmutter's algorithm is kind of complex, while the above is absolutely trivial.
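As a minimal sketch of the forward-difference trick (function and variable names here are illustrative, not from the post): given any routine `g` that evaluates the gradient, one extra gradient call gives an approximate Hessian-vector product.

```python
import numpy as np

def hvp_forward(g, x, v, r=1e-6):
    """Approximate H(x) v with the forward difference (g(x + r v) - g(x)) / r."""
    return (g(x + r * v) - g(x)) / r

# Sanity check on a quadratic f(x) = 0.5 x^T A x, where g(x) = A x and H = A,
# so the finite difference should match A v almost exactly:
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g = lambda x: A @ x
x = np.array([1.0, -1.0])
v = np.array([0.5, 2.0])
print(hvp_forward(g, x, v))  # close to A @ v = [3.0, 6.5]
```

Note the trade-off mentioned above: shrinking `r` reduces discretization error but amplifies floating-point cancellation in the subtraction.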

**Update**: Pearlmutter himself comments below that if we want to use the finite difference trick, we would do better to use the centered approximation:

$$Hv \approx \frac{g(x + rv) - g(x - rv)}{2r}.$$
This expression stays closer to the true value for larger $r$ (the discretization error is $O(r^2)$ rather than $O(r)$), meaning that we are less likely to get hurt by rounding error. This is nicely illustrated on page 6 of these notes.
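The centered version is the same one-liner with one more gradient call (again a sketch with illustrative names):

```python
import numpy as np

def hvp_centered(g, x, v, r=1e-4):
    """Approximate H(x) v with a centered difference: O(r^2) discretization error."""
    return (g(x + r * v) - g(x - r * v)) / (2 * r)

# g is the gradient of f(x) = x0^3 + x0 * x1^2:
g = lambda x: np.array([3 * x[0]**2 + x[1]**2, 2 * x[0] * x[1]])
x = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])
# The true Hessian at x is [[6, 4], [4, 2]], so H v = [10, 6]:
print(hvp_centered(g, x, v))
```

Because `g` here is quadratic in `x`, the centered difference is exact up to rounding, which is a handy way to unit-test an implementation.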


Thanks for the cite. Couple nits.

(a) If you’re going to use finite differences, you want (g(x+rv)-g(x-rv)) / 2r, to reduce discretization error from O(r) to O(r^2). This does not solve the “both sins of numeric analysis in one expression” issue, of course.

(b) If you have AD available, you can convert g() to Hv() using a forward-mode AD xform. Most current high-performance AD tools won’t do nested applications though, and if you got g() from f() by applying reverse-mode AD to f() then going from g() to Hv() is a nested application.
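To make point (b) concrete, here is a toy forward-mode transform using dual numbers (everything below is an illustrative sketch, not any particular AD tool's API): pushing the direction `v` through a hand-written gradient routine `g` yields $Hv$ exactly, with no step size to tune.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; the 'dot' part carries derivatives."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

# Hand-written gradient of f(x) = x0^2 * x1 + x1^3:
def g(x):
    return [2 * x[0] * x[1], x[0] * x[0] + 3 * x[1] * x[1]]

def hessian_vector(g, x, v):
    # Seed each input with its component of v; the dot part of g's
    # output is then the directional derivative of g, i.e. H(x) v.
    duals = [Dual(xi, vi) for xi, vi in zip(x, v)]
    return [out.dot for out in g(duals)]

# At x = (1, 2) the Hessian is [[4, 2], [2, 12]], so for v = (1, 0):
print(hessian_vector(g, [1.0, 2.0], [1.0, 0.0]))  # [4.0, 2.0]
```

If `g` itself were produced by reverse-mode AD rather than written by hand, running dual numbers through it would be exactly the nested application the comment warns about.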

(c) This trick was known before the paper you cite (see refs therein, e.g., Werbos) so although I’m flattered…

Jeff Siskind and I have been working on a super-aggressive AD-enabled optimizing compiler that can handle nesting. You’re welcome to play with it (on web, GPLed, “stalingrad” compiler for “VLAD” language), and we’d love any feedback, but it is a research prototype without amenities.

Thanks for the response. I've updated the post to mention that one should really use centered differences. (I should know better than that, really.)

I couldn’t quite figure out where the finite difference trick originated. I meant to imply in the post that it came from the misty past somewhere.

I’ve been very interested in your stalingrad project. It seems like the only game in town in terms of joint autodiff + optimizing compilers. I tried downloading the code and running the examples a few months ago. One inevitable comment is that the project could use more documentation. I was able to figure out, by searching around newsgroup postings and the like, that: 1) stalin is a fast Scheme-to-C compiler, 2) vlad is the name of a new language closely related to Scheme, and 3) stalingrad is an implementation of vlad, but this took some effort. I haven’t been able to find any documentation for vlad, aside from the brief description on p. 17 of the paper. Being able to write Scheme-ish code and get Fortran speed with free nested application of AD is *very* attractive!

The trick is used to calculate the product Hv, but in the weight update equation we need H^-1 v.

Am I wrong somewhere?

Which weight update equation do you mean? For Newton’s method, sure, you’d want inv(H)v, and the trick above could only be useful inside some kind of iterative method for solving the linear system. On the other hand, with Stochastic Meta Descent (which was one motivation for this):

https://justindomke.wordpress.com/stochastic-meta-descent/

You do indeed just want H*v. Many other applications are similar.

