
Tag Archives: machine learning
A Divergence Bound for Hybrids of MCMC and Variational Inference and an Application to Langevin Dynamics and SGVI
At ICML I recently published a paper that I somehow decided to title “A Divergence Bound for Hybrids of MCMC and Variational Inference and an Application to Langevin Dynamics and SGVI”. This paper gives one framework for building “hybrid” algorithms … Continue reading
You deserve better than two-sided finite differences
In calc 101, the derivative is derived as f'(x) = lim_{h→0} (f(x+h) − f(x))/h. So, if you want to estimate a derivative, an easy way to do so would be to just pick some small h and estimate f'(x) ≈ (f(x+h) − f(x))/h. This can work OK. Let’s look at an … Continue reading
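The one-sided estimate above has O(h) error, while the two-sided (central) difference of the post title has O(h²) error. A minimal sketch of the comparison (the test function sin and step size here are illustrative choices, not from the post):

```python
import math

def one_sided(f, x, h):
    # Forward difference: truncation error is O(h)
    return (f(x + h) - f(x)) / h

def two_sided(f, x, h):
    # Central difference: truncation error is O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

x, h = 1.0, 1e-5
exact = math.cos(x)  # d/dx sin(x) = cos(x)
err_one = abs(one_sided(math.sin, x, h) - exact)
err_two = abs(two_sided(math.sin, x, h) - exact)
print(err_one, err_two)  # the central-difference error is several orders of magnitude smaller
```

For this function and step size, the forward-difference error is around 1e-6 while the central-difference error is near the roundoff floor, which is the gap the post's title alludes to.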
Sneaking up on Bayesian Inference (A fable in four acts)
Act 1: Magical Monkeys Two monkeys, Alfred and Betty, live in a parallel universe with two kinds of blocks, green and yellow. Alfred likes green blocks, and Betty prefers the yellow blocks. One day, a Wizard … Continue reading
Favorite things at NIPS
I always enjoy reading conference reports, so I thought I’d mention a few papers that caught my eye. (I welcome any corrections to my summaries of any of these.) 1. Recent Progress in the Structure of Large-Treewidth Graphs and Some … Continue reading
Truncated Bi-Level Optimization
In 2012, I wrote a paper that I probably should have called “truncated bilevel optimization”. I vaguely remembered telling the reviewers I would release some code, so I’m finally getting around to it. The idea of bilevel optimization is quite … Continue reading
Posted in Uncategorized
Tagged cross-validation, machine learning, matlab, optimization, regularization
5 Comments
Reducing Sigmoid computations by (at least) 88.0797077977882%
A classic implementation issue in machine learning is reducing the cost of computing the sigmoid function σ(x) = 1/(1 + exp(−x)). Specifically, it is common to profile your code and discover that 90% of the time is spent computing the exp in that function. This … Continue reading
Posted in Uncategorized
Tagged boltzmann machines, efficiency, machine learning, math, neural networks
9 Comments
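The post's actual speedup trick is behind the "Continue reading" link; as a minimal sketch of the function under discussion, here is a standard numerically stable sigmoid (the branching idea is common folklore, not necessarily the post's method), which only ever calls exp on a non-positive argument:

```python
import math

def sigmoid(x):
    # Logistic sigmoid, 1 / (1 + exp(-x)), written so that exp is
    # only called on a non-positive argument and cannot overflow.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

print(sigmoid(0.0))     # 0.5
print(sigmoid(-800.0))  # underflows gracefully to 0.0 instead of raising OverflowError
```

A naive `1.0 / (1.0 + math.exp(-x))` raises OverflowError for large negative x; the branch avoids that while computing the same value.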
CRF Toolbox Updated
I updated the code for my Graphical Models / Conditional Random Fields toolbox. This is a Matlab toolbox, though almost all the real work is done in compiled C++ for efficiency. The main improvements are: Lots of bugfixes. Various small … Continue reading