Author Archives: justindomke

The second and third best features of LyX you aren’t using.

LyX is a WYSIWYG editor for LaTeX files. It's a little clunky to use at first, and it isn't perfect (thank you, open-source developers; I'm not ungrateful!), but after becoming familiar with it, it's probably the single piece of … Continue reading

You deserve better than two-sided finite differences

In calc 101, the derivative is defined as $f'(x) = \lim_{\epsilon \to 0} \frac{f(x+\epsilon)-f(x)}{\epsilon}$. So, if you want to estimate a derivative, an easy way to do so would be to just pick some small $\epsilon$ and estimate $f'(x) \approx \frac{f(x+\epsilon)-f(x)}{\epsilon}$. This can work OK. Let's look at an … Continue reading
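The excerpt stops just short of the post's argument, but the estimator it describes is easy to make concrete. Below is a minimal Python sketch (function names and step size are my own choices, not from the post) comparing the one-sided estimate above with the two-sided version the title alludes to:

```python
import math

def forward_diff(f, x, eps=1e-6):
    """Estimate f'(x) with a one-sided (forward) finite difference."""
    return (f(x + eps) - f(x)) / eps

def central_diff(f, x, eps=1e-6):
    """Estimate f'(x) with a two-sided (central) finite difference."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Example: f = exp, so f'(1) = e = 2.71828...
print(forward_diff(math.exp, 1.0))  # error shrinks like eps
print(central_diff(math.exp, 1.0))  # error shrinks like eps**2
```

The two-sided estimate's error falls off like $\epsilon^2$ rather than $\epsilon$, though the title suggests the post advocates something better still.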

Sneaking up on Bayesian Inference (A fable in four acts)

Act 1: Magical Monkeys. Two monkeys, Alfred ($A$) and Betty ($B$), live in a parallel universe with two kinds of blocks, green ($G$) and yellow ($Y$). Alfred likes green blocks, and Betty prefers the yellow blocks. One day, a Wizard … Continue reading
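The fable's setup, two monkeys with different block preferences, is exactly the prior-and-likelihood structure of Bayes' rule, so a tiny sketch may help. All the numbers below are invented for illustration; the post's actual probabilities are behind the cut:

```python
# Hypothetical numbers for illustration only.
prior = {"Alfred": 0.5, "Betty": 0.5}            # P(monkey)
likelihood = {                                   # P(block color | monkey)
    "Alfred": {"green": 0.8, "yellow": 0.2},
    "Betty":  {"green": 0.3, "yellow": 0.7},
}

def posterior(color):
    """P(monkey | observed block color) via Bayes' rule."""
    unnorm = {m: prior[m] * likelihood[m][color] for m in prior}
    z = sum(unnorm.values())
    return {m: p / z for m, p in unnorm.items()}

print(posterior("green"))  # seeing a green block shifts belief toward Alfred
```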

Algorithmic Dimensions

There are many dimensions along which we might compare a machine learning or data mining algorithm. A few of the first that come to mind are: 1) Sample complexity / convergence: How much predictive power is the algorithm able to extract … Continue reading

Favorite things NIPS

I always enjoy reading conference reports, so I thought I'd mention a few papers that caught my eye. (I welcome corrections to any of my summaries.) 1. Recent Progress in the Structure of Large-Treewidth Graphs and Some … Continue reading

Truncated Bi-Level Optimization

In 2012, I wrote a paper that I probably should have called "truncated bi-level optimization". I vaguely remember telling the reviewers I would release some code, so I'm finally getting around to it. The idea of bi-level optimization is quite … Continue reading
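The excerpt cuts off before stating the problem, so here is the standard bilevel formulation with the truncation idea sketched in LaTeX (the notation is mine, not necessarily the paper's):

```latex
% Bilevel optimization: the outer objective is evaluated at the
% minimizer of an inner problem that depends on parameters \lambda.
\min_{\lambda} \; L_{\mathrm{out}}\!\bigl(w^*(\lambda)\bigr)
\qquad \text{where} \qquad
w^*(\lambda) = \operatorname*{arg\,min}_{w} \, L_{\mathrm{in}}(w, \lambda).

% Truncated variant: replace the exact arg min with T gradient steps.
w_{t+1} = w_t - \alpha \, \nabla_w L_{\mathrm{in}}(w_t, \lambda),
\qquad t = 0, \dots, T-1.
```

Truncating to a fixed $T$ makes the outer gradient computable by automatic differentiation back through the $T$ unrolled updates, rather than requiring the inner problem to be solved exactly.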

Reducing Sigmoid computations by (at least) 88.0797077977882%

A classic implementation issue in machine learning is reducing the cost of computing the sigmoid function $\sigma(x) = \frac{1}{1+e^{-x}}$. Specifically, it is common to profile your code and discover that 90% of the time is spent computing the $\exp$ in that function. This … Continue reading
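The excerpt ends before the actual trick, so the sketch below is a generic illustration of avoiding $\exp$ calls, not the post's method: because the sigmoid saturates, for large $|x|$ one can return 0 or 1 directly. The cutoff of 37 is my choice; $e^{-37} \approx 8.5 \times 10^{-17}$, so the skipped value is within $10^{-16}$ of the returned one.

```python
import math

def sigmoid(x):
    """Numerically stable logistic sigmoid."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)       # exp of a negative number: no overflow
    return z / (1.0 + z)

def lazy_sigmoid(x, cutoff=37.0):
    """Skip the exp entirely when the answer saturates.

    Illustration only, not the trick from the post: exp(-37) ~ 8.5e-17,
    so for |x| > 37 the true sigmoid is within 1e-16 of 0.0 or 1.0.
    """
    if x > cutoff:
        return 1.0
    if x < -cutoff:
        return 0.0
    return sigmoid(x)
```

How much a shortcut like this saves depends entirely on the distribution of inputs you actually see.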
