I ran into the problem today. To my surprise, I couldn’t find a standard solution.
Suppose that your data is , which you have stored as where . The obvious thing to do is to just exponentiate and then compute the variance. That would be something like the following:
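Concretely, the naive computation might be sketched like this (a reconstruction, not the original snippet; I'm assuming `logx` is a NumPy array holding the log-domain values):

```python
import numpy as np

def naive_var(logx):
    # Exponentiate back to the original domain, then take the variance.
    # This overflows to inf/nan as soon as entries of logx exceed roughly
    # 700 in double precision (about 88 in single precision).
    return np.var(np.exp(logx))
```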
This of course is a terrible idea: When is large, you can’t even write down without running into numerical problems.
The first idea I had for this problem was relatively elegant. We can of course represent the variance as
Instead of calculating and , why not calculate the log of these quantities?
To do this, we can introduce a “log domain mean” operator, a close relative of the good old scipy.special.logsumexp:
def log_domain_mean(logx):
    "np.log(np.mean(np.exp(logx))) but more stable"
    n = len(logx)
    damax = np.max(logx)
    return np.log(np.sum(np.exp(logx - damax))) \
           + damax - np.log(n)
Next, introduce a “log-sub-add” operator. (A variant of np.logaddexp.)
def logsubadd(a, b):
    "np.log(np.exp(a)-np.exp(b)) but more stable"
    return a + np.log(1 - np.exp(b - a))
Then, we can compute the log-variance as
def log_domain_var(logx):
    a = log_domain_mean(2*logx)
    b = log_domain_mean(logx)*2
    c = logsubadd(a, b)
    return c
Here a is the log of the mean of the squares, while b is the log of the square of the mean.
Nice, right? Well, it’s much better than the first solution. But it isn’t that good. The problem is that when the variance is small, a and b are close. When they are both close and large, logsubadd runs into numerical problems. It’s not clear that there is a way to fix this problem with logsubadd.
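To see the cancellation concretely, here is a small illustration of my own (not from the original post): when a and b differ by around 1e-9, the inner step `1 - np.exp(b - a)` loses most of its significant digits, because exp(b - a) is rounded to machine precision before the subtraction; `np.expm1` avoids that rounding step.

```python
import numpy as np

a, b = 20.0, 20.0 - 1e-9

# logsubadd's inner step: catastrophic cancellation, since exp(b - a)
# is rounded to a float before being subtracted from 1.
naive = np.log(1 - np.exp(b - a))

# expm1 computes exp(b - a) - 1 directly, keeping full precision.
accurate = np.log(-np.expm1(b - a))
```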
To solve this, abandon elegance!
For the good solution, the math is a series of not-too-intuitive transformations. (I put them at the end.) These start with
and end with
Why this form? Well, we’ve reduced to things we can do relatively stably: Compute the log-mean, and do a (small variant of) log-sum-exp.
def log_domain_var(logx): """like np.log(np.var(np.exp(logx))) except more stable""" n = len(logx) log_xmean = log_domain_mean(logx) return np.log(np.sum( np.expm1(logx-log_xmean)**2))\ + 2*log_xmean - np.log(n)
This uses the log_domain_mean implementation from above, and also np.expm1 to compute exp(a)-1 in a more stable way when a is close to zero.
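As a quick sanity check of my own (repeating the definitions above so the snippet is self-contained), the stable version agrees with the naive formula on well-scaled data, and shifting logx by a constant c shifts the log-variance by exactly 2c:

```python
import numpy as np

def log_domain_mean(logx):
    "np.log(np.mean(np.exp(logx))) but more stable"
    n = len(logx)
    damax = np.max(logx)
    return np.log(np.sum(np.exp(logx - damax))) + damax - np.log(n)

def log_domain_var(logx):
    """like np.log(np.var(np.exp(logx))) except more stable"""
    n = len(logx)
    log_xmean = log_domain_mean(logx)
    return np.log(np.sum(np.expm1(logx - log_xmean)**2)) + 2*log_xmean - np.log(n)

logx = np.log(np.array([1.0, 2.0, 3.0, 4.0]))
stable = log_domain_var(logx)
naive = np.log(np.var(np.exp(logx)))
```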
Why is this stable? Is it really stable? Well, umm, I’m not sure. I derived transformations that “looked stable” to me, but there’s no proof that this is best. I’d be surprised if a better solution wasn’t possible. (I’d also be surprised if there isn’t a paper from 25+ years ago that describes that better solution.)
In any case, I’ve experimentally found that this function will (while working in single precision) happily compute the variance even when logx is in the range of to , which is about 28 orders of magnitude better than the naive solution and sufficient for my needs.
As always, failure cases are probably out there. Numerical instability always wins when it can be bothered to make an effort.
“Approximate Bayesian Computation” sounds like a broad class of methods that would potentially include things like message passing, variational methods, MCMC, etc. However, for historical reasons, the term is used for a very specialized class of methods.
The core idea is as follows:
Sample from the posterior using rejection sampling, with the accept/reject decision made by generating a synthetic dataset and comparing it to the observed one.
Take a model . Assume we observe some fixed and want to sample from . Assume is discrete.
Algorithm (Basic ABC):
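In code, the basic loop might look like the following sketch. The two-valued prior and the binomial likelihood are my own toy choices for illustration, not from the post:

```python
import numpy as np

def basic_abc(x_obs, n_samples, rng):
    """Exact rejection ABC for a toy model: z drawn uniformly over two
    candidate coin biases, x ~ Binomial(10, bias[z]). Returns posterior
    samples of z given the observation x_obs."""
    biases = [0.2, 0.8]
    samples = []
    while len(samples) < n_samples:
        z = rng.integers(2)                    # draw z from the prior
        x_synth = rng.binomial(10, biases[z])  # generate a synthetic dataset
        if x_synth == x_obs:                   # accept only on an exact match
            samples.append(int(z))
    return samples

rng = np.random.default_rng(0)
samples = basic_abc(x_obs=8, n_samples=200, rng=rng)
```

With x_obs = 8 heads out of 10, almost all accepted samples pick the 0.8-bias coin, as the exact posterior says they should.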
Claim: This algorithm returns an exact sample from the posterior
Proof: The probability of returning is the probability of (i) drawing from the prior, and (ii) sampling conditional on . Thus
The probability of returning in any one iteration is the posterior times the constant So this gives exact samples.
There are two special properties:
Of course, the problem is that you’ll typically be unlikely to exactly hit . Formally speaking, the probability of returning anything in a given loop is
In high dimensions, will typically be small unless you tend to get the same data regardless of the value of the latent variable. (In which case is the problem even interesting?)
This is almost exactly rejection sampling. Remember that in general if you want to sample from you need a proposal distribution you can sample from and you need to know a constant such that The above algorithm is just using
Then, is a valid proposal since is equivalent to which is always true.
Why isn’t this exactly rejection sampling? In traditional descriptions of rejection sampling, you’d need to calculate and . In the ABC setting we can’t calculate either of these, but we exploit that we can calculate the ratio
To increase the chance of accepting (or to make the algorithm work at all if is continuous) you need to add a “slop factor” of . You change the algorithm to instead accept if for some small . The value of introduces an accuracy / computation tradeoff. However, this doesn’t solve the fundamental problem: things still don’t scale that well to high dimensions.
Another idea to reduce expense is to instead compare summary statistics. That is, find some function and accept if rather than if as before.
If we make this change, then the probability of returning in any one iteration is
Above we define and
The probability of returning anything in a given round is
There’s good news and bad news about making this change.
Good news:
Bad news:
Exponential family
Often, summary statistics are used even though they introduce errors. It seems to be a bit of a craft to find good summary statistics to speed things up without introducing too much error.
There is one appealing case where no error is introduced. Suppose is in the exponential family and are the sufficient statistics for that family. Then, we know that . This is very nice.
Slop factors
If you’re using a slop factor, you can instead accept according to
This introduces the same kind of computation / accuracy tradeoff.
Before getting to ABC-MCMC, suppose we just wanted for some reason to use Metropolis-Hastings to sample from the prior . If our proposal distribution was then we’d do
Algorithm: (Regular old Metropolis-Hastings)
Now suppose we want to sample from the posterior instead. We will suggest the following algorithm, with changes shown in blue.
Algorithm: (ABC-MCMC)
There is only one difference: After proposing , we generate a synthetic dataset. We can accept only if the synthetic dataset is the same as the observed one.
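As a sketch (reusing my toy binomial model from above, with a grid of coin biases standing in for a continuous latent; the prior, grid, and proposal are all my own illustrative choices):

```python
import numpy as np

def abc_mcmc(x_obs, n_steps, rng):
    """Sketch of ABC-MCMC: z is a coin bias on a grid with a uniform
    prior, x ~ Binomial(10, z). With a uniform prior and a (roughly)
    symmetric random-walk proposal, the MH ratio is 1, so the only
    extra requirement is that the synthetic data match the observed.
    Boundary handling of the proposal is simplified here."""
    grid = np.linspace(0.05, 0.95, 19)
    i = len(grid) // 2                 # start the chain at bias 0.5
    chain = []
    for _ in range(n_steps):
        j = int(np.clip(i + rng.integers(-1, 2), 0, len(grid) - 1))
        x_synth = rng.binomial(10, grid[j])   # synthetic dataset at proposal
        if x_synth == x_obs:                  # accept only on an exact match
            i = j
        chain.append(grid[i])
    return np.array(chain)

rng = np.random.default_rng(0)
chain = abc_mcmc(x_obs=8, n_steps=5000, rng=rng)
```

Since local proposals stay near previously accepted states, the chain spends most of its time at biases near 0.8, where synthetic datasets frequently match the observation.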
What this solves
There are two computational problems that the original ABC algorithm can encounter:
The MCMC-ABC algorithm seems intended to deal with the first problem: If the proposal distribution only yields nearby points, then once the typical set has been reached, the probability of proposing a “good” is much higher.
On the other hand, MCMC-ABC algorithm seems to do little to address the second problem.
Justification
Now, why is this a correct algorithm? Consider the augmented distribution
We now want to sample from using Metropolis-Hastings. We choose the proposal distribution
The acceptance probability will then be
Since the original state was accepted, it must be true that Thus, the above can be simplified into
Generalizations
If using summary statistics, you change into You can also add a slop factor if you want.
More generally still, we could instead use the augmented distribution
The proposal can be something interesting like a Multivariate Gaussian. The acceptance probability then instead becomes
Of course, this introduces some error.
Choosing
In practice, a good value of at the end will lead to very slow progress at the beginning. Best to slowly reduce over time. Seems like shooting for a low 1% acceptance rate at the end is a good compromise. A higher acceptance rate would mean that too much error was introduced.
(Thanks to Javier Burroni for helpful comments.)
Some people feel intimidated by the prospect of putting a “theorem” into their papers. They feel that their results aren’t “deep” enough to justify this. Instead, they give the derivation and result inline as part of the normal text.
Sometimes that’s best. However, the purpose of a theorem is not to shout to the world that you’ve discovered something incredible. Rather, theorems are best thought of as an “API for ideas”. There are two basic purposes:
To decide if you should create a theorem, ask if these goals will be advanced by doing so.
A thoughtful API makes software easier to use: The goal is to allow the user as much functionality as possible with as simple an interface as possible, while isolating implementation details. If you have a long chain of mathematical argument, you should choose what parts to write as theorems/lemmas in much the same way.
Many papers intermingle definitions, assumptions, proof arguments, and the final results. Have pity on the reader, and tell them in a single place what you are claiming, and under what assumptions. The “proof” section separates your evidence for your claim from the claim itself. Most readers want to understand your result before looking at the proof. Let them. Don’t make them hunt to figure out what your final result is.
Perhaps controversially, I suggest you should use the above reasoning even if you aren’t being fully mathematically rigorous. It’s still kinder to the reader to state your assumptions informally.
As an example of why it’s helpful to explicitly state your results, here’s an example from a seminal paper, so I’m sure the authors don’t mind. (Click on the image for a larger excerpt.)
This proof is well written. The problem is many small uncertainties that accumulate as you read it. If you try to understand exactly:
You will find that the proof “bleeds in” to the result itself. The convergence rate in 2.13 involves defined in 2.10, which itself involves other assumptions tracing backwards in the paper.
Of course, not every single claim needs to be written as a theorem/lemma/claim. If your result is simple to state and will only be used in a “one-off” manner, it may be clearer just to leave it in the text. That’s analogous to “inlining” a small function instead of creating another one.
I sometimes see a proof like this (for )
Take the quantity
Pulling out this becomes
Factoring the denominator, this is
Etc.
For some proofs, the text between each line just isn’t that helpful. To a large degree it makes things more confusing– without an equality between the lines, you need to read the words to understand how each formula is supposed to be related to the previous one. Consider this alternative version of the proof:
In some cases, this reveals the overall structure of the proof better than a bunch of lines with interspersed text. If explanation is needed, it can be better to put it at the end, e.g. as “where line 2 follows from [blah] and line 3 follows from [blah]”.
It can also be helpful to put these explanations inline, i.e. to use a proof like
Again, this is not the best solution for all (or even most) cases, but I think it should be used more often than it is.
Many proofs involve manipulating chains of inequalities. When doing so, it should be obvious at what steps extra looseness may have been introduced. Suppose you have some positive constants and with and you need to choose so as to ensure that .
People will often prove a result like the following:
Lemma: If , then .
Proof: Under the stated condition, we have that
That’s all correct, but doesn’t something feel slightly “magical” about the proof?
There are two problems: First, the proof reveals nothing about how you came up with the final answer. Second, the result leaves ambiguous if you have introduced additional looseness. Given the starting assumption, is it possible to prove a stronger bound?
I think the following lemma and proof are much better:
Lemma: if and only if .
Proof: The following conditions are all equivalent:
The proof shows exactly how you arrived at the final result, and shows that there is no extra looseness. It’s better not to “pull a rabbit out of a hat” in a proof if not necessary.
This is arguably one of the most basic possible proof techniques, but is bizarrely underused. I think there are two reasons why:
I use the (surprisingly controversial) convention of using a sans-serif font for random variables, rather than capital letters. I’m convinced this is the least-bad option for the machine learning literature, where many readers seem to find capital letter random variables distracting. It also allows you to distinguish matrix-valued random variables, though that isn’t used here.
If you zoom out, the big picture is more conceptual than mathematical. Statistics has a crazy, grasping ambition: it wants to tell you how to best use observations to make decisions. For example, you might look at how much it rained each day in the last week, and decide if you should bring an umbrella today. Statistics converts data into ideal actions.
Here, I’ll try to explain this view. I think it’s possible to be quite precise about this while using almost no statistics background and extremely minimal math.
The two important characters that we meet are decision rules and loss functions. Informally, a decision rule is just some procedure that looks at a dataset and makes a choice. A loss function– a basic concept from decision theory– is a precise description of “how bad” a given choice is.
Let’s say you’re confronted with a coin where the odds of heads and tails are not known ahead of time. Still, you are allowed to observe how the coin performs over a number of flips. After that, you’ll need to make a “decision” about the coin. Explicitly:
Simple enough, right? Remember, is the total number of heads after flips. If you do some math, you can work out a formula for : the probability of seeing exactly heads. For our purposes, it doesn’t really matter what that formula is, just that it exists. It’s known as a Binomial distribution, and so is sometimes written .
Here’s an example of what this looks like with and .
Naturally enough, if , with flips, you tend to see around heads. Here’s an example with . Here, the most common value is , close to .
After observing some coin flips, what do we do next? You can imagine facing various possible situations, but we will use the following:
Our situation: After observing n coin flips, you need to guess “heads” or “tails”, for one final coin flip.
Here, you just need to “decide” what the next flip will be. You could face many other decisions, e.g. guessing the true value of w.
Now, suppose that you have a friend who seems very skilled at predicting the final coinflip. What information would you need to reproduce your friend’s skill? All you need to know is if your friend predicts heads or tails for each possible value of k. We think of this as a decision rule, which we write abstractly as
This is just a function of one integer . You can think of this as just a list of what guess to make, for each possible observation, for example:
One simple decision rule would be to just predict heads if you saw more heads than tails, i.e. to use
The goal of statistics is to find the best decision rule, or at least a good one. The rule above is intuitive, but not necessarily the best. And… wait a second… what does it even mean for one decision rule to be “better” than another?
What happens after you make a prediction? Consider our running example. There are many possibilities, but here are two of the simplest:
We abstract these through a concept of a loss function. We write this as
.
The first input is the true (unknown) value , while second input is the “prediction” you made. We want the loss to be small.
Now, one point might be confusing. We defined our situation as predicting the next coinflip, but now is defined comparing to , not to a new coinflip. We do this because comparing to gives the most generality. To deal with our situation, just use the average amount of money you’d lose if the true value of the coin were . Take loss A. If you predict “tails”, you’ll be wrong with probability , while if you predict “heads”, you’ll be wrong with probability , and so lose dollars on average. This leads to the loss
For loss B, the situation is slightly different, in that you lose 10 times as much in the first case. Thus, the loss is
The definition of a loss function might feel circular– we minimize the loss because we defined the loss as the thing that we want to minimize. What’s going on? Well, a statistical problem has two separate parts: a model of the data generating process, and a loss function describing your goals. Neither of these things is determined by the other.
So, the loss function is part of the problem. Statistics wants to give you what you want. But you need to tell statistics what that is.
Despite the name, a “loss” can be negative– you still just want to minimize it. Machine learning, always optimistic, favors “reward” functions that are to be maximized. Plus ça change.
OK! So, we’ve got a model of our data generating process, and we specified some loss function. For a given w, we know the distribution over k, so… I guess… we want to minimize it?
Let’s define the risk to be the average loss that a decision rule gives for a particular value of w. That is,
Here, the second input to is a decision rule– a precise recipe of what decision to make in each possible situation.
Let’s visualize this. As a set of possible decision rules, I will just consider rules that predict “heads” if they’ve seen at least m heads, and “tails” otherwise:
With there are such decision rules, corresponding to , (always predict heads), (predict heads if you see at least one heads), up to (always predict tails). These are shown here:
These rules are intuitive: if you’d predict heads after observing 16 heads out of 21, it would be odd to predict tails after seeing 17 instead! It’s true that for losses and , you don’t lose anything by restricting to this kind of decision rule. However, there are losses for which these decision rules are not enough. (Imagine you lose more when your guess is correct.)
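For concreteness, here is a sketch of my own showing how the risk of one of these threshold rules can be computed, assuming loss A is the symmetric lose-a-dollar-when-wrong loss described above, with the binomial model and n = 21 flips:

```python
from math import comb

def binom_pmf(k, n, w):
    # P(exactly k heads in n flips of a coin with heads-probability w)
    return comb(n, k) * w**k * (1 - w)**(n - k)

def risk(w, m, n=21):
    """Average loss of the rule 'predict heads iff k >= m' under loss A:
    predicting heads loses (1 - w) on average, predicting tails loses w."""
    total = 0.0
    for k in range(n + 1):
        loss = (1 - w) if k >= m else w
        total += binom_pmf(k, n, w) * loss
    return total
```

Sanity checks: with m = 0 (always predict heads) the risk is exactly 1 - w, and with m = n + 1 (always predict tails) it is exactly w.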
With those decision rules in place, we can visualize what risk looks like. Here, I fix , and I sweep through all the decision rules (by changing ) with loss :
The value in the bottom plot is the total area of the green bars in the middle. You can do the same sweep for , which is pictured here:
We can visualize the risk in one figure with various and . Notice that the curves for and are exactly the same as we saw above.
Of course, we get a different risk depending on what loss function we use. If we repeat the whole process using loss we get the following:
What’s the point of risk? It tells us how good a decision rule is. We want a decision rule where risk is as low as possible. So you might ask, why not just choose the decision rule that minimizes ?
The answer is: because we don’t know ! How do we deal with that? Believe it or not, there isn’t a single well-agreed upon “right” thing to do, and so we meet two different schools of thought.
Bayesian statistics (don’t ask about the name) defines a “prior” distribution over . This says which values of we think are more and less likely. Then, we define the Bayesian risk as the average of over the prior:
This just amounts to “averaging” over all the risk curves, weighted by how “probable” we think is. Here’s the Bayes risk corresponding to with a uniform prior :
For reference, the risk curves are shown in light grey. Naturally enough, for each value of , the Bayes risk is just the average of the regular risks for each .
Here’s the risk corresponding to :
That’s all quite natural. But we haven’t really searched through all the decision rules, only the simple ones . For other losses, these simple ones might not be enough, and there are a lot of decision rules. (Even for this toy problem there are , since you can output heads or tails for each of , , …, .)
Fortunately, we can get a formula for the best decision rule for any loss. First, re-write the Bayes risk as
This is a sum over where each term only depends on a single value . So, we just need to make the best decision for each individual value of separately. This leads to the Bayes-optimal decision rule of
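In code, this per-observation minimization might look like the following sketch (my own; a uniform grid over w stands in for the continuous uniform prior, and loss A is the symmetric lose-a-dollar-when-wrong loss):

```python
from math import comb

def binom_pmf(k, n, w):
    return comb(n, k) * w**k * (1 - w)**(n - k)

def bayes_rule(loss, n=21, grid=100):
    """For each observed k, pick the prediction minimizing posterior
    expected loss under a uniform grid prior over w. Returns a dict
    k -> 'H' or 'T'. loss(w, guess) is the average loss at true w."""
    ws = [(i + 0.5) / grid for i in range(grid)]   # grid prior on (0,1)
    rule = {}
    for k in range(n + 1):
        weights = [binom_pmf(k, n, w) for w in ws]  # unnormalized posterior
        exp_loss = {g: sum(p * loss(w, g) for p, w in zip(weights, ws))
                    for g in ('H', 'T')}
        rule[k] = min(exp_loss, key=exp_loss.get)
    return rule

loss_a = lambda w, g: (1 - w) if g == 'H' else w   # lose $1 when wrong
rule_a = bayes_rule(loss_a)
```

Under loss A this reduces to predicting heads exactly when the posterior mean of w exceeds 1/2, so for n = 21 the rule flips from tails to heads at k = 11.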
With a uniform prior , here’s the optimal Bayesian decision rules with loss :
And here it is for loss :
Look at that! Just mechanically plugging the loss function into the Bayes-optimal decision rule naturally gives us the behavior we expected– for , the rule is very hesitant to predict tails, since the loss is so high if you’re wrong. (Again, these happen to fit in the parameterized family defined above, but we didn’t use this assumption in deriving the rules.)
The nice thing about the Bayesian approach is that it’s so systematic. No creativity or cleverness is required. If you specify the data generating process (), the loss function (), and the prior distribution (), then the optimal Bayesian decision rule is determined.
There are some disadvantages as well:
So, if you have little idea of your prior, and/or you’re only making a single decision, you might not find much comfort in the Bayesian guarantee.
Some argue that these aren’t really disadvantages. Prediction is impossible without some assumptions, and priors are upfront and explicit. And no method can be optimal for every single day. If you just can’t handle that the risk isn’t optimal for each individual trial, then… maybe go for a walk or something?
Frequentist statistics (Why “frequentist”? Don’t think about it!) often takes a different path. Instead of defining a prior over w, let’s take a worst-case view. Let’s define the worst-case risk as
Then, we’d like to choose an estimator to minimize the worst-case risk. We call this a “minimax” estimator since we minimize the max (worst-case) risk.
Let’s visualize this with our running example and :
As you can see, for each individual decision rule, it searches over the space of parameters to find the worst case. We can visualize the risk with as:
What’s the corresponding minimax decision rule? This is a little tricky to deal with– to see why, let’s expand the worst-case risk a bit more:
Unfortunately, we can’t interchange the max and the sum, like we did with the integral and the sum for Bayesian decision rules. This makes it more difficult to write down a closed-form solution. At least in this case, we can still find the best decision rule by searching over our simple rules . But be very mindful that this doesn’t work in general!
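Here is a sketch of that restricted search (my own illustration; the sup over continuous w is approximated by a grid, and loss A is the symmetric lose-a-dollar-when-wrong loss):

```python
from math import comb

def binom_pmf(k, n, w):
    return comb(n, k) * w**k * (1 - w)**(n - k)

def risk(w, m, loss, n=21):
    # Average loss of the threshold rule 'heads iff k >= m' at true bias w.
    return sum(binom_pmf(k, n, w) * loss(w, 'H' if k >= m else 'T')
               for k in range(n + 1))

def minimax_m(loss, n=21, grid=101):
    """Among threshold rules m = 0..n+1, pick the one whose worst-case
    risk over a grid of w values is smallest. This restriction is only
    safe because, for these losses, threshold rules lose nothing."""
    ws = [i / (grid - 1) for i in range(grid)]
    worst = {m: max(risk(w, m, loss) for w in ws) for m in range(n + 2)}
    return min(worst, key=worst.get)

loss_a = lambda w, g: (1 - w) if g == 'H' else w
m_star = minimax_m(loss_a)
```

Under loss A, every rule has risk exactly 1/2 at w = 1/2, so the minimax value is 1/2, achieved by the symmetric threshold rule.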
For we end up with the same decision rule as when minimizing Bayesian risk:
For , meanwhile, we get something slightly different:
This is even more conservative than the Bayesian decision rule: , while . That is, the Bayesian method predicts heads when it observes 2 or more, while the minimax rule predicts heads if it observes even one. This makes sense intuitively: The minimax decision rule proceeds as if the “worst” w (a small number) is fixed, whereas the Bayesian decision rule less pessimistically averages over all w.
Which decision rule will work better? Well, if w happens to be near the worst-case value, the minimax rule will be better. If you repeat the whole experiment many times with w drawn from the prior, the Bayesian decision rule will be.
If you do the experiment at some w far from the worst-case value, or you repeat the experiment many times with w drawn from a distribution different from your prior, then you have no guarantees.
Neither approach is “better” than the other, they just provide different guarantees. You need to choose what guarantee you want. (You can kind of think of this as a “meta” loss.)
For real problems, the data generating process is usually much more complex than a Binomial. The “decision” is usually more complex than predicting a coinflip– the most common decision is making a guess for the value of . Even calculating for fixed and is often computationally hard, since you need to integrate over all possible observations. In general, finding exact Bayes or minimax optimal decision rules is a huge computational challenge, and at least some degree of approximation is required. That’s the game, that’s why statistics is hard. Still, even for complex situations the rules are the same– you win by finding a decision rule with low risk.
Outline:
Here’s three different views of the algorithm for a one-dimensional problem, interpolating between VI-like algorithms and MCMC-like algorithms as β goes from 0 (VI) to 1 (MCMC).
(Admittedly, this might not make that much sense at this point.)
The goal of “inference” is to be able to evaluate expectations with respect to some “target” distribution . Variational inference (VI) converts this problem into the minimization of the KL-divergence for some simple class of distributions parameterized by . For example, if is a mixture of two Gaussians (in red), and is a single Gaussian (in blue), the VI optimization in one dimension would arrive at the solution below.
Given that same target distribution , Markov chain Monte Carlo creates a random walk over . The random walk is carefully constructed so that if you run it a long time, the probability it will end up in any given state is proportional to . You can picture it as follows.
In short, VI is only an approximate algorithm, but MCMC can be very slow. In practice, the difference can be enormous– MCMC may require many orders of magnitude more time just to equal the performance of VI. This presents a user with an awkward situation: if one chooses the best algorithm for each time horizon, there’s a “gap” between when VI finishes and when MCMC becomes better. Informally, you can get performance that looks like this:
Intuitively, it seems like something better should be achievable at those intermediate times.
Very roughly speaking, you can define a random walk over the space of variational distributions. Then, you trade off between how random the walk is and how random the distributions are. You arrive at something like this:
Put another way, both VI and MCMC seek “high probability” regions in , but with different coverage strategies:
It is therefore natural to define a random walk over , where we trade off between “how random” the walk is and “how much” high-entropy distributions are favored.
Yes! Or, at least, sort of. To define a bit of notation, we start with a fixed variational family and a target distribution . Now, we want to define a distribution (so we can do a random walk) so that
.
The natural goal would be to minimize the KL-divergence . That’s difficult since is defined by marginalizing out– you can’t evaluate it. What you can do is set up two upper-bounds on this quantity.
The first bound is the conditional divergence:
The second bound is the joint divergence. You need to augment with some distribution and then you have the bound
Since these are both upper-bounds, a convex combination of them will also be. Thus, the goal is to find the distribution that minimizes for any in the [0,1] interval.
First, note that depends on the choice of . You get a valid upper-bound for any choice, but the tightness changes. The paper uses where is a normalizing constant. Here, you can think of as something akin to a “base measure”. is restricted to be constant over . (This isn’t a terrible restriction– it essentially means that if were a prior for , it wouldn’t favor any point.)
Taking that choice of the solution turns out to be:
Furthermore, the actual value of the divergence bound at the solution turns out to be just the normalizing constant up to a constant, i.e.
.
To do anything concrete, you need to look at a specific VI algorithm and a specific MCMC algorithm. The paper uses
where is noise from a standard Gaussian and is a step-size.
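As a reminder of how the MCMC side looks, here is a generic sketch of unadjusted Langevin dynamics (my own illustration, not the paper's exact update): each step moves along the gradient of the log-density and adds Gaussian noise scaled by the square root of the step-size.

```python
import numpy as np

def langevin(grad_log_p, x0, step, n_steps, rng):
    """Unadjusted Langevin dynamics in one dimension:
    x <- x + (step/2) * grad log p(x) + sqrt(step) * standard-normal noise."""
    x = x0
    xs = []
    for _ in range(n_steps):
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * rng.standard_normal()
        xs.append(x)
    return np.array(xs)

# Example target: a standard Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
xs = langevin(lambda x: -x, x0=0.0, step=0.1, n_steps=20000, rng=rng)
```

Run long enough with a small step-size, the samples have roughly the target's mean and variance (with a small, step-size-dependent bias, since no Metropolis correction is applied).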
To get the novel algorithm in this paper, all that really needs to be done is to apply Langevin dynamics to the distribution derived above. Then, after a re-scaling, this becomes the new hybrid algorithm
Here, is the entropy of . This clearly becomes the previous VI algorithm in the limit of . It also essentially becomes Langevin with . That’s because the distribution (not yet defined!) will prefer where is highly concentrated. Thus, only the mean parameters of matter, and sampling becomes equivalent to just sampling .
Yes. First, the experiments below use a diagonal Gaussian for with and . Second, the gradient of the objective involves a KL-divergence. Exactly computing this is intractable, but it can be approximated with standard tricks from stochastic VI, namely data subsampling and the “reparameterization trick”. Third, needs to be chosen. The experiments below use the (improper) distribution where is a universal constant chosen for each to minimize the divergence bound when is a standard Gaussian. (If you — like me — find this displeasing, see below.)
Here’s a couple 2-D examples sampling from a “doughnut” distribution and a “three peaks” mixture distribution. Here, the samples are visualized by putting a curve at one standard deviation around the mean. Notice it smoothly becomes more “MCMC-like” as increases.
Sure, but of course in more than 2 dimensions it’s hard to show samples. Here are some results from sampling a logistic regression model on the classic ionosphere dataset. As a comparison, I implemented the same model in Stan and ran it for a huge amount of time to generate “presumed correct” samples. I then projected all samples onto the first two principal components.
(Note: technically what’s shown here is a sample being drawn from each sampled )
The top row shows the results after 10^{4} iterations, the middle row after 10^{5}, and the bottom row after 10^{6}. You can roughly see that for small time horizons you are better off using a lower value of , but for higher time horizons you should use a larger value.
Here, you need to compare the error each value of creates at each time horizon. This is made difficult by the fact that you also need to select a step-size and the best step-size changes depending on the time and . To be as fair as possible, I ran 100 experiments with a range of step-sizes, and averaged the performance. Then, for each value of and each time horizon, the results are shown with the best timestep. (Actually, this same procedure was used to generate the previous plots of samples as well.)
The above plot shows the error (measured in MMD) on the y axis against time on the x-axis. Note that both axes are logarithmic. There are often several orders of magnitude of time horizons where an intermediate algorithm performs better than pure VI (β=0) or pure MCMC (β=1).
The most unsatisfying thing about this approach is the need to choose . This is a bit disturbing, since this is not an object that “exists” in either the pure MCMC or pure VI worlds. On the other hand, there is a strong argument that it needs to exist here. If you carefully observe above, you’ll notice that it depends on the particular parameterization of . So, e.g. if we “stretched out” part of the space of this would change the marginal density . That would be truly disturbing, but if is transformed in the opposite way, it would counteract that. So, needs to exist to reflect how we’ve parameterized .
On the other hand, simply picking a single static distribution is pretty simplistic. (Recall, was defined in terms of above) It would be natural to try to adjust this distribution during inference to tighten the bound . Using the fact that you can show that it’s possible to find derivatives of with respect to the parameters of online, and thus tighten the bound while the algorithm is running. (I didn’t want to do this in this paper since neither VI nor MCMC do this, and it complicates the interpretation of the experiments.)
Finally, the main question is if this can be extended to other pairs of VI / MCMC algorithms. I actually first derived this algorithm by looking at simple discrete graphical models, e.g. Ising models. There, you can use the algorithms:
You do in fact get a useful hybrid algorithm in the middle. However, the unfortunate reality is that both of the endpoints are considered pretty bad algorithms, so it’s hard to get too excited about the interpolation.
Finally, do note that there are other ideas out there for combining MCMC and VI. However, these usually fall into the camps of “putting MCMC inside of VI” [7] [8] [9] or “putting VI inside of MCMC” [10], rather than a straight interpolation of the two.
Anyway, while macros may be the best LyX feature you aren't using, I recently discovered another couple of excellent features I wasn't familiar with even after years of use, so I thought I'd publicize them. Specifically, I've always hated the process of including explanatory figures in LyX. Exporting plots from an experiment is hard to streamline, but when trying to create explanatory graphics, the process has always been painful.
Before, I thought the options were:
1) Create the graph in an external program. This is fine, of course, but is quite inconvenient when you want to go back and revise it. The external program usually saves its own format, so you have to find the original file, open it again, revise it, export it to .pdf (or whatever), then go back to the document in LyX and recompile it. Then, when you don't like the way it looks, you have to repeat the whole process. It works, but it's not efficient, since you can't edit the content in place. (Which is the whole point of using a WYSIWYG editor in the first place: remove the need to think about anything but content.)
2) Write the graphics directly in LyX in a language like TikZ. This is more "in-place" in that you don't have external files to find and manipulate. However, I find TikZ quite painful to get right, with many re-compilations necessary. And since the TikZ code lives inside the document, each tweak requires a full recompilation of the document. This is hilariously slow when making something like Beamer slides. Further, this totally violates the whole point of WYSIWYG, since you're looking at code rather than the output.
There are better ways! I’ve wasted countless hours not being aware of these.
First, LyX has a beautiful feature of “preview boxes”. Take the following very simple TikZ code, which just draws a square:
\begin{tikzpicture}
\draw[red] (0,0) -- (0,1);
\draw[green] (0,1) -- (1,1);
\draw[red] (1,1) -- (1,0);
\draw[blue] (1,0) -- (0,0);
\end{tikzpicture}
Typically, I’d include this in LyX files by inserting a raw tex box:
And then putting the TikZ code inside:
This is OK, but has the disadvantages from (2) above. The code can be huge; if I have a lot of graphics, I can't tell what corresponds to what, and I have to do a (slooooooow) recompile of the whole document to see what it looks like.
However, if you just add a “preview box”:
You get something that looks like this:
So far, so pointless, right? However, when you deselect, LyX shows the graphic in-place:
You can then click on it to expand the code. This solves most of the problems: You can see what you are doing at a glance, and you don’t need to recompile the whole document to do it.
Newer versions of LyX also natively support SVG files. You first have to create the file externally using something like Inkscape, which itself saves directly to the SVG format. Then, you can include it in LyX by doing Insert->File->External Material:
And then selecting the SVG file:
Again, LyX will show it in-place and (if LyX is configured correctly…) correctly output vector graphics in the final document.
What’s even better is that LyX can automatically open the file in the external editor for you. If you right click, you can “edit externally”:
Then the external editor will automatically open the file. You can then save it with a keystroke and go back to LyX. No hunting around for the file, no cycles of exporting to other formats, and you see exactly what the final output will look like at all stages. You can really tell that LyX was created by people using it themselves.
This last feature is described well enough already, but it helps a lot in big documents: you can click on a point in the generated .pdf and have LyX automatically sync the editor to the corresponding point in the source file.
This can work OK. Let's look at an example of trying to calculate the derivative of a function, using a range of different values of $\epsilon$:
What's happening? Well, the result is true mathematically in the limit that $\epsilon$ is small, so it's natural to get errors for large $\epsilon$. However, with very small $\epsilon$ you run into trouble because floating-point arithmetic can only represent finite precision. Let's try again with a smaller value of
That's somewhat concerning. We can still get a nearly correct value, but only for a limited range of step-sizes. This is very well-known in numerical analysis, and a common solution is to use two-sided differences, i.e. to estimate

$f'(x) \approx \frac{f(x+\epsilon) - f(x-\epsilon)}{2\epsilon}.$
If we try that, we indeed get better results:
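As a sketch of the comparison (the test function $f(x) = e^x$ and the point $x = 1$ are stand-ins, not necessarily those used in the post):

```python
import numpy as np

def one_sided(f, x, eps):
    """Estimate f'(x) as (f(x+eps) - f(x)) / eps."""
    return (f(x + eps) - f(x)) / eps

def two_sided(f, x, eps):
    """Estimate f'(x) as (f(x+eps) - f(x-eps)) / (2*eps)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f, x = np.exp, 1.0
true_deriv = np.exp(1.0)
for eps in [1e-2, 1e-4, 1e-6]:
    e1 = abs(one_sided(f, x, eps) - true_deriv)
    e2 = abs(two_sided(f, x, eps) - true_deriv)
    print(f"eps={eps:.0e}  one-sided error={e1:.2e}  two-sided error={e2:.2e}")
```

The one-sided error shrinks linearly in $\epsilon$, while the two-sided error shrinks quadratically, which is why the two-sided scheme tolerates larger step-sizes.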
What's happening here? Basically, we are using more information about the function to make a higher-order approximation, so we can mathematically get away with using a larger value of $\epsilon$. This in turn is helpful to avoid the numerical precision demons.
Great! But let’s go deeper. If we use , we get:
Huh. One-sided differences totally fall apart, and even the two-sided differences seem to be running into trouble. But never fear, you can use "four-sided differences"!
Then, we get what you might expect:
But what if we go deeper, with ?
Or maybe we should go even deeper, with ?
Even the four-sided differences have failed us. Now, you might take the lesson here that you shouldn't be using numerical differences (and I sort of agree). Those of a certain temperament, on the other hand, would say instead that what we need is power. OK then, how do six-sided differences sound to you? Sound good?
This is still tough, but there is at least some range of epsilon where you can calculate a reasonable derivative. There is, of course, a never-ending sequence of these higher-order derivative approximations. There's even a calculator for all your bespoke finite-difference stencil needs.
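A six-point central-difference stencil can be sketched as follows. The coefficients are the standard sixth-order ones; the test function is again just a stand-in:

```python
import numpy as np

def six_point_diff(f, x, eps):
    """Sixth-order central difference: six evaluations of f around x.

    Uses the standard stencil with offsets (-3, -2, -1, 1, 2, 3) and
    coefficients (-1/60, 3/20, -3/4, 3/4, -3/20, 1/60); the truncation
    error is O(eps^6)."""
    offsets = np.array([-3, -2, -1, 1, 2, 3])
    coeffs = np.array([-1/60, 3/20, -3/4, 3/4, -3/20, 1/60])
    return np.sum(coeffs * f(x + offsets * eps)) / eps

# Even a fairly coarse eps gives a very accurate derivative of sin at 0.
print(six_point_diff(np.sin, 0.0, 0.1))  # close to cos(0) = 1
```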
Now, you might think that the real solution here is to use automatic differentiation, and you’re mostly right. It takes more computation to sample the function at more points, and no number of samples will fundamentally stop the numerical demons from destroying your result.
However, there still remain cases where it's worthwhile to manually compute a derivative. More importantly, perhaps, when you're implementing an automatic differentiation tool, you still need to test that it is actually correct! Here, numerical differences will probably forever remain useful. So, at least for building a test suite for autodiff code, it certainly makes sense to use these higher-order approximations.
(I originally intended to leave this as a comment on Tim Viera’s post on testing gradient implementations, but this kinda got out of control.)
Act 1: Magical Monkeys
Two monkeys, Alfred (A) and Betty (B), live in a parallel universe with two kinds of blocks, green (G) and yellow (Y). Alfred likes green blocks, and Betty prefers the yellow blocks. One day, a Wizard decides to give one of the monkeys the magical power to send one block over to our universe each day.
The Wizard chooses the magical monkey by rolling a fair four-sided die. He casts a spell on Alfred if the outcome is 1, and on Betty if the outcome is 2, 3, or 4. That is, the Wizard chooses Alfred with probability 1/4 and Betty with probability 3/4. Both Alfred and Betty send their favorite colored block with probability
After the Wizard has chosen, we see the first magical block, and it is green. Our problem is: What is the probability that Alfred is the magical monkey?
Intuitively speaking, we have two somewhat contradictory pieces of information. We know that the Wizard chooses Betty more often. But green is Alfred's preferred color. Given the probabilities above, is Alfred or Betty more probable?
First, we can write down all the probabilities. These are
Now, the quantity we are interested in is $p(A \mid G)$, the probability that Alfred is magical given the green block. We can mechanically apply Bayes' rule to obtain

$p(A \mid G) = \frac{p(G \mid A)\, p(A)}{p(G)}.$
Similarly, we can calculate that
But, of course, we know that one of the monkeys is magical, so these quantities must sum to one. Thus (since they both involve the marginal probability of a green block, which we haven't explicitly calculated) we can normalize to obtain that
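As an illustration of the normalization trick: the 1/4–3/4 prior follows from the four-sided die, but the favorite-block probability q = 2/3 below is a made-up placeholder, since the post's actual value isn't shown here.

```python
from fractions import Fraction

# Prior: the Wizard picks Alfred w.p. 1/4, Betty w.p. 3/4 (from the die).
prior = {"Alfred": Fraction(1, 4), "Betty": Fraction(3, 4)}

# Hypothetical: each monkey sends its favorite block w.p. q = 2/3.
q = Fraction(2, 3)

# Likelihood of a green block: green is Alfred's favorite, not Betty's.
like_green = {"Alfred": q, "Betty": 1 - q}

# Unnormalized posteriors, then divide by their sum (the marginal p(G)).
unnorm = {m: prior[m] * like_green[m] for m in prior}
Z = sum(unnorm.values())
posterior = {m: p / Z for m, p in unnorm.items()}
print(posterior)  # with q = 2/3: Alfred 2/5, Betty 3/5
```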
Act 2: Magical Monkeys on Multiple Days
We wait around two more days, and two more blocks appear, both of which are yellow. Assume that the monkeys make an independent choice each day to send their favorite or less-favorite block. Now, what is the probability that Alfred is the magical monkey, given the full sequence green, yellow, yellow?
Intuitively speaking, what do we expect? Betty is more likely to be chosen, and 2/3 of the blocks we’ve seen are suggestive of Betty (since she prefers yellow).
Now, we can mechanically calculate the probabilities. This is just like before, except we use the fact that the blocks seen on each day are conditionally independent given a particular monkey.
Again, we know that these two probabilities sum to one. Thus, we can normalize and get that
So now, Betty looks more likely by far.
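The multi-day calculation can be sketched the same way. As before, the favorite-block probability q = 2/3 is a hypothetical placeholder; the structure (a product of per-day likelihoods, then a normalization) is the point.

```python
from fractions import Fraction

prior = {"Alfred": Fraction(1, 4), "Betty": Fraction(3, 4)}
q = Fraction(2, 3)  # hypothetical favorite-block probability

def like(monkey, block):
    """Per-day likelihood; Alfred favors green, Betty favors yellow."""
    favorite = "G" if monkey == "Alfred" else "Y"
    return q if block == favorite else 1 - q

blocks = ["G", "Y", "Y"]  # the observed sequence of blocks
unnorm = {}
for m in prior:
    p = prior[m]
    for b in blocks:  # days are conditionally independent given the monkey
        p *= like(m, b)
    unnorm[m] = p
Z = sum(unnorm.values())
posterior = {m: p / Z for m, p in unnorm.items()}
print(posterior)  # with q = 2/3: Alfred 1/7, Betty 6/7
```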
Act 3: Magical Monkeys and the Weather
Now, suppose the Wizard gives us some additional information. The magical monkey (whilst relaxing over in the other universe) is able to perceive our weather, which is either clear or rainy. Both monkeys prefer clear weather. When the weather is clear, they send their preferred block with probability , while if the weather is rainy, they angrily send their preferred block with probability . That is, we have
Along with seeing the previous sequence of blocks (green, yellow, yellow), we observed the weather on each day. Now, what is the probability that Alfred is the magical monkey?
We ask the Wizard what the distribution over rainy and clear weather is. The Wizard haughtily responds that this is irrelevant, but does confirm that the weather is independent of which monkey was made magical.
Can we do anything without knowing the distribution over the weather? Can we calculate the probability that Alfred is the magical monkey?
Let’s give it a try. We can apply Bayes’ equation to get that
Now, since the weather is independent of the monkey, we know that the weather term in the likelihood does not depend on which monkey is magical, so it factors out.
Thus, we have that
Through the same logic we can calculate that
Again, we know that these probabilities need to sum to one. Since the weather factor is the same in both, we can normalize it out. Thus, we get that
Again, it looks like Alfred was the more likely monkey. And– oh wait– we somehow got away with not knowing the distribution over the weather…
Act 4: Predicting the next block
Now, suppose that after seeing the previous sequences (namely, the blocks and the weather), on the fourth day we find that it has rained. What is the probability that we will get a green block on the fourth day? Mathematically, we want to calculate
How can we go about this? Well, if we knew that the magical monkey was Alfred (which we do not!), it would be easy to calculate. We would just have
and similarly if the magical monkey were Betty. Now, we don't know which monkey is magical, but we know the probability that each monkey is magical given the available information: we just calculated these in the previous section! So, we can factorize the distribution of interest as
So we are slightly less likely than even to see a green block on the next day.
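The model-averaging step can be sketched numerically. All the probabilities below are hypothetical placeholders (the post's actual values aren't recoverable here); the structure is the sum over monkeys of prediction times posterior:

```python
# Hypothetical posterior over monkeys given the data so far.
post = {"Alfred": 0.6, "Betty": 0.4}

# Hypothetical probability of a green block on a rainy day, per monkey.
p_green = {"Alfred": 0.65, "Betty": 0.2}

# Model averaging: p(green | data) = sum over monkeys of
# p(green | monkey, rain) * p(monkey | data).
p_next_green = sum(p_green[m] * post[m] for m in post)
print(p_next_green)
```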
Epilogue: That’s Bayesian Inference
This is how Bayesian inference works (in the simplest possible setting). In general Bayesian inference, you have:
Bayesian inference essentially proceeds in two steps:
where the index runs over the different days of observation.
In the general case, the set of possible models is much larger (typically infinite) and more complex computational methods need to be used to integrate over it. (Commonly Markov chain Monte Carlo or variational methods). Also, of course, we don’t typically have a Wizard telling us exactly what the prior distribution over the models is, meaning one must make assumptions, or try to “learn” the prior as in, say, empirical Bayesian methods.
1) Sample complexity, convergence
How much predictive power is the algorithm able to extract from a given number of examples?
All else being equal, if algorithm A with N examples behaves the same as algorithm B with 2N examples, we would prefer algorithm A. This can vary in importance depending on how scarce or expensive data is.
2) Speed
How quickly does the algorithm run?
Obviously, a tool is more useful if it is faster. However, if that faster algorithm comes at the expense of sample complexity, one would need to measure the expense of running longer against the expense of gathering more data.
3) Guarantees
Does the algorithm just have good performance (along whatever dimension) in practice, or is it proven? Is it always good, or only in certain situations? How predictable is the performance?
When SVMs showed up, it looked like good practical algorithms came from good theory. These days, it seems clear that powerful intuition is also an excellent source of practical algorithms (deep learning).
Perhaps theory will some day catch up, and the algorithm with the best bounds will coincide with the best practical performance. However, this is not always the case today: we often face a trade-off between algorithms with good theoretical guarantees and algorithms with good empirical performance. Some examples are 1) upper-confidence bounds versus Thompson sampling in bandit algorithms [Update June 2018: Thompson sampling has largely caught up!], 2) running a convex optimization algorithm with a theoretically derived Lipschitz constant versus a smaller one that still seems to work, and 3) doing model selection via VC-dimension generalization bounds versus K-fold cross-validation.
4) Memory usage
How much space does the algorithm need?
I work with a lot of compute-heavy applications where this is almost a wall: we don’t care about memory usage until we run out of it, after which we care a great deal. With simpler algorithms and larger datasets, this is often more nuanced, with concerns about different cache sizes.
5) Handholding
Can it be used out of the box, or does it require an expert to “tune” it for best performance?
A classic example of this is stochastic gradient descent. In principle, for a wide range of inputs, convergence is guaranteed by iteratively setting $x_{t+1} = x_t - \alpha_t g_t$, where $g_t$ is a noisy unbiased estimate of the gradient at $x_t$ and $(\alpha_t)$ is some sequence of step-sizes that obeys the Robbins-Monro conditions [1]. However, in practice, the difference between two step-size sequences that both satisfy these conditions can be enormous, and finding good constants is a bit of a dark art.
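As a sketch of this step-size sensitivity (the quadratic objective and the constants below are illustrative, not from the post):

```python
import numpy as np

def sgd(c, steps=10_000, seed=0):
    """SGD on f(x) = x^2 / 2 with noisy gradients and steps a_t = c / t.

    The sequence c/t satisfies the Robbins-Monro conditions
    (sum of a_t diverges, sum of a_t^2 converges) for any c > 0,
    but the constant c still matters enormously in practice."""
    rng = np.random.default_rng(seed)
    x = 10.0
    for t in range(1, steps + 1):
        g = x + rng.normal()  # unbiased estimate of the gradient f'(x) = x
        x -= (c / t) * g
    return x

for c in [0.01, 0.1, 1.0]:
    print(f"c = {c:>5}: final x = {sgd(c):.4f}")
```

With c = 1.0 the iterate ends up very near the optimum at 0, while with c = 0.01 it has barely moved from its starting point after ten thousand steps, despite both sequences having the same asymptotic guarantee.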
[1] ($\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$.)
6) Implementability
How simple is the algorithm? How easy is it to implement?
This is quite complex and situational. These days, I’d consider an algorithm that consists of a few moderate-dimensional matrix multiplications or singular value decompositions “simple”. However, that’s due to the huge effort that’s been devoted to designing reliable matrix algorithms, and the ubiquity of easy to use libraries.
7) Amenability to parallelization
Does the algorithm lend itself to parallelization? (And what type of parallelization?)
And if it does, under what model? Map-reduce, GPU, MPI, and OpenMP all have different properties.
8) “Anytime-ness”
Can the algorithm be implemented in an anytime manner?
That is, does the algorithm continuously return a current "best guess" of the answer that is refined over time in a sensible manner? This can help diagnose problems before the full algorithm has finished running, which is enormously useful when debugging large systems.
Note this is distinct from being an online algorithm, which I’m not mentioning here, since it’s a mix of speed and memory properties.
9) Transparency, interpretability
Can the final result be understood? Does it give insight into how predictions are being made?
Galit Shmueli argues that "explanatory power" and "predictive power" are different dimensions. However, there are several ways in which interpretability is important even when prediction is the final goal. First, the insight might convince a decision maker that the machine learning system is reliable. Second, it can be vital in practice for generalization. The world is rarely independent and identically distributed. If a domain expert can understand the predictive mechanism, they may be able to assess whether it will still hold in the future, or whether it captures something true only in the training period. Third, understanding what the predictor is doing often yields ways to improve it. For example, the adjacent visualization of the outputs of a decision tree in two dimensions suggests the need for non-axis-aligned splits.
10) Generality
What class of problems can the algorithm address?
All else being equal, we prefer an algorithm that can, say, optimize all convex losses over one that can only fit the logistic loss. However, this often comes at a cost: a more general-purpose algorithm cannot exploit the extra structure present in a specialized problem (or, at least, has more difficulty doing so). It's instructive how many different methods are used by LIBLINEAR depending on the particular loss and regularization constant.
11) Extendability
Does the algorithm have lots of generalizations and variants that are likely to be interesting?
More of an issue when reviewing a paper than deciding what is the final best algorithm to use once all the dust has settled.
12) Insight
Does the algorithm itself (as opposed to its results) convey insight into the problem?
Gradient descent for maximum likelihood learning of an exponential family is a good example, as it reveals the moment-matching conditions. Insight of this type, however, doesn’t suggest that one should actually run the algorithm.
13) Model robustness
How does the performance of the algorithm hold up to violations of its modeling assumptions?
Suppose we are fitting a simple line to some 2-D data. Obviously, all else being equal, we would prefer an algorithm that still does something reasonable when the expected value of y is not actually linear in x. The example I always harp on here is the pseudolikelihood. The original paper pointed out that this will have somewhat worse sample complexity than the full likelihood, and (much!) better time complexity. Many papers seem to attribute the bad performance of the pseudolikelihood in practice to this sample complexity, when the true cause is that the likelihood does something reasonable (minimizes KL divergence) under model mis-specification, but the pseudolikelihood does not.
—
One thing I often ponder is how much improvement in one dimension is enough to "matter". Personally, I would generally consider even a small constant-factor (say 5-10%) improvement in sample complexity quite important. Meanwhile, it would be rare to get excited about even, say, a factor of two improvement in running time.
What does this reflect? Firstly, generalization is the fundamental goal of data analysis, and we are likely to be willing to compromise most things if we can really predict better. Second, we instinctively distrust running times. Theory offers few tools for understanding constant factors, as these are highly architecture and implementation dependent. In principle, if one could be completely convincing that a factor of two improvement was truly there, I think this probably would be significant. (This might be true, say, if all algorithms are bottlenecked by a few matrix multiplications, and a new algorithm provably reduces the number needed.) However, this is rare.
In some places, I think constant-factor skepticism can lead us astray. In reality, a factor of 30 improvement in speed is probably better than reducing a complexity of O(n log n) to O(n). (Calculate the n such that log n = 30.) This is particularly true when the lower-complexity algorithm has higher constant factors. As an example, I've always found the O(n log n) algorithm for projection onto the ℓ₁ ball to be faster than the O(n) algorithm in practice.
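For concreteness, here is a sketch of the sort-based O(n log n) projection onto the ℓ₁ ball (the standard algorithm of Duchi et al.; the ℓ₁ ball is assumed from context, and this is an illustration rather than the post's own code):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {x : ||x||_1 <= radius}.

    Uses the O(n log n) sort-based algorithm: sort magnitudes, find
    the soft-threshold level theta, then shrink toward zero."""
    if np.abs(v).sum() <= radius:
        return v.copy()  # already inside the ball
    u = np.sort(np.abs(v))[::-1]  # magnitudes in descending order
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - radius) / ks > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The dominant cost is the sort; the O(n) variant replaces it with a median-of-medians-style pivot search, which tends to have larger constants.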
—
Given that there are so many dimensions, can a new algorithm really improve on all simultaneously? It seems rare to even improve on a single dimension without a corresponding cost somewhere else. There will often be a range of techniques that are Pareto optimal, depending on one’s priorities. Understanding this range is what makes having an expert around so important.
Ideally, as a community, we would be able to provide a “consumer” of machine learning an easy way to find the algorithm for them. Or, at least, we might be able to point them in the direction of possible algorithms. One admirable attempt along this line is the following table from The Elements of Statistical Learning:
(Incidentally, notice how many of the desiderata do not overlap with mine.)
Some other situations show a useful contrast. For example, take this decision tree for choosing an algorithm for unconstrained optimization, due to Dianne O’Leary:
Essentially, this amounts to the principle that one should use the least general algorithm available, so that it can exploit as much structure of the problem as possible. Though one can quibble with the details (pity the subgradient methods) it at least comes close to giving the “right” algorithm in each situation.
This doesn't seem possible with machine learning, since there doesn't exist a single hierarchy of problems. Rather, ML problems are a tangle of model specification, computational and architectural requirements, implementation constraints, user risk-tolerances, and so on. It won't be easy to automate away the experts. (Even ignoring the possible misalignment of incentives when a field is given the job of automating itself.)
(This post incorporates comments from Aditya Menon and Cheng Soon Ong.)