# Conditional Gradient Method

Take some convex optimization problem:

$$
\begin{array}{ll}
\text{minimize} & f(x) \\
\text{subject to} & x \in C.
\end{array}
$$

Here, $C$ is some convex set. If $C$ is hard to characterize, this optimization problem might be hard to handle in a direct fashion. For example, membership in $C$ might require enforcing some exponential number of constraints. In some problems, it nevertheless turns out to be possible to solve the following problem:

$$
\begin{array}{ll}
\text{minimize} & c^T x \\
\text{subject to} & x \in C.
\end{array}
$$

That is, for linear objective functions, the problem is solvable. (This might be, for example, via some graph algorithm.) Anyway, the Conditional-Gradient method solves the original problem by repeatedly relinearizing it and solving the resulting linear optimization. One can then take a step to some point between the current position and the solution to the linearized problem.
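As a concrete illustration, suppose $C$ is the probability simplex. Then the linear subproblem has a closed-form solution: pick the vertex whose cost coefficient is smallest. A minimal sketch, assuming NumPy (the name `lmo_simplex` is my own):

```python
import numpy as np

def lmo_simplex(c):
    """Linear minimization oracle for the probability simplex.

    argmin_{x >= 0, sum(x) = 1} c @ x is attained at the vertex e_i
    with i = argmin_i c_i, since a linear function over a polytope
    is minimized at a vertex.
    """
    s = np.zeros_like(c, dtype=float)
    s[np.argmin(c)] = 1.0
    return s

print(lmo_simplex(np.array([3.0, -1.0, 2.0])))  # -> [0. 1. 0.]
```

The point is that this oracle is trivial even though the simplex has infinitely many points: linear objectives let us reduce the search to vertices.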

Initialize $x_0 \in C$ anywhere.

Repeat, for $k = 0, 1, 2, \ldots$:

1) $g_k = \nabla f(x_k)$

2) $s_k = \arg\min_{s \in C} \; g_k^T s$

3) $x_{k+1} = (1 - \gamma_k)\, x_k + \gamma_k s_k$, for some $\gamma_k \in [0, 1]$.

In step 3, $\gamma_k$ controls how far $x_{k+1}$ moves towards $s_k$. Notice that because $C$ is convex, for any $\gamma_k \in [0, 1]$, the convex combination $x_{k+1} = (1 - \gamma_k)\, x_k + \gamma_k s_k$ remains in $C$.
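The loop above can be sketched in code. A minimal sketch, assuming NumPy, using the probability simplex as the feasible set and the common step size $\gamma_k = 2/(k+2)$ (an assumption; the algorithm above only requires $\gamma_k \in [0, 1]$):

```python
import numpy as np

def lmo_simplex(c):
    """Linear subproblem over the probability simplex: a linear function
    over a polytope is minimized at a vertex, here e_{argmin_i c_i}."""
    s = np.zeros_like(c, dtype=float)
    s[np.argmin(c)] = 1.0
    return s

def frank_wolfe(grad, lmo, x0, n_iters=500):
    """Conditional-gradient iteration with step size 2/(k+2)."""
    x = x0.astype(float)
    for k in range(n_iters):
        g = grad(x)                      # step 1: gradient at the current point
        s = lmo(g)                       # step 2: solve the linearized problem
        gamma = 2.0 / (k + 2.0)          # step 3: convex combination stays in C
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: minimize f(x) = ||x - b||^2 over the simplex; since b is itself
# a point of the simplex, the optimum is x* = b.
b = np.array([0.1, 0.7, 0.2])
x = frank_wolfe(lambda x: 2.0 * (x - b), lmo_simplex,
                x0=np.array([1.0, 0.0, 0.0]))
```

Note that every iterate is a convex combination of simplex vertices, so feasibility is maintained automatically; this is the main appeal when projecting onto $C$ would be expensive.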

The information I can find doesn’t make any argument for convergence. Clearly, with $\gamma_k$ chosen by line search the objective never gets **worse**, so (assuming the objective is bounded below) the objective values will *converge*, but I haven’t found under what conditions they converge *to the optimal value*, or at what *rate*.


Unfortunately, the convergence rate of the method is sublinear (the objective gap decreases like $O(1/k)$). This is shown in Bertsekas’s book on nonlinear optimization.

Why not approximate with a quadratic function as in SQP? Then, under certain conditions we may obtain locally linear convergence, or even superlinear convergence under additional assumptions.