Take some convex optimization problem:

$$\min_{x \in C} f(x).$$
Here, $C$ is some convex set. If $C$ is hard to characterize, this optimization problem might be hard to handle in a direct fashion. For example, $C$ might require enforcing some exponential number of constraints. In some problems, it nevertheless turns out to be possible to solve the following problem:

$$\min_{x \in C} \; c^T x.$$
That is, for linear objective functions, the problem is solvable. (This might be achieved, for example, by some graph algorithm.) Anyway, the Conditional-Gradient method solves the original problem by repeatedly relinearizing it and solving the linear optimization. One can then take a step to some point between the current position and the solution to the linearized problem.
1) Compute the gradient $\nabla f(x_k)$.

2) Solve the linearized problem $s_k = \arg\min_{s \in C} \; s^T \nabla f(x_k)$.

3) Set $x_{k+1} = x_k + \gamma_k (s_k - x_k)$, for some $\gamma_k \in [0,1]$.
In step 3, $\gamma_k$ controls how far $x_k$ moves towards $s_k$. Notice that because $C$ is convex, for any $\gamma_k \in [0,1]$,

$$x_{k+1} = (1-\gamma_k) x_k + \gamma_k s_k \in C.$$
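As a concrete sketch of the iteration above (the quadratic objective, the probability-simplex feasible set, and the $\gamma_k = 2/(k+2)$ step-size rule are my own illustrative choices, not taken from any particular reference):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=200):
    """Conditional-gradient (Frank-Wolfe) sketch.

    grad : gradient of the objective f
    lmo  : linear minimization oracle, returns argmin_{s in C} <c, s>
    """
    x = x0
    for k in range(num_iters):
        s = lmo(grad(x))         # steps 1-2: solve the linearized problem over C
        gamma = 2.0 / (k + 2)    # a common open-loop step-size choice
        x = x + gamma * (s - x)  # step 3: convex combination, so x stays in C
    return x

# Illustrative instance: minimize ||x - b||^2 over the probability simplex.
# The linear oracle just picks the vertex (coordinate) with the smallest cost.
def simplex_lmo(c):
    s = np.zeros_like(c)
    s[np.argmin(c)] = 1.0
    return s

b = np.array([0.1, 0.2, 0.7])           # the optimum, since b lies in the simplex
grad = lambda x: 2 * (x - b)
x = frank_wolfe(grad, simplex_lmo, np.ones(3) / 3)
```

Each iterate is a convex combination of points of $C$, so feasibility is maintained for free; the only nontrivial work per iteration is one call to the linear oracle.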
The information I can find doesn’t make any argument for convergence. Clearly, with a suitable choice of $\gamma_k$ the objective never gets worse, so (assuming the objective is bounded below) the algorithm will converge, but I haven’t found under what conditions it converges to the optimal value, or at what rate.