# Lagrange duality

This page just lists the basic properties for reference, without intuition or proofs. It is based on chapter 5 of Boyd and Vandenberghe’s (outstanding) Convex Optimization. I have emphasized the terminology because it is needed to read most work that uses Lagrange duality.

Take an optimization problem:

minimize $f_0(x)$
subject to $f_i(x) \leq 0$, $h_i(x) =0$

The Lagrangian is

$L(x,\lambda,\nu)=f_0(x) + \sum_i \lambda_i f_i(x) + \sum_i \nu_i h_i(x)$

The Lagrange dual function is

$g(\lambda,\nu) = \inf_x L(x,\lambda,\nu)$.
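
For concreteness, here is a small example of my own (not from the text): for the problem minimize $x^2$ subject to $1 - x \leq 0$, the dual function can be computed in closed form and checked numerically:

```python
from scipy.optimize import minimize_scalar

# Toy problem (illustrative assumption, not from the text):
#   minimize x^2  subject to  1 - x <= 0,
# whose optimum is clearly p* = 1 at x = 1.

def lagrangian(x, lam):
    return x**2 + lam * (1 - x)

def g(lam):
    """Dual function: unconstrained infimum of the Lagrangian over x."""
    return minimize_scalar(lambda x: lagrangian(x, lam)).fun

# Setting dL/dx = 2x - lam = 0 gives x = lam/2, so in closed form
# g(lam) = lam - lam^2 / 4.
for lam in [0.0, 1.0, 2.0, 4.0]:
    assert abs(g(lam) - (lam - lam**2 / 4)) < 1e-8
```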

The Lagrange dual function gives lower bounds on the optimal value $p^*$ of the original problem.

$\forall \nu, \forall \lambda \geq 0, g(\lambda,\nu) \leq p^*$

(This is easy to check: if $\hat x$ is feasible and $\lambda \geq 0$, then $\lambda_i f_i({\hat x}) \leq 0$ and $\nu_i h_i({\hat x}) = 0$, so $L({\hat x},\lambda,\nu) \leq f_0({\hat x})$. Taking the infimum over all $x$ on the left gives $g(\lambda,\nu) \leq f_0({\hat x})$, and minimizing over feasible $\hat x$ gives $g(\lambda,\nu) \leq p^*$.)
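
To see the bound in action, here is a quick numerical check on a made-up problem (my example, not from the text) with one equality constraint: minimize $x^2 + y^2$ subject to $x + y = 1$, for which $p^* = 1/2$:

```python
import numpy as np

# Toy problem (illustrative assumption, not from the text):
#   minimize x^2 + y^2  subject to  x + y = 1,
# with optimum p* = 1/2 at x = y = 1/2.
p_star = 0.5

def g(nu):
    # L(x, y, nu) = x^2 + y^2 + nu*(x + y - 1); minimizing over (x, y)
    # gives x = y = -nu/2, hence g(nu) = -nu^2/2 - nu.
    return -nu**2 / 2 - nu

# Weak duality: every value of the dual function is a lower bound on p*.
rng = np.random.default_rng(0)
for nu in rng.normal(size=200):
    assert g(nu) <= p_star + 1e-12
```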

Whenever we bound some quantity in terms of adjustable parameters, the obvious idea is to adjust the parameters to make the bound as tight as possible. This is the Lagrange dual problem:

maximize $g(\lambda,\nu)$
subject to $\lambda \geq 0$.

Call the optimal value of the Lagrange dual problem $d^*$. It is obvious that

$d^* \leq p^*$

always; this is called weak duality. When is $d^* = p^*$, i.e. when does strong duality hold? The answer is that it usually does when the original problem is convex (i.e. $f_0, f_i$ convex, $h_i$ affine).
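
As a sanity check (again a made-up example, not from the text): for the convex problem minimize $x^2$ subject to $1 - x \leq 0$, solving the dual problem numerically recovers $d^* = p^* = 1$:

```python
from scipy.optimize import minimize_scalar

# Toy convex problem (illustrative assumption, not from the text):
#   minimize x^2  subject to  1 - x <= 0,  so p* = 1 at x = 1.
# Minimizing the Lagrangian x^2 + lam*(1 - x) over x gives the dual
# function g(lam) = lam - lam^2 / 4.
def g(lam):
    return lam - lam**2 / 4

# Dual problem: maximize g(lam) subject to lam >= 0.
res = minimize_scalar(lambda lam: -g(lam), bounds=(0.0, 10.0), method="bounded")
d_star = -res.fun

p_star = 1.0
assert abs(d_star - p_star) < 1e-6  # strong duality: d* = p*
```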

Slater’s theorem removes the “usually”: strong duality holds if the original problem is convex and there exists a strictly feasible point, i.e. an $x$ that satisfies the inequality constraints strictly:

$f_i(x) < 0, h_i(x)=0$.

Actually, a slightly stronger result holds: if some of the $f_j$ are affine, then for those constraints only $f_j(x) \leq 0$ (not strict inequality) is required.
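
To illustrate why this refinement matters, consider a tiny example of my own (not from the book): minimize $x$ subject to $x \leq 0$ and $-x \leq 0$. Both constraints are affine, the only feasible point is $x = 0$ (so there is no strictly feasible point and plain Slater fails), yet strong duality still holds:

```latex
% Problem: minimize x subject to x <= 0 and -x <= 0 (both affine).
% Feasible set = {0}, so p^* = 0, and no strictly feasible point exists.
L(x, \lambda) = x + \lambda_1 x - \lambda_2 x = (1 + \lambda_1 - \lambda_2)\, x
% The infimum over x is -\infty unless the coefficient vanishes:
g(\lambda) =
\begin{cases}
0 & \text{if } \lambda_2 = 1 + \lambda_1, \\
-\infty & \text{otherwise.}
\end{cases}
% The dual problem (maximize g over \lambda \geq 0) attains 0,
% e.g. at (\lambda_1, \lambda_2) = (0, 1), so d^* = 0 = p^*.
```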