# Random Image Segmentation

I’ve been playing around today with an image decomposition method. Given an observed image $\bf y$, one seeks an image $\bf x$ that minimizes $\sum_i (x_i - y_i)^2 + \lambda \sum_{(i,j)} |x_i-x_j|$,

where the first sum is over all pixels $i$, and the second sum is over all neighboring pairs of pixels $(i,j)$. (For simplicity, I treat images as long vectors; the grid structure enters only through which pairs are considered neighbors.) This minimization has the interesting property that, at a solution, neighboring pixels in $\bf x$ often take exactly the same value. Thus, one can view the optimization as segmenting the image into piecewise-constant regions.
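To make the objective concrete, here is a minimal sketch (my own illustration, not code from this post) that evaluates the data term plus the neighbor penalty on a 2-D image, where neighbors are the immediate horizontal and vertical pairs:

```python
import numpy as np

def tv_objective(x, y, lam):
    """Evaluate sum_i (x_i - y_i)^2 + lam * sum_(i,j) |x_i - x_j|.

    x, y are 2-D arrays; the neighbor sum runs over immediate
    horizontal and vertical pixel pairs.
    """
    data_term = np.sum((x - y) ** 2)
    # np.diff along each axis gives exactly the neighboring differences
    penalty = np.sum(np.abs(np.diff(x, axis=0))) + np.sum(np.abs(np.diff(x, axis=1)))
    return data_term + lam * penalty
```

The second sum is the (anisotropic) total-variation penalty, which is what drives neighboring pixels to share values at the optimum.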

I tried using the (excellent) CVX toolbox to do the optimization. Though this worked, it was quite slow. I then derived a little iteratively reweighted least-squares algorithm that is much faster.
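The post doesn’t include the derivation, but an IRLS scheme of this kind can be sketched as follows (this is my own reconstruction, not the author’s code): replace each $|x_i - x_j|$ by the quadratic majorizer $\frac{(x_i-x_j)^2}{2|x_i-x_j|_{\text{old}}} + \text{const}$, then alternate between updating the weights and solving the resulting sparse least-squares problem:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def irls_tv(y, lam, n_iters=30, eps=1e-6):
    """Sketch of IRLS for min_x ||x - y||^2 + lam * sum |x_i - x_j|.

    Each iteration majorizes |d| by 0.5 * d^2 / |d_old| (plus a constant),
    giving a sparse linear system. eps guards against division by zero.
    """
    h, w = y.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    # neighboring pairs: immediate vertical and horizontal neighbors
    pairs = np.concatenate([
        np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1),
        np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1)])
    i, j = pairs[:, 0], pairs[:, 1]
    m = len(i)
    # sparse difference operator: (D x)_k = x_i - x_j for the k-th pair
    D = sp.coo_matrix(
        (np.r_[np.ones(m), -np.ones(m)],
         (np.r_[np.arange(m), np.arange(m)], np.r_[i, j])),
        shape=(m, n)).tocsr()
    I = sp.identity(n, format='csr')
    x = y.ravel().copy()
    for _ in range(n_iters):
        wgt = 0.5 / (np.abs(D @ x) + eps)   # reweighting from current x
        A = I + lam * (D.T @ sp.diags(wgt) @ D)
        x = spla.spsolve(A, y.ravel())       # normal equations of the surrogate
    return x.reshape(h, w)
```

Each linear solve is sparse and banded-ish, which is why this tends to be much faster than handing the full nonsmooth problem to a general-purpose solver like CVX.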

(Update Jan 9, 2009: Code for doing the optimization is here.)

Anyway, the standard use of the decomposition is for “denoising”. Here is an example from an image in the Street Scenes database. At top is the original image $\bf y$, followed by the result $\bf x$. At the bottom, I use a random color for each segment to give a better idea of the discontinuities.
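One way to produce such a random-color visualization (my own illustration; the quantization tolerance `tol` is an assumption) is to find the connected near-constant regions of $\bf x$ and assign each one a random color:

```python
import numpy as np
from scipy import ndimage

def random_color_segments(x, tol=1e-3):
    """Color each connected near-constant region of x randomly.

    Pixels are quantized to multiples of tol, then 4-connected
    components of each quantized level become segments.
    """
    q = np.round(x / tol).astype(int)
    labels = np.zeros(x.shape, dtype=int)
    n = 0
    for v in np.unique(q):
        lab, k = ndimage.label(q == v)   # components of this intensity level
        labels[lab > 0] = lab[lab > 0] + n
        n += k
    rng = np.random.default_rng(0)
    colors = rng.uniform(size=(n + 1, 3))
    return colors[labels]                # (H, W, 3) RGB image
```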

Throughout, intensities are in the $[0,1]$ interval, and only immediate horizontal and vertical pairs are considered neighbors; the example above uses $\lambda=1$.

However, more interesting (to me) images come from segmenting random images.

[Figures: segmentations of random noise images]

- Uniform noise, with $\lambda=0.1$, $0.2$, $0.3$, $0.4$, $0.5$, and $0.6$
- Gaussian noise with variance $\sigma^2=0.1$, with $\lambda=0.3$, $0.4$, and $0.5$