NeuQuant: Fast High-Quality Image Quantization

The NeuQuant Neural-Net image quantization algorithm (© Anthony Dekker 1994) is a replacement for the common Median Cut algorithm. It is described in the article "Kohonen neural networks for optimal colour quantization", Network: Computation in Neural Systems, vol. 5, pp. 351-367, Institute of Physics Publishing, 1994 (a PDF version is available).

The C code for the algorithm and the matching '.h' include file are both available. There is also a Java version which includes some improvements, together with a test driver.

Frequently Asked Questions

See FAQ at bottom of page.

What does the algorithm do?

The algorithm performs quantization of 24-bit colour images to e.g. 8-bit colour. By adjusting a sampling factor, the network can either produce extremely high-quality images slowly, or produce good images in reasonable time. With a sampling factor of 1, the entire image is used in the learning phase, while with a factor of e.g. 10, a pseudo-random subset of 1/10 of the pixels is used in the learning phase. A sampling factor of 10 gives a substantial speed-up with only a small quality penalty. Careful coding and a novel indexing scheme make the algorithm efficient, refuting the frequent claim that Kohonen neural networks are necessarily slow.
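
The pseudo-random sampling can be sketched as stepping through the pixel array with a prime stride, so that the visited pixels are spread across the whole image (the published C code uses primes near 500 for this; the exact constant below is illustrative):

```c
#include <assert.h>

/* Illustrative sketch: visit roughly 1/samplefac of the pixels by
   stepping through the pixel array with a prime stride, wrapping
   around at the end. Each pixel occupies 3 bytes (b, g, r). */
int count_sampled(int npixels, int samplefac) {
    int samplepixels = npixels / samplefac;
    int lengthcount = npixels * 3;   /* total bytes in the image */
    int step = 499 * 3;              /* prime stride, in bytes */
    int pos = 0, visited = 0;
    while (visited < samplepixels) {
        /* ...one learning step on the pixel at byte offset pos... */
        pos += step;
        if (pos >= lengthcount) pos -= lengthcount;
        visited++;
    }
    return visited;
}
```

With a sampling factor of 10 and a million-pixel image, this visits 100,000 pixels in the learning phase.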

The algorithm operates on a one-dimensional self-organising Kohonen neural network, typically of 256 neurons, which learns to match the distribution of colours in an input image. Taking the position in RGB-space of each neuron gives a high-quality colour map in which adjacent colours are similar. This map is then used to quantise the image.
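
A single learning step of such a network can be sketched as follows. This is a simplified illustration only: the real NeuQuant code uses integer arithmetic, a frequency-biased winner search, and learning rate and radius that decay over time.

```c
#include <stdlib.h>

#define NETSIZE 256

typedef struct { double r, g, b; } Neuron;

/* One Kohonen learning step (sketch): find the neuron closest to the
   input colour, then pull it and its 1-D neighbours toward that
   colour. alpha is the learning rate, rad the neighbourhood radius. */
void learn_step(Neuron net[NETSIZE], double r, double g, double b,
                double alpha, int rad) {
    int best = 0;
    double bestdist = 1e30;
    for (int i = 0; i < NETSIZE; i++) {
        double dr = net[i].r - r, dg = net[i].g - g, db = net[i].b - b;
        double d = dr * dr + dg * dg + db * db;
        if (d < bestdist) { bestdist = d; best = i; }
    }
    for (int j = -rad; j <= rad; j++) {
        int i = best + j;
        if (i < 0 || i >= NETSIZE) continue;
        /* neighbours move less the further they are from the winner */
        double a = alpha * (1.0 - (double)abs(j) / (rad + 1));
        net[i].r += a * (r - net[i].r);
        net[i].g += a * (g - net[i].g);
        net[i].b += a * (b - net[i].b);
    }
}
```

Because only the winner's 1-D neighbours move, adjacent neurons end up with similar colours, which is what makes the final colour map smooth.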

Network Image

The image above is an example neural network for the test image shown at the bottom of this page. The colours along the network show the RGB values inside the cubical RGB-space. The corresponding colour map (with the colours shown in sequence along the neural network) is:

Colour Map

As an alternative viewpoint, this diagram overlays the neural network on the distribution of colours in the test image at the bottom of this page (this was the most challenging image from our test suite, and performance was not quite as good as for photographic images, such as that shown in the paper):

Colour Cube

This diagram shows that some colours in the image have no close match in the neural network, but these are very rare colours (pale yellow, red, and blue) occurring only in tiny highlights (see the test images below). The tiny distortions that quantization introduces into these highlights are not noticeable, although examination of the enlargements in the test images shows that NeuQuant does nevertheless minimise these distortions, compared with the other methods.

How good is the algorithm?

Very good! We use two quality measures: the average error per pixel, and the average error for 2x2 blocks centred on each pixel (which takes into account dithering). We measure speed using the number of seconds to quantise one million pixels on a (very old) SUN SPARC workstation. We compare with two versions of the common Median Cut algorithm: one is fast and dithered, the other is slower, better, and non-dithered. Results are as follows:

  1. NeuQuant with a sampling factor of 1: time=875, average error=5.34, block error=2.96.
  2. NeuQuant with a sampling factor of 10: time=119, average error=5.97, block error=3.71.
  3. Fast dithered Median Cut: time=36, average error=10.39, block error=4.20.
  4. Slower, improved Median Cut: time=122, average error=9.41, block error=5.94.
Visually, the dithered Median Cut algorithm suffers from grain, while the slower Median Cut algorithm suffers from banding artifacts. NeuQuant with a sampling factor of 10 is slightly faster than the slower Median Cut, but with much higher quality. In addition, NeuQuant uses only 8kB of working memory, plus the space needed for the image (compared with 432kB plus the image for Median Cut). Because this small working set fits easily in cache, NeuQuant benefits significantly from a processor with a copy-back cache.

Compare the following test images. Each image contains an enlargement in the top left which was added after quantization. Don't rely on the thumbnail images here for a comparison: click on them to fetch the complete images, and compare them with a viewer that uses the image colour maps.

NeuQuant with a sampling factor of 1

Time=875, average error=5.34, block error=2.96 (Highest quality).


NeuQuant with a sampling factor of 10

Time=119, average error=5.97, block error=3.71 (Good compromise).


Fast Dithered Median Cut

Time=36, average error=10.39, block error=4.20 (Grainy image).


Slower, Improved Median Cut

Time=122, average error=9.41, block error=5.94 (Banding artifacts).


Original Image

A PNG version and a (lossy) JPEG version of the original test image are also available.

Frequently Asked Questions

Who wrote NeuQuant?

Anthony (Tony) Dekker.

Is there an electronic version of the paper?

Yes! Download PDF here.

What about a paper version?

Please make every effort to obtain the journal article by inter-library loan ("Kohonen neural networks for optimal colour quantization", Network: Computation in Neural Systems, vol. 5, pp. 351-367, Institute of Physics Publishing, 1994), or download the PDF version. If this fails and it's really urgent, send me your postal address and I'll mail you a copy.

The algorithm doesn't work for 8 (or 4, or 16) colours

Well, it wasn't meant to. It was intended to produce 6- to 8-bit images (64 to 256 colours) from 24-bit images. But one technique that works is:

  1. Run the algorithm with N=100 (or perhaps 30) colours and a sampling factor of 10 (or perhaps 30).
  2. Create an NxN matrix of the distances between the colours.
  3. Repeatedly find the two most similar colours, and eliminate one, until only the desired number of colours are left.
  4. For each image pixel, find the best-matching colour in this set (dithering will probably be required).
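
Steps 2 and 3 above can be sketched as follows. The structure and names here are illustrative, not part of the NeuQuant distribution:

```c
#include <float.h>

typedef struct { double r, g, b; int alive; } Colour;

static double dist2(const Colour *a, const Colour *b) {
    double dr = a->r - b->r, dg = a->g - b->g, db = a->b - b->b;
    return dr * dr + dg * dg + db * db;
}

/* Reduce a palette of n colours to `target` colours by repeatedly
   finding the closest surviving pair and eliminating one of them
   (averaging the pair instead would also be reasonable). */
int reduce_palette(Colour *pal, int n, int target) {
    int alive = n;
    for (int i = 0; i < n; i++) pal[i].alive = 1;
    while (alive > target) {
        int bj = -1;
        double best = DBL_MAX;
        for (int i = 0; i < n; i++) {
            if (!pal[i].alive) continue;
            for (int j = i + 1; j < n; j++) {
                if (!pal[j].alive) continue;
                double d = dist2(&pal[i], &pal[j]);
                if (d < best) { best = d; bj = j; }
            }
        }
        pal[bj].alive = 0;   /* eliminate one of the closest pair */
        alive--;
    }
    return alive;
}
```

For the small N suggested above (30 to 100 colours), the O(N²) distance search per elimination is negligible.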

The FreeImage port doesn't work

The code as written assumes that pixel rows are tightly packed (not word-aligned), while FreeImage pads each row to a word boundary. See the alignment fixes below.

Do you know of any similar work?

Yes, see:

Are people using NeuQuant?

Yes indeed, for example:

Do I need to pay a license fee to use NeuQuant?

No, but you must reproduce the copyright notice from the top of the program in both the source code and the documentation of any program that uses NeuQuant. The notice itself gives full details.

What about routines to read and write image formats?

I'm afraid you'll have to supply your own, based on your preferred formats. There is plenty of free code around the net. See the '.h' include file for a skeleton main program, and Jürgen Weigert's netpbm wrapper for example code.

What about the main program for the C code?

I'm afraid you'll have to supply your own (I don't know what image formats you want to handle). See the question about image formats above.

How do I fix the code to handle 4-byte boundary alignment?

It's easy. Either handle the alignment when loading and storing the image, or make the following changes:

  1. Fix the calculation of samplepixels in learn().
  2. Fix the pixel access code in learn().
  3. Fix code to update p in learn().
  4. Fix the loop that calls inxsearch(b,g,r).
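
The underlying change in all four places is the same: compute pixel addresses using a padded row stride instead of assuming 3*width bytes per row. A minimal sketch (the names here are illustrative, not those used in the original code):

```c
/* With 4-byte row alignment, each scan line of a 24-bit image is
   padded to a multiple of 4 bytes, so the byte offset of pixel (x, y)
   must use the padded stride, not 3 * width. */
static int row_stride(int width) {
    return (3 * width + 3) & ~3;   /* round 3*width up to a multiple of 4 */
}

static const unsigned char *pixel_at(const unsigned char *image,
                                     int width, int x, int y) {
    return image + (long)y * row_stride(width) + 3 * x;
}
```

The sample-pixel count, the learning-loop pointer updates, and the final inxsearch(b,g,r) loop then all derive their offsets from row_stride() rather than from the raw image length.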

Can the code be adapted for the CIE Lab color space?

Sure! Most of the algorithm just operates on triples of sub-pixels, and doesn't care whether they are RGB or something else. The only required change is in the inxbuild() and inxsearch() functions, which index the colour map on the green sub-pixels first (relevant lines of code are marked), because the human eye is more sensitive to green. In Lab space, indexing on L first might be more appropriate.

I tried it, and the output looks terrible!

This is probably a bug in your main program. The C code assumes that writecolourmap(f) is called before inxbuild() (which re-sorts the colour map for increased efficiency). Fix your main program to do this. If you don't wish to do it that way, then inxsearch(b,g,r) must be rewritten, in a similar way to the Java version.

Have you done other work with Kohonen neural networks?

Yes, they are very powerful. See, for example: