The NeuQuant Neural-Net image quantization algorithm (© Anthony Dekker 1994) is a replacement for the common Median Cut algorithm. It is described in the article Kohonen neural networks for optimal colour quantization in Volume 5, pp 351-367 of the journal Network: Computation in Neural Systems, Institute of Physics Publishing, 1994 (PDF version available).
The C code for the algorithm and the matching '.h' include file are both available. There is also a Java version which includes some improvements, together with a test driver.
See FAQ at bottom of page.
The algorithm performs quantization of 24-bit colour images to e.g. 8-bit colour. By adjusting a sampling factor, the network can either produce extremely high-quality images slowly, or produce good images in reasonable times. With a sampling factor of 1, the entire image is used in the learning phase, while with a factor of e.g. 10, a pseudo-random subset of 1/10 of the pixels is used in the learning phase. A sampling factor of 10 gives a substantial speed-up, with a small quality penalty. Careful coding and a novel indexing scheme are used to make the algorithm efficient. This confounds the frequent observation that Kohonen Neural Networks are necessarily slow.
The algorithm operates using a one-dimensional self-organising Kohonen Neural Network, typically with 256 neurons, which self-organises through learning to match the distribution of colours in an input image. Taking the position in RGB-space of each neuron gives a high-quality colour map in which adjacent colours are similar. This map is then used to quantise the image.
The image above is an example neural network for the test image shown at the bottom of this page. The colours along the network show the RGB values inside the cubical RGB-space. The corresponding colour map (with the colours shown in sequence along the neural network) is:
As an alternative viewpoint, this diagram overlays the neural network on the distribution of colours in the test image at the bottom of this page (this was the most challenging image from our test suite, and performance was not quite as good as for photographic images, such as that shown in the paper):
It can be seen in this diagram that some colours in the image have no close match in the neural network, but these are very rare colours (pale yellow, red, and blue) only occurring in tiny highlights (see the test images below). Tiny distortions in these highlights introduced by quantization are not noticeable, although examination of the enlargements in the test images shows that NeuQuant does nevertheless minimise these distortions (compared with the other methods).
Very good! We use two quality measures: the average error per pixel, and the average error for 2x2 blocks centred on each pixel (which takes into account dithering). We measure speed using the number of seconds to quantise one million pixels on a (very old) SUN SPARC workstation. We compare with two versions of the common Median Cut algorithm: one is fast and dithered, the other is slower, better, and non-dithered. Results are as follows:
Compare the following test images. Each image contains an enlargement in the top left which was added after quantization. Don't rely on the thumbnail images here for a comparison: click on them to fetch the complete images, and compare them with a viewer that uses the image colour maps.
Time=875, average error=5.34, block error=2.96 (Highest quality).
Time=119, average error=5.97, block error=3.71 (Good compromise).
Time=36, average error=10.39, block error=4.20 (Grainy image).
Time=122, average error=9.41, block error=5.94 (Banding artifacts).
A JPEG version of the original test image is also available.
Anthony (Tony) Dekker.
Yes! Download PDF here.
Well, please make every effort to get the journal article by inter-library loan (Kohonen neural networks for optimal colour quantization in Volume 5, pp 351-367 of the journal Network: Computation in Neural Systems, Institute of Physics Publishing, 1994), or download the PDF version. If this fails, and it's really urgent, please send me your postal address, and I'll mail you a copy.
Well, it wasn't meant to. It was intended to produce 6- to 8-bit images (64 to 256 colours) from 24-bit images. But one technique that works is:
The code as written assumes that pixels are not word aligned, while FreeImage assumes that they are. See the fixes below.
Yes indeed, for example:
No, but you must include the copyright notice (found at the top of the program) in the source code and documentation of any program using NeuQuant. See that notice for more details.
I'm afraid you'll have to supply your own, based on your preferred formats. There is plenty of free code around the net. See the '.h' include file for a skeleton main program, and Jürgen Weigert's netpbm wrapper for example code.
I'm afraid you'll have to supply your own (I don't know what image formats you want to handle). See the question about image formats above.
It's easy. Either do it in image load/store, or another way is:
Sure! Most of the algorithm just operates on triples of sub-pixels, and doesn't care whether they are RGB or something else. The only required change is in the inxbuild() and inxsearch() functions, which index the colour map on the green sub-pixels first (relevant lines of code are marked), because the human eye is more sensitive to green. In Lab space, indexing on L first might be more appropriate.
This is probably a bug in your main program. The C code assumes that writecolourmap(f) is called before inxbuild() (which re-sorts the colour map for increased efficiency). Fix your main program to do this. If you don't wish to do it that way, then inxsearch(b,g,r) must be rewritten along the lines of the Java version.
Yes, they are very powerful. See, for example: