Two early lossless compression techniques are Run Length Encoding and Huffman Encoding. Run Length Encoding replaces long runs of identical bit patterns with a flag indicating that the following data is compressed, the bit pattern to be repeated, and a count of the repetitions. Huffman Encoding, developed in the 1950s, assigns short codes to frequently used bit combinations and longer codes to less frequently used ones.
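As an illustrative sketch (not the format any particular archiver uses), run-length encoding can be reduced to a list of (count, byte) pairs; a real encoder would also need the flag/escape scheme described above so that incompressible data can pass through unchanged:

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    # Collapse each run of identical bytes into a (count, byte) pair.
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)   # extend the current run
        else:
            runs.append((1, b))               # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    # Expand each pair back into its run of repeated bytes.
    return b"".join(bytes([b]) * n for n, b in runs)

sample = b"AAAAABBBCCCCCCCCDD"
encoded = rle_encode(sample)   # 4 pairs instead of 18 bytes
assert rle_decode(encoded) == sample
```

Note that on data without long runs this representation is larger than the input, which is why practical RLE formats mark compressed regions with a flag.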
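A minimal sketch of Huffman code construction (assuming at least two distinct symbols, and ignoring the bit packing and table transmission a real compressor needs): the two least-frequent subtrees are merged repeatedly, and every merge prepends one bit, so frequent symbols end up with short codes.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict[int, str]:
    # One heap entry per symbol: (frequency, tiebreaker, partial code table).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tick = len(heap)  # tiebreaker so equal frequencies never compare dicts
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees, prefixing their codes with 0/1.
        merged = {s: "0" + c for s, c in left.items()}
        merged |= {s: "1" + c for s, c in right.items()}
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

message = b"ABRACADABRA"
codes = huffman_codes(message)
bits = "".join(codes[b] for b in message)  # the compressed bit string
```

Because no code is a prefix of another, the bit string can be decoded unambiguously without separators.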
In the 1980s, dictionary-based lossless encoders were developed. The best known of these is probably Lempel-Ziv-Welch (LZW) encoding. Dictionary encoders build a dictionary of previously seen bit strings as they move through the data, and each recurring string is replaced by its dictionary index. The dictionary -- which may be quite lengthy -- is not sent with the data; the decoder reconstructs it as it moves through the compressed stream.
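A minimal LZW codec sketch makes the shared-dictionary trick concrete. This is illustrative only: it emits whole integers rather than the variable-width bit codes real implementations (such as GIF) pack, but it shows the decoder rebuilding the same dictionary the encoder built, including the classic case where a code arrives before its entry exists:

```python
def lzw_encode(data: bytes) -> list[int]:
    # Start with one entry per possible byte; grow the table as
    # longer strings are seen.
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for c in data:
        wc = w + bytes([c])
        if wc in table:
            w = wc                      # keep extending the current match
        else:
            out.append(table[w])        # emit the longest known string
            table[wc] = len(table)      # add the new, longer string
            w = bytes([c])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes: list[int]) -> bytes:
    # Rebuild the identical table; it is never transmitted.
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in table:
            entry = table[k]
        else:
            entry = w + w[:1]           # code not yet defined: KwKwK case
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

msg = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decode(lzw_encode(msg)) == msg
```

The encoder and decoder each add the same entry at the same point in the stream, which is why the dictionary never needs to be sent.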
Lossy data compression schemes are generally applied to sensory data such as video and audio, which contain a great deal of redundancy. They reduce data volume by discarding information that a listener or viewer would not notice is missing. Typically they use a priori knowledge of what users can perceive, and/or describe the data with a mathematical technique such as the Discrete Cosine Transform or fractal encoding that expresses the data as a series of terms, then discard the terms that carry little perceptible information.
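The transform-and-discard idea can be sketched in one dimension with the Discrete Cosine Transform. This is only an illustration of the principle: real codecs such as JPEG transform 2-D blocks and use perceptually tuned quantization tables, whereas the 0.1 threshold and the sample signal here are arbitrary choices. For a smooth signal, most of the energy lands in a few low-order coefficients, so zeroing the small ones changes the reconstruction very little:

```python
import math

def dct(x: list[float]) -> list[float]:
    # DCT-II: express the signal as a sum of cosine terms.
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct(X: list[float]) -> list[float]:
    # DCT-III, the inverse of the transform above (with 2/N scaling).
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

signal = [math.exp(-n / 6) for n in range(16)]       # a smooth test signal
coeffs = dct(signal)
kept = [c if abs(c) > 0.1 else 0.0 for c in coeffs]  # the lossy step
approx = idct(kept)
max_err = max(abs(a - s) for a, s in zip(approx, signal))
```

With all coefficients kept the round trip is exact; after thresholding, several high-order terms are dropped yet the worst-case reconstruction error stays small.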
Copyright 1994-2008 by Donald Kenney.