Project Nayuki

Huffman coding (Java)

This project is an open-source reference implementation of Huffman coding in Java. The code is intended for study and as a solid basis for modification and extension. As such, it is optimized for clear logic and low complexity, not for speed or memory efficiency.

Source code

Browse the full source code at GitHub:

Or download a ZIP of all the files:

(Mirror at Eigenstate: browse or download)

The code is open source under the MIT License. See the readme file for details.


Huffman encoding takes a sequence (stream) of symbols as input and produces a sequence of bits as output. The intent is to produce a short output for the given input. Distinct inputs yield distinct outputs, so the process is reversible: the output can be decoded to recover the original input.

In this software, a symbol is a non-negative integer. The symbol limit is one plus the highest allowed symbol. For example, a symbol limit of 4 means that the set of allowed symbols is {0, 1, 2, 3}.

Submodule dependency graph

The following explains all the submodules in the software package:

Sample applications

Two pairs of command-line programs fully demonstrate how this software package can be used to encode and decode data using Huffman coding. One pair of programs is the classes HuffmanCompress and HuffmanDecompress, which implement static Huffman coding. The other pair of programs is the classes AdaptiveHuffmanCompress and AdaptiveHuffmanDecompress, which implement adaptive/dynamic Huffman coding.


Encoder and decoder

The classes HuffmanEncoder and HuffmanDecoder implement the basic algorithms for encoding and decoding a Huffman-coded stream. The code tree must be set before encoding or decoding. The code tree can be changed after encoding or decoding each symbol, as long as the encoder and decoder have the same code tree at the same position in the symbol stream. At any time, the encoder must not attempt to encode a symbol that is not in the code tree.
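This contract can be illustrated with a self-contained sketch. The classes and method names below are simplified stand-ins, not the project's actual HuffmanEncoder/HuffmanDecoder API: encoding records the path from the root to the symbol's leaf (0 = left, 1 = right), and decoding walks the tree bit by bit until it reaches a leaf.

```java
import java.util.ArrayList;
import java.util.List;

public class TreeCodingSketch {
    // Minimal stand-ins for a code tree's node classes.
    static abstract class Node {}
    static final class Leaf extends Node {
        final int symbol;
        Leaf(int symbol) { this.symbol = symbol; }
    }
    static final class Internal extends Node {
        final Node left, right;
        Internal(Node left, Node right) { this.left = left; this.right = right; }
    }

    // Append the code bits of 'symbol' (0 = left, 1 = right); returns whether it was found.
    static boolean encode(Node node, int symbol, List<Integer> bits) {
        if (node instanceof Leaf)
            return ((Leaf) node).symbol == symbol;
        Internal in = (Internal) node;
        bits.add(0);
        if (encode(in.left, symbol, bits)) return true;
        bits.set(bits.size() - 1, 1);
        if (encode(in.right, symbol, bits)) return true;
        bits.remove(bits.size() - 1);
        return false;
    }

    // Decode one symbol, advancing pos[0] past the bits consumed.
    static int decode(Node root, List<Integer> bits, int[] pos) {
        Node node = root;
        while (node instanceof Internal) {
            Internal in = (Internal) node;
            node = (bits.get(pos[0]++) == 0) ? in.left : in.right;
        }
        return ((Leaf) node).symbol;
    }

    public static void main(String[] args) {
        // Code tree for symbols {0, 1, 2}: 0 -> "0", 1 -> "10", 2 -> "11".
        Node root = new Internal(new Leaf(0), new Internal(new Leaf(1), new Leaf(2)));
        List<Integer> bits = new ArrayList<>();
        for (int sym : new int[]{2, 0, 1})
            encode(root, sym, bits);
        System.out.println(bits);  // [1, 1, 0, 1, 0]

        int[] pos = {0};
        StringBuilder decoded = new StringBuilder();
        while (pos[0] < bits.size())
            decoded.append(decode(root, bits, pos));
        System.out.println(decoded);  // 201
    }
}
```

Because no codeword is a prefix of another (every codeword ends at a leaf), the decoder never needs lookahead or separators between symbols.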

Code tree model
Huffman code tree

The class CodeTree, along with Node, InternalNode, and Leaf, represent a Huffman code tree. The leaves represent symbols. The path to a leaf represents the bit string of its Huffman code.

Frequency table

The class FrequencyTable is a simple integer array wrapper that counts symbol frequencies. It is also responsible for generating a Huffman code tree that is optimal for its current array of frequencies (but not necessarily canonical).
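The tree generation that FrequencyTable is responsible for can be sketched with the standard greedy construction (a generic illustration with assumed names, not the project's actual code): put one weighted node per used symbol into a priority queue, then repeatedly merge the two lowest-weight nodes until a single root remains.

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class HuffmanBuildSketch {
    static class Node implements Comparable<Node> {
        final long weight;
        final int symbol;        // -1 for internal nodes
        final Node left, right;
        Node(long weight, int symbol) { this.weight = weight; this.symbol = symbol; left = right = null; }
        Node(Node l, Node r) { weight = l.weight + r.weight; symbol = -1; left = l; right = r; }
        public int compareTo(Node o) { return Long.compare(weight, o.weight); }
    }

    // Standard greedy construction: repeatedly merge the two lowest-weight nodes.
    static Node build(long[] freqs) {
        PriorityQueue<Node> pq = new PriorityQueue<>();
        for (int sym = 0; sym < freqs.length; sym++)
            if (freqs[sym] > 0)
                pq.add(new Node(freqs[sym], sym));
        while (pq.size() > 1)
            pq.add(new Node(pq.remove(), pq.remove()));
        return pq.remove();
    }

    // The code length of each symbol is the depth of its leaf.
    static void codeLengths(Node n, int depth, int[] lens) {
        if (n.symbol >= 0)
            lens[n.symbol] = depth;
        else {
            codeLengths(n.left, depth + 1, lens);
            codeLengths(n.right, depth + 1, lens);
        }
    }

    public static void main(String[] args) {
        long[] freqs = {5, 1, 1, 2};  // symbol limit 4
        int[] lens = new int[freqs.length];
        codeLengths(build(freqs), 0, lens);
        // More frequent symbols receive shorter codes.
        System.out.println(Arrays.toString(lens));  // [1, 3, 3, 2]
    }
}
```

The resulting tree minimizes the total encoded length, sum over symbols of frequency × code length, for the given frequencies; which equal-weight nodes merge first can vary, which is why the result is optimal but not necessarily canonical.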

Canonical codes

The class CanonicalCode converts an arbitrary CodeTree to a canonical code. It can then generate a CodeTree for the canonical code.
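The idea behind canonical codes can be sketched as follows (assumed names, not the project's actual CanonicalCode API): keep only the code length of each symbol, then reassign codewords in order of increasing length, and within each length in increasing symbol order. The whole code is then reproducible from the lengths alone.

```java
public class CanonicalCodeSketch {
    // Given code lengths (0 = symbol unused), assign canonical codewords.
    static String[] canonicalCodes(int[] lengths) {
        int maxLen = 0;
        for (int len : lengths)
            maxLen = Math.max(maxLen, len);
        String[] codes = new String[lengths.length];
        int next = 0;  // next codeword value at the current length
        for (int len = 1; len <= maxLen; len++) {
            for (int sym = 0; sym < lengths.length; sym++) {
                if (lengths[sym] == len) {
                    String bits = Integer.toBinaryString(next);
                    while (bits.length() < len)
                        bits = "0" + bits;  // left-pad to the code length
                    codes[sym] = bits;
                    next++;
                }
            }
            next <<= 1;  // moving on to codewords one bit longer
        }
        return codes;
    }

    public static void main(String[] args) {
        // Code lengths as they might come from an arbitrary (non-canonical) tree.
        int[] lengths = {1, 3, 3, 2};
        String[] codes = canonicalCodes(lengths);
        for (int sym = 0; sym < codes.length; sym++)
            System.out.println(sym + " -> " + codes[sym]);
        // 0 -> 0, 1 -> 110, 2 -> 111, 3 -> 10
    }
}
```

Because the code is fully determined by the lengths, a compressed file only needs to store one length per symbol rather than a description of the whole tree.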

Bitwise I/O streams

The classes BitInputStream and BitOutputStream are bit-oriented I/O streams, analogous to the standard bytewise I/O streams. However, since they use an underlying bytewise I/O stream, the bit stream’s total length is always a multiple of 8 bits.
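The output side can be sketched with a minimal stand-in (simplified, not the project's actual BitOutputStream): bits are packed into a byte most-significant-bit first, full bytes are written to the underlying stream, and closing zero-pads the final partial byte, which is why the total length is always a multiple of 8 bits.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BitOutputSketch {
    static class BitOutput {
        private final OutputStream out;
        private int current = 0;   // bits accumulated so far, MSB first
        private int numBits = 0;   // how many bits are in 'current'

        BitOutput(OutputStream out) { this.out = out; }

        void write(int bit) throws IOException {
            current = (current << 1) | bit;
            if (++numBits == 8) {   // a full byte: flush it
                out.write(current);
                current = 0;
                numBits = 0;
            }
        }

        void close() throws IOException {
            while (numBits != 0)
                write(0);  // zero-pad up to a byte boundary
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        BitOutput bits = new BitOutput(buf);
        for (int b : new int[]{1, 1, 0, 1, 0})  // 5 bits of payload
            bits.write(b);
        bits.close();  // padded to 8 bits: 11010000
        byte[] bytes = buf.toByteArray();
        System.out.println(bytes.length);     // 1
        System.out.println(bytes[0] & 0xFF);  // 0b11010000 = 208
    }
}
```

The padding is why a decoder must know when to stop (for example via an end-of-stream symbol); it cannot rely on the byte stream ending exactly at the last real bit.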


Design notes

  • All the Huffman-related code only works with alphabets of up to Integer.MAX_VALUE − 1 (i.e. 2³¹ − 2) symbols.

  • FrequencyTable can only track each symbol’s frequency up to Integer.MAX_VALUE. Trying to increment a symbol’s frequency beyond this limit throws an exception. However, frequencies larger than 2³¹ should be rare in practice.

  • The code is optimized for understandability, not performance. One consequence is that CodeTree uses memory grossly inefficiently to store the code bit string for each symbol. It uses ArrayList<Integer> at a cost of at least 4 bytes per represented bit, instead of packing bits into a primitive integral array type such as byte[].

  • CodeTree, FrequencyTable, and CanonicalCode explicitly take a symbol limit as a parameter. This is not strictly required, but the alternative is to use sparse tables, which is more difficult to understand.

  • A couple of methods use recursion to traverse a whole code tree. When using such a method on a deep tree, a StackOverflowError may be thrown.


Suggestions

Here are some suggestions on how to use, modify, or extend this software:

  • Extract an interface from the bitwise I/O streams and create other concrete implementation classes.

  • Improve the speed of Huffman encoding and decoding. One advanced suggestion is to treat the Huffman decoder as a finite state machine (FSM) and decode a whole byte per iteration.

  • Layer another encoding scheme on top of Huffman coding, such as RLE, LZW, n-gram models, DCT with quantization, etc. This is essentially how real-world compressed data formats are structured.

More info