Reed–Solomon error-correcting code decoder

Introduction

Reed–Solomon codes allow an arbitrary message to be expanded with redundant information, transmitted over a noisy channel, and decoded such that if the received message has fewer errors than a predefined number, then the original message can be recovered perfectly. This makes RS codes useful for protecting information integrity on noisy media such as radio waves, telephone lines, magnetic disks, flash memory, etc.; all of the data can be reconstructed even if there are a few errors.

On this page I present math and code to implement both Reed–Solomon ECC encoding and decoding. Although the code is not terribly long, the math behind it is not obvious, so all the major derivations will be explained here to justify the code. The math prerequisites are elementary algebra, polynomial arithmetic, linear algebra, and finite field arithmetic.

Preliminaries

  1. The Reed–Solomon procedures take place within the framework of a user-chosen field \(F\). The field is usually \(\text{GF}(2^8)\) for convenient byte-oriented processing on computers, but it could instead be \(\text{GF}(2^4)\), \(\text{GF}(2^{12})\), \(\mathbb{Z}_{73}\), etc. We need a primitive element / generator \(α\) for the field. The generator must be such that the values \(\{α^0, α^1, α^2, ..., α^{|F|-2}\}\) are all unique and non-zero; hence the powers of \(α\) generate all the non-zero elements of the field \(F\). (\(|F|\) is the size of the field, i.e. the total number of distinct elements/values.) A short code sketch that checks this generator property appears after this list.

  2. We choose \(k\) to be the message length. \(k\) is an integer such that \(1 ≤ k < |F|\). Each message to be encoded is a sequence/block of \(k\) values from the field \(F\). For example we might choose \(k = 25\) so that each codeword conveys \(25\) payload (non-redundant) values. (The case \(k = 0\) is obviously degenerate because it means there is no useful information to convey.)

  3. We choose \(m\) to be the number of error correction values by which to expand the message. \(m\) is an integer such that \(1 ≤ m < |F| - k\). (The case \(m = 0\) is degenerate because it means no RS ECC is added, so the entire message has no protection whatsoever.) Note that when a message is expanded with \(m\) error correction values, the expanded message (a.k.a. codeword) can tolerate up to \(\left\lfloor m/2 \right\rfloor\) errors and still be decoded perfectly. For example, adding 6 EC values will allow us to fix any codeword with up to 3 erroneous values.

  4. We define \(n = k + m\) to be the block size / codeword length after encoding. Note that \(n\) is a positive integer that satisfies \(2 ≤ n < |F|\). So if we want big blocks with a lot of error-correcting capability, then we need a sufficiently large field as the foundation.
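
As a concrete illustration of the generator property from step 1, here is a minimal Python sketch (an illustration only, not the accompanying library code). It uses the prime field \(\mathbb{Z}_{73}\) mentioned above with the candidate generator \(α = 5\); both values are chosen purely for this example:

    p = 73      # |F|; a prime, so the integers mod p form a field
    alpha = 5   # candidate generator (5 happens to be a primitive root modulo 73)

    powers = set()
    x = 1
    for _ in range(p - 1):        # collect alpha^0, alpha^1, ..., alpha^(p-2)
        powers.add(x)
        x = x * alpha % p
    assert len(powers) == p - 1 and 0 not in powers
    print("alpha generates all", p - 1, "non-zero elements of the field")

In \(\text{GF}(2^8)\) the same check applies, but the multiplication must be that field's own multiplication rather than integer arithmetic mod \(p\).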

Systematic encoder

  1. Reed–Solomon error-correcting codes come in a number of flavors, of equivalent error-correcting power but different pragmatic handling. The variant that we use is the BCH view with systematic encoding, which means that the original message is treated as a sequence of coefficients for a polynomial, and the codeword after encoding is equal to the message with some error-correcting data appended to it.

  2. Define the generator polynomial based on \(m\) and \(α\):

    \(\begin{align} g(x) &= \displaystyle \prod_{i=0}^{m-1} (x - α^i) \\ &= (x - α^0) (x - α^1) \cdots (x - α^{m-1}). \end{align}\)

    This polynomial has degree \(m\), and its leading coefficient is equal to \(1\).

  3. Suppose the original message is the sequence of \(k\) values \((M_0, M_1, ..., M_{k-1})\), where each \(M_i\) is an element of field \(F\). Define the original message polynomial by simply using the values as monomial coefficients:

    \(\begin{align} M(x) &= \displaystyle \sum_{i=0}^{k-1} M_i x^i \\ &= M_0 x^0 + M_1 x^1 + \cdots + M_{k-1} x^{k-1}. \end{align}\)

  4. Define and calculate the Reed–Solomon codeword being sent as the message polynomial shifted up by \(m\) places, minus the remainder of that shifted polynomial modulo the generator polynomial:

    \(s(x) = M(x) x^m - [(M(x) x^m) \text{ mod } g(x)].\)

    Note that the remainder polynomial \([(M(x) x^m) \text{ mod } g(x)]\) has a degree of \(m-1\) or less, so the monomial terms of the remainder don’t interact with the terms of \((M(x) x^m)\). Overall, \(s(x)\) has degree \(n-1\) or less.

    By construction, the sent codeword polynomial \(s(x)\) has the property that \(s(x) ≡ 0 \text{ mod } g(x)\); this will be useful shortly in the decoder.

  5. We encode \(s(x)\) into a sequence of values in the straightforward way by breaking it up into \(n\) monomial coefficients:

    \(\begin{align} s(x) &= \displaystyle \sum_{i=0}^{n-1} s_i x^i \\ &= s_0 x^0 + s_1 x^1 + \cdots + s_{n-1} x^{n-1}. \end{align}\)

    The codeword we transmit is simply the sequence of \(n\) values \((s_0, s_1, \ldots, s_{n-1})\), where each value \(s_i\) is an element of the field \(F\). A code sketch of the whole encoding procedure follows.
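
To make the encoding procedure concrete, here is a minimal Python sketch of steps 2 through 5 (an illustration only, not the accompanying library code). It assumes the field \(\mathbb{Z}_{73}\) with \(α = 5\), \(m = 4\), and the example message \((10, 20, 30)\); polynomials are represented as lists of coefficients, lowest degree first:

    # Illustrative parameters; any field, generator, and m from the preliminaries work.
    p, alpha, m = 73, 5, 4

    def poly_mul(a, b):
        """Multiply two polynomials over Z_p (coefficients lowest degree first)."""
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % p
        return out

    def poly_mod(a, g):
        """Remainder of a(x) divided by the monic polynomial g(x) over Z_p."""
        a, dg = a[:], len(g) - 1
        for i in range(len(a) - 1, dg - 1, -1):   # eliminate high-degree terms
            c = a[i]
            for j in range(len(g)):
                a[i - dg + j] = (a[i - dg + j] - c * g[j]) % p
        return a[:dg]

    g = [1]
    for i in range(m):                            # g(x) = product of (x - alpha^i)
        g = poly_mul(g, [(p - pow(alpha, i, p)) % p, 1])

    msg = [10, 20, 30]                            # the k = 3 message values
    shifted = [0] * m + msg                       # M(x) * x^m
    rem = poly_mod(shifted, g)                    # (M(x) x^m) mod g(x)
    codeword = [(c - r) % p for c, r in zip(shifted, rem + [0] * len(msg))]
    assert poly_mod(codeword, g) == [0] * m       # s(x) is divisible by g(x)
    print(codeword)                               # [31, 64, 66, 71, 10, 20, 30]

The low \(m\) coefficients of the codeword are the error-correction values, while the top \(k\) coefficients are the original message unchanged; this is what makes the encoding systematic.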

Peterson–Gorenstein–Zierler decoder

Calculating syndromes

  1. Suppose the codeword we received is \((r_0, r_1, \ldots, r_{n-1})\), where each value is an element of the field \(F\). This is known. We define the received codeword polynomial straightforwardly:

    \(\begin{align} r(x) &= \displaystyle \sum_{i=0}^{n-1} r_i x^i \\ &= r_0 x^0 + r_1 x^1 + \cdots + r_{n-1} x^{n-1}. \end{align}\)

  2. On the receiving side here, we don’t know the sent values \((s_0, s_1, \ldots, s_{n-1})\), but will go ahead and define the error values anyway. Let \(e_0 = r_0 \!-\! s_0\), \(e_1 = r_1 \!-\! s_1\), \(\ldots\), \(e_{n-1} = r_{n-1} \!-\! s_{n-1}\). Define the error polynomial straightforwardly (again, we don’t know its value right now):

    \(\begin{align} e(x) &= r(x) - s(x) = \displaystyle \sum_{i=0}^{n-1} e_i x^i \\ &= e_0 x^0 + e_1 x^1 + \cdots + e_{n-1} x^{n-1}. \end{align}\)

  3. Now for some actual math: Define the \(m\) syndrome values for \(0 ≤ i < m\), by evaluating the received codeword polynomial at various powers of the generator:

    \(\begin{align} S_i &= r(α^i) = s(α^i) + e(α^i) \\ &= 0 + e(α^i) = e(α^i) \\ &= e_0 α^{0i} + e_1 α^{1i} + \cdots + e_{n-1} α^{(n-1)i}. \end{align}\)

    This works because by construction, \(s(α^i) = 0\) for \(0 ≤ i < m\), which is because \(s(x)\) is divisible by the generator polynomial \(g(x) = (x - α^0)(x - α^1) \cdots (x - α^{m-1})\). Thus we see that the syndromes only depend on the errors that were added to the sent codeword, and don’t depend at all on the value of the sent codeword or the original message.

    If all the syndrome values are zero, then the codeword is already correct, there is nothing to fix, and we are done. (A code sketch of the syndrome calculation appears after this list.)

  4. We can show all of these \(m\) syndrome equations explicitly:

    \(\left\{ \begin{align} S_0 &= e_0 α^{0 × 0} + e_1 α^{1 × 0} + \cdots + e_{n-1} α^{(n-1) × 0}. \\ S_1 &= e_0 α^{0 × 1} + e_1 α^{1 × 1} + \cdots + e_{n-1} α^{(n-1) × 1}. \\ \cdots \\ S_{m-1} &= e_0 α^{0(m-1)} + e_1 α^{1(m-1)} + \cdots + e_{n-1} α^{(n-1)(m-1)}. \end{align} \right.\)

  5. And rewrite this linear system as a matrix:

    \(\left[ \begin{matrix} e_0 α^{0 × 0} + e_1 α^{1 × 0} + \cdots + e_{n-1} α^{(n-1) × 0} \\ e_0 α^{0 × 1} + e_1 α^{1 × 1} + \cdots + e_{n-1} α^{(n-1) × 1} \\ \vdots \\ e_0 α^{0(m-1)} + e_1 α^{1(m-1)} + \cdots + e_{n-1} α^{(n-1)(m-1)} \end{matrix} \right] = \left[ \begin{matrix} S_0 \\ S_1 \\ \vdots \\ S_{m-1} \end{matrix} \right]\).

  6. And factor the left-hand side into a matrix-vector product:

    \(\left[ \begin{matrix} α^{0 × 0} & α^{1 × 0} & \cdots & α^{(n-1) × 0} \\ α^{0 × 1} & α^{1 × 1} & \cdots & α^{(n-1) × 1} \\ \vdots & \vdots & \ddots & \vdots \\ α^{0(m-1)} & α^{1(m-1)} & \cdots & α^{(n-1)(m-1)} \end{matrix} \right] \left[ \begin{matrix} e_0 \\ e_1 \\ \vdots \\ e_{n-1} \end{matrix} \right] = \left[ \begin{matrix} S_0 \\ S_1 \\ \vdots \\ S_{m-1} \end{matrix} \right]\).
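
Continuing the illustrative \(\mathbb{Z}_{73}\) example from the encoder sketch (again, a sketch rather than the accompanying library code), the syndromes from step 3 are just \(m\) evaluations of the received codeword polynomial:

    p, alpha, m = 73, 5, 4   # same illustrative parameters as before

    def syndromes(received):
        """S_i = r(alpha^i) for 0 <= i < m, each computed by Horner's rule."""
        result = []
        for i in range(m):
            x = pow(alpha, i, p)
            acc = 0
            for coef in reversed(received):
                acc = (acc * x + coef) % p
            result.append(acc)
        return result

    codeword = [31, 64, 66, 71, 10, 20, 30]      # encoder sketch output for (10, 20, 30)
    assert syndromes(codeword) == [0, 0, 0, 0]   # a valid codeword: all syndromes zero

    corrupted = codeword[:]
    corrupted[2] = (corrupted[2] + 7) % p        # inject a single error at index 2
    print(syndromes(corrupted))                  # [7, 29, 68, 21]: errors detected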

Finding error locations

  1. Choose \(ν\) (Greek lowercase nu) as the number of errors to try to find. We require \(1 ≤ ν ≤ \left\lfloor m/2 \right\rfloor\). Unless there are time or space constraints, it is best to set \(ν\) as large as possible to catch as many errors as the error-correcting code allows.

  2. Let’s pretend we know the \(ν\) error locations as \(I_0, I_1, \ldots, I_{ν-1}\). This is an orderless set of unique indexes into the received codeword of \(n\) values, so each element satisfies \(0 ≤ I_i < n\).

    The significance of this set/sequence of indexes \(I_i\) is that the error values at these indexes may be non-zero, but all other error values must be zero. In other words, \(e_{I_0}\), \(e_{I_1}\), \(\ldots\), \(e_{I_{ν-1}}\) can each be any value (possibly zero), but the \(e_i\) values at other indexes must be zero.

  3. Define some new variables for old values, but based on the error location indexes \(I_i\), for \(0 ≤ i < ν\):

    \(\begin{align} X_i &= α^{I_i}. \\ Y_i &= e_{I_i}. \end{align}\)

  4. Because we know all other \(e_i\) values are zero, we can substitute the new variables and rewrite the system of syndrome equations as follows:

    \(\left[ \begin{matrix} X_0^0 & X_1^0 & \cdots & X_{ν-1}^0 \\ X_0^1 & X_1^1 & \cdots & X_{ν-1}^1 \\ \vdots & \vdots & \ddots & \vdots \\ X_0^{m-1} & X_1^{m-1} & \cdots & X_{ν-1}^{m-1} \end{matrix} \right] \left[ \begin{matrix} Y_0 \\ Y_1 \\ \vdots \\ Y_{ν-1} \end{matrix} \right] = \left[ \begin{matrix} S_0 \\ S_1 \\ \vdots \\ S_{m-1} \end{matrix} \right]\).

    (At this point we still don’t know any of the \(X_i\) or \(Y_i\) values. However there is a clever multi-step procedure that will reveal them.)

  5. Define the error locator polynomial based on the unknown \(X_i\) variables:

    \(\begin{align} Λ(x) &= \displaystyle \prod_{i=0}^{ν-1} (1 - X_i x) \\ &= 1 + Λ_1 x + Λ_2 x^2 + \cdots + Λ_ν x^ν. \end{align}\)

    (In other words, after all the factors are multiplied and expanded, the polynomial \(Λ(x)\) has the sequence of \(ν+1\) monomial coefficients \((1, Λ_1, Λ_2, \ldots, Λ_ν)\).)

  6. By construction we know that for each \(0 ≤ i < ν\), we have:

    \(\begin{align} 0 &= Λ(X_i^{-1}) \\ &= 1 + Λ_1 X_i^{-1} + Λ_2 X_i^{-2} + \cdots + Λ_ν X_i^{-ν}. \end{align}\)

    The polynomial is zero at these points because the product contains the factor \((1 - X_i X_i^{-1}) = 1 - 1 = 0\).

  7. For \(0 ≤ i < ν\) and arbitrary \(j ∈ \mathbb{Z}\), let’s multiply all sides of the equation by \(Y_i X_i^{j+ν}\):

    \((Y_i X_i^{j+ν})0 = (Y_i X_i^{j+ν}) Λ(X_i^{-1}) = (Y_i X_i^{j+ν})(1 + Λ_1 X_i^{-1} + Λ_2 X_i^{-2} + \cdots + Λ_ν X_i^{-ν}). \\ 0 = Y_i X_i^{j+ν} Λ(X_i^{-1}) = Y_i X_i^{j+ν} + Λ_1 Y_i X_i^{j+ν-1} + Λ_2 Y_i X_i^{j+ν-2} + \cdots + Λ_ν Y_i X_i^j.\)

  8. Now sum this equation over our full range of \(i\) values:

    \(\begin{align} 0 &= \displaystyle \sum_{i=0}^{ν-1} Y_i X_i^{j+ν} Λ(X_i^{-1}) \\ &= \sum_{i=0}^{ν-1} \left( Y_i X_i^{j+ν} + Λ_1 Y_i X_i^{j+ν-1} + Λ_2 Y_i X_i^{j+ν-2} + \cdots + Λ_ν Y_i X_i^j \right) \\ & = \left(\sum_{i=0}^{ν-1} Y_i X_i^{j+ν}\right) + Λ_1 \left(\sum_{i=0}^{ν-1} Y_i X_i^{j+ν-1}\right) + Λ_2 \left(\sum_{i=0}^{ν-1} Y_i X_i^{j+ν-2}\right) + \cdots + Λ_ν \left(\sum_{i=0}^{ν-1} Y_i X_i^j\right) \\ & = S_{j+ν} + Λ_1 S_{j+ν-1} + Λ_2 S_{j+ν-2} + \cdots + Λ_ν S_j. \end{align}\)

  9. By rearranging terms, this implies the following, which is valid for \(0 ≤ j < ν\):

    \(Λ_ν S_j + Λ_{ν-1} S_{j+1} + \cdots + Λ_1 S_{j+ν-1} = -S_{j+ν}.\)

  10. Substituting all valid values of \(j\), we can form a system of \(ν\) linear equations:

    \(\left\{ \begin{align} Λ_ν S_0 + Λ_{ν-1} S_1 + \cdots + Λ_1 S_{ν-1} & = -S_ν. \\ Λ_ν S_1 + Λ_{ν-1} S_2 + \cdots + Λ_1 S_ν & = -S_{ν+1}. \\ \vdots & \\ Λ_ν S_{ν-1} + Λ_{ν-1} S_ν + \cdots + Λ_1 S_{2ν-2} & = -S_{2ν-1}. \end{align} \right.\)

  11. We can rewrite and factorize this system into matrices and vectors:

    \( \left[ \begin{matrix} S_0 & S_1 & \cdots & S_{ν-1} \\ S_1 & S_2 & \cdots & S_ν \\ \vdots & \vdots & \ddots & \vdots \\ S_{ν-1} & S_ν & \cdots & S_{2ν-2} \end{matrix} \right] \left[ \begin{matrix} Λ_ν \\ Λ_{ν-1} \\ \vdots \\ Λ_1 \end{matrix} \right] = \left[ \begin{matrix} -S_ν \\ -S_{ν+1} \\ \vdots \\ -S_{2ν-1} \end{matrix} \right] \).

  12. Now we finally have something that can be solved. Put the above coefficients into a \(ν × (ν+1)\) augmented matrix and run it through Gauss–Jordan elimination. If the system of linear equations is inconsistent, then there are too many errors in the codeword and they cannot be repaired with this choice of \(ν\). If \(ν\) is not at the maximum possible value, then it might be possible to fix the codeword by re-running the procedure with a larger \(ν\). Otherwise the codeword cannot be fixed in any way at all.

    We need to take special care if the linear system is consistent but under-determined. This happens when the codeword contains fewer than \(ν\) errors, because any of the locations where the error value is zero could be selected as a virtual “error”. Each different set of error location indexes \(\{I_i \: | \: 0 ≤ i < ν\}\) (unknown right now) would produce a different error locator polynomial \(Λ(x)\). For example if the set of true error locations is \(\{2, 5\}\) and we want to find exactly 3 error locations, then \(\{0, 2, 5\}\), \(\{2, 4, 5\}\), etc. are all equally valid solutions.

    The key insight is that we only need some particular solution to the linear system. We don’t care about parametrizing the space of all solutions or anything else. One approach is to treat all the dependent variables as zero. When we scan each row of the matrix in reduced row echelon form (RREF), we look at the column of the leftmost non-zero coefficient. If that column is the rightmost one (the augmented constant column), then the linear system is inconsistent. Otherwise the \(i\)-th column corresponds to the \(i\)-th variable in the linear system, which corresponds to the coefficient \(Λ_{ν-i}\). By setting the dependent variables (the columns without pivots) to zero, we don’t need to adjust the values of any other variables.

  13. Once we obtain the coefficients \(Λ_1, Λ_2, \ldots, Λ_ν\), we can evaluate the polynomial \(Λ(x)\) at any point we want. Plug in the values \(x = α^{-i}\) for \(0 ≤ i < n\) and check to see if \(Λ(α^{-i}) = 0\). For the solutions found, put the \(i\) values into the variables \(I_0\), \(I_1\), etc.

    Note that we may find any number of solutions, anywhere between \(0\) and \(ν\) inclusive; because \(Λ(x)\) is a non-zero polynomial of degree at most \(ν\), it has at most \(ν\) roots, so it is impossible to find more than \(ν\) solutions. If the number of solutions found is less than \(ν\), then we simply redefine \(ν\) to this lower number for the remaining part of the decoder algorithm, and delete the higher-numbered \(I_i\) variables. The solutions can be found and saved into the \(I_i\) variables in any order.

    It’s unnecessary to test values of \(i\) that are at least \(n\), because errors cannot occur outside of valid indexes into the codeword. It is still possible for \(Λ(α^{-i}) = 0\) to have solutions with \(i ≥ n\); these don’t identify any codeword position, so our search simply never sees them. Such spurious roots can arise benignly when the codeword has fewer than \(ν\) errors (an artifact of the arbitrary particular solution chosen for \(Λ(x)\)), or because the error-correcting capability has been exceeded, in which case a later step will normally report failure. (A code sketch of the error-locator steps appears after this list.)
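
Here is a minimal Python sketch of steps 10 through 13, continuing the illustrative \(\mathbb{Z}_{73}\) example (not the accompanying library code). The syndromes come from the corrupted codeword above, which contains only one error while \(ν = 2\), so this also exercises the under-determined case of step 12, where the solver returns the particular solution with the dependent variables set to zero:

    p, alpha, n, nu = 73, 5, 7, 2   # same illustrative parameters; nu = floor(m/2)

    def solve(aug):
        """Gauss-Jordan elimination on an augmented matrix over Z_p. Returns a
        particular solution (dependent variables set to 0), or None if inconsistent."""
        rows, cols = len(aug), len(aug[0])
        r = 0
        for c in range(cols - 1):
            piv = next((i for i in range(r, rows) if aug[i][c] != 0), None)
            if piv is None:
                continue                          # no pivot here: dependent column
            aug[r], aug[piv] = aug[piv], aug[r]
            inv = pow(aug[r][c], p - 2, p)        # modular inverse (p is prime)
            aug[r] = [v * inv % p for v in aug[r]]
            for i in range(rows):
                if i != r and aug[i][c] != 0:
                    f = aug[i][c]
                    aug[i] = [(u - f * v) % p for u, v in zip(aug[i], aug[r])]
            r += 1
        sol = [0] * (cols - 1)
        for row in aug:                           # read the solution off the RREF
            lead = next((c for c, v in enumerate(row) if v != 0), None)
            if lead == cols - 1:
                return None                       # 0 = non-zero: inconsistent
            if lead is not None:
                sol[lead] = row[-1]
        return sol

    S = [7, 29, 68, 21]          # syndromes of the corrupted codeword above
    aug = [[S[j + i] for i in range(nu)] + [(-S[j + nu]) % p] for j in range(nu)]
    coeffs = solve(aug)          # the vector (Lambda_nu, ..., Lambda_1)
    assert coeffs is not None, "too many errors"
    Lam = [1] + coeffs[::-1]     # Lambda(x) = 1 + Lambda_1 x + ... + Lambda_nu x^nu

    locations = []               # all i < n with Lambda(alpha^-i) = 0
    for i in range(n):
        x = pow(alpha, (p - 1 - i) % (p - 1), p)   # alpha^(-i)
        acc = 0
        for coef in reversed(Lam):
            acc = (acc * x + coef) % p
        if acc == 0:
            locations.append(i)
    print(locations)             # [2], so nu is redefined to 1 from here on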

Calculating error values

  1. At this point \(ν\) might have been redefined as a smaller number, and we know the error location indexes \(I_0\), \(I_1\), ..., \(I_{ν-1}\). Because of this, we know all the values \(X_i = α^{I_i}\) for \(0 ≤ i < ν\).

  2. Since we know the \(X_i\) values and \(S_i\) values, we can solve one of the earlier equations for the vector of values \(Y_i = e_{I_i}\) to obtain the error values/magnitudes:

    \(\left[ \begin{matrix} X_0^0 & X_1^0 & \cdots & X_{ν-1}^0 \\ X_0^1 & X_1^1 & \cdots & X_{ν-1}^1 \\ \vdots & \vdots & \ddots & \vdots \\ X_0^{m-1} & X_1^{m-1} & \cdots & X_{ν-1}^{m-1} \end{matrix} \right] \left[ \begin{matrix} Y_0 \\ Y_1 \\ \vdots \\ Y_{ν-1} \end{matrix} \right] = \left[ \begin{matrix} S_0 \\ S_1 \\ \vdots \\ S_{m-1} \end{matrix} \right]\).

    We simply run a Gauss–Jordan elimination algorithm here. If the linear system is consistent and independent, then a unique solution exists and we are going to finish successfully. Otherwise the system is inconsistent so it is impossible to satisfy all the syndrome constraints, or the system is dependent/under-determined so there is no unique solution (even though one is required).

  3. With the error locations \(I_i\) and error values \(Y_i\) known, we can attempt to fix the received codeword. Let the repaired codeword polynomial be \(r'(x) = r'_0 x^0 + r'_1 x^1 + \cdots + r'_{n-1} x^{n-1}\). For \(0 ≤ i < ν\), we set the coefficient \(r'_{I_i} = r_{I_i} \!-\! Y_i = r_{I_i} \!-\! e_{I_i}\). For each index \(i\) where \(0 ≤ i < n\) and \(i\) is not present in the set \(\{I_j \: | \: 0 ≤ j < ν\}\), we simply copy the value \(r'_i = r_i\) (i.e. don’t change the codeword values at locations not identified as errors). A code sketch of this solve-and-repair step appears after this list.

  4. By design, the repaired codeword polynomial \(r'(x)\) will have all-zero syndrome values, because otherwise the matrix-solving process would have identified that the linear system is inconsistent and has no solution. We can still recompute the syndromes as a sanity check, but at this point we’re done.

    This decoded codeword is the best guess based on the received codeword. If the number of errors introduced into the codeword is at most \(ν\), then the decoding process is guaranteed to succeed and yield the original message. Otherwise, for a received codeword with too many errors, all outcomes are possible: the decoding process might explicitly indicate a failure (most likely), decode a valid message that mismatches the original message (occasionally), or recover the correct message (very unlikely).
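
To finish the running \(\mathbb{Z}_{73}\) example, here is a minimal Python sketch of the error-value and repair steps (not the accompanying library code). The location search above reduced \(ν\) to \(1\), so the \(m × ν\) system collapses to a single unknown; for a general \(ν\), the Gauss–Jordan solver from the previous sketch would be run on the \(m × (ν + 1)\) augmented matrix instead:

    p, alpha, m = 73, 5, 4                    # same illustrative parameters
    received = [31, 64, 0, 71, 10, 20, 30]    # the corrupted codeword from above
    S = [7, 29, 68, 21]                       # its syndromes
    I = [2]                                   # error locations found above (nu = 1)
    X = [pow(alpha, idx, p) for idx in I]

    # With nu = 1, row i of the system reads Y_0 * X_0^i = S_i: take Y_0 from
    # row 0 (where X_0^0 = 1) and check the remaining rows for consistency.
    Y0 = S[0]
    assert all(Y0 * pow(X[0], i, p) % p == S[i] for i in range(m)), "inconsistent"

    repaired = received[:]
    repaired[I[0]] = (repaired[I[0]] - Y0) % p   # r'_I = r_I - Y
    print(repaired)                              # [31, 64, 66, 71, 10, 20, 30]

The output matches the codeword produced by the encoder sketch, so the single injected error has been repaired.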


Notes

Time complexity

The encoder is short and simple, and runs in \(Θ(mk)\) time. It is unlikely that the encoder can be significantly improved in conciseness or speed.

The PGZ decoder algorithm described here runs in \(Θ(m^3 + mk)\) time, assuming we choose the maximum error-correcting capability of \(ν = \lfloor m/2 \rfloor\). This cubic runtime is not ideal and can be improved to quadratic with other algorithms. The Berlekamp–Massey algorithm can find the error locator polynomial in \(Θ(m^2)\) time, and the Forney algorithm can find the error values also in \(Θ(m^2)\) time.

Alternatives to a generator

The algorithm described here uses powers of the generator \(α\) starting at \(0\), i.e. \((α^0, α^1, \ldots, α^{m-1})\). Some variants of Reed–Solomon ECC use powers starting at \(1\), i.e. \((α^1, α^2, \ldots, α^m)\).

In fact, the algorithm doesn’t seem to need powers or a generator at all. It appears to work as long as we can choose \(m\) unique non-zero values in the field \(F\). For one, this means we can apply RS ECC in infinite fields such as the rational numbers \(\mathbb{Q}\) (for pedagogical but not practical purposes), because infinite fields never have a multiplicative generator.

Although I haven’t modified the math and code to show that it works, it should be possible to adjust them to accommodate a set of unique values instead of generator powers. Suppose we have a sequence of \(m\) unique non-zero values named \((α_0, α_1, \ldots, α_{m-1})\). Then going through the mathematical derivation, we would replace every instance of \(α^i\) with \(α_i\) and it should probably all work out.

Source code

Java
Python

Note: The field and matrix code originally comes from my Gauss–Jordan elimination over any field page. The code has been modified to delete unnecessary classes and methods, and add new classes.
