# Generalized Minimum Distance Decoding with Arbitrary Error/Erasure Tradeoff

Creation Date: Oct 28, 2011

Published In: Dec 2011

Paper Type: Dissertation

Long concatenated codes can be decoded with algorithms for their comparatively short inner and outer codes. This is a tangible advantage, since the complexity of most decoders grows at least linearly with the code length. The most striking property of concatenation is that this comes at no cost: long concatenated codes can correct the same number of errors as long non-concatenated codes with the same distance. This was proven in 1966 by Forney, together with a concrete decoding algorithm that executes the outer decoder $Z_0$ times. This algorithm is called Generalized Minimum Distance (GMD) decoding, and $Z_0$ is fixed by the properties of the outer code.
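The multi-trial structure of classical GMD decoding can be illustrated by a short sketch. The interface is hypothetical: `outer_decode`, the reliability values, and the toy repetition outer code below are assumptions for illustration, not the decoders treated in the thesis.

```python
def gmd_decode(received, reliabilities, outer_decode, d):
    """Classical GMD: run the outer error/erasure decoder Z_0 times,
    erasing the 2*j least reliable inner-decoding results in trial j."""
    n = len(received)
    # Positions ordered from least to most reliable inner decision.
    order = sorted(range(n), key=lambda i: reliabilities[i])
    z0 = (d + 1) // 2  # Z_0 is fixed by the outer minimum distance d
    candidates = []
    for j in range(z0):
        erased = set(order[:2 * j])
        word = [None if i in erased else received[i] for i in range(n)]
        cw = outer_decode(word)  # classical lambda = 2 error/erasure decoder
        if cw is not None:
            candidates.append(cw)
    return candidates

def toy_outer_decode(word):
    """Toy stand-in for the outer decoder: length-5 repetition code
    (d = 5), majority vote over non-erased symbols; it succeeds
    exactly when 2*errors + erasures < d."""
    known = [s for s in word if s is not None]
    if not known:
        return None
    best = max(set(known), key=known.count)
    errors = sum(1 for s in known if s != best)
    erasures = len(word) - len(known)
    if 2 * errors + erasures < len(word):
        return [best] * len(word)
    return None
```

With one unreliable, erroneous symbol, e.g. `gmd_decode([1, 1, 0, 1, 1], [0.9, 0.9, 0.1, 0.9, 0.9], toy_outer_decode, 5)`, every one of the $Z_0=3$ trials recovers the transmitted word `[1, 1, 1, 1, 1]`.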
In this dissertation, we investigate two main questions: What can we achieve if the outer decoder may be executed only $Z<Z_0$ times, and to what extent can the expected loss of error-correcting capability be compensated by using advanced outer decoders? A decoder can be rated by its error/erasure tradeoff $\lambda$, a figure that specifies the penalty of knowing neither the location nor the value of an error, in contrast to knowing its location but not its value. Until 1997, only decoders with $\lambda=2$ were known for RS codes. Since then, sparked by the discovery of the Sudan algorithm, the coding theory community has devised a plethora of algorithms with improved tradeoff $\lambda\in(1,2)$. We investigate such advanced decoders and generalize results of Forney and other authors, which are restricted to $\lambda=2$, to arbitrary $\lambda\in(1,2]$. To the best of our knowledge, we are the first to present this generalization.
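A rough way to see what an improved tradeoff buys is the following simplification (an assumption for illustration, not a formula from the thesis): model a decoder with tradeoff $\lambda$ as succeeding on $\tau$ errors and $\sigma$ erasures whenever $\lambda\tau + \sigma < d$.

```python
def correctable(tau, sigma, d, lam):
    """Simplified success condition for a decoder with error/erasure
    tradeoff lam facing tau errors and sigma erasures (illustrative
    model only): lam * tau + sigma < d."""
    return lam * tau + sigma < d

# With outer distance d = 5, a classical decoder (lam = 2) fails on
# 3 errors, while an advanced decoder with lam = 1.5 still succeeds:
# correctable(3, 0, 5, 2.0) is False, correctable(3, 0, 5, 1.5) is True.
```

In this model an erasure always costs 1, while an unlocated error costs $\lambda$, which makes the penalty described above explicit.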
In the first part of the thesis, we introduce basic algebraic codes within the framework of code concatenation. We list the classical decoding algorithms with $\lambda=2$ as well as the most important advanced ones with $\lambda\in(1,2)$. To allow for a general description of GMD decoding covering all considered algorithms, we express their error-correcting capabilities by a unified function, the Generalized Decoding Radius.

The second part considers the decoding radius of GMD decoding, i.e., the maximum number of errors that are correctable with guarantee. We derive two optimal variants of GMD decoding for arbitrary $\lambda\in(1,2]$ and any number $Z\leq Z_0$ of outer decoding trials, and we analyze their properties. Depending on the properties of the outer and inner codes, we can always state which variant is superior to the other.

In the third part, we shift our attention to the probability of a decoding success. We show how this probability can be maximized for any $\lambda\in(1,2]$ and any $Z\leq Z_0$, and we express it analytically.