Good error-correcting codes based on very sparse matrices

IEEE Transactions on Information Theory, 1999 · 3,681 citations

Abstract

We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" codes (MacKay and Neal, 1995) are a recent invention, while "Gallager codes" were first investigated in 1962 but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
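The sum-product decoder mentioned in the abstract passes probability messages along the edges of the sparse parity-check matrix H. The sketch below is a minimal, hedged illustration of that idea (not the paper's implementation): log-likelihood-ratio belief propagation over a binary-symmetric channel, here demonstrated on a small (7,4) Hamming parity-check matrix rather than a large random sparse matrix.

```python
import numpy as np

def decode_bsc(H, y, p, max_iters=20):
    """Sum-product (belief propagation) decoding of a received word y
    from a binary-symmetric channel with crossover probability p.
    Returns (hard decision, True if all parity checks are satisfied)."""
    m, n = H.shape
    # Channel log-likelihood ratios: positive favours bit 0.
    Lch = np.log((1 - p) / p) * (1 - 2 * y.astype(float))
    E = np.zeros((m, n))  # check-to-variable messages
    x = y.copy()
    for _ in range(max_iters):
        # Variable-to-check: total belief minus the edge's own incoming message.
        M = (Lch + E.sum(axis=0)) - E
        # Check-to-variable update in the tanh domain, over edges of H only.
        T = np.where(H == 1, np.tanh(M / 2.0), 1.0)
        prod = T.prod(axis=1, keepdims=True)
        ext = prod / T  # leave-one-out product for each edge
        E = np.where(H == 1,
                     2 * np.arctanh(np.clip(ext, -0.999999, 0.999999)),
                     0.0)
        # Tentative hard decision; stop early once the syndrome is zero.
        L = Lch + E.sum(axis=0)
        x = (L < 0).astype(int)
        if not (H @ x % 2).any():
            return x, True
    return x, False
```

For an LDPC code the same loop runs over a much larger, very sparse H; the cost per iteration scales with the number of 1s in H, which is what makes the codes in the paper practical to decode.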

Keywords

Turbo code · Noisy-channel coding theorem · BCJR algorithm · Concatenated error correction code · Serial concatenated convolutional codes · Algorithm · Low-density parity-check code · Linear code · Binary symmetric channel · Decoding methods · Convolutional code · Block code · Mathematics · Computer science · Discrete mathematics

Publication Info

Year: 1999
Type: article
Volume: 45
Issue: 2
Pages: 399–431
Citations: 3,681
Access: Closed

Citation Metrics

3,681 citations (OpenAlex)

Cite This

David MacKay (1999). Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory, 45(2), 399–431. https://doi.org/10.1109/18.748992

Identifiers

DOI
10.1109/18.748992