Claude Shannon's "A Mathematical Theory of Communication" provided the basis for the digital communication revolution. As part of that ground-breaking work, he identified the greatest rate (capacity) at which data can be communicated over a noisy channel. He also provided an algorithm for achieving it, based on random codes and code-centric Maximum Likelihood (ML) decoding, where the channel output is compared to every possible codeword and the most likely candidate is selected based on the observed output. Despite its mathematical elegance, his algorithm is impractical from a complexity perspective, and much work in the intervening 70 years has focused on co-designing codes and decoders that enable reliable communication at high rates.
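For concreteness, a code-centric ML decoder for a binary symmetric channel can be sketched as an exhaustive minimum-Hamming-distance search. The following Python fragment is our own illustration of that brute-force approach, with a hypothetical function name, not Shannon's construction:

    # Illustrative sketch, assuming a binary symmetric channel with
    # crossover probability below 1/2, where ML decoding reduces to
    # minimum Hamming distance; ml_decode_bsc is a hypothetical name.
    def ml_decode_bsc(received, codebook):
        # Compare the received word against every codeword: the cost is
        # proportional to the codebook size, exponential in block length.
        return min(codebook,
                   key=lambda c: sum(ci != ri for ci, ri in zip(c, received)))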
We introduce a new algorithm for noise-centric, rather than code-centric, ML decoding. The algorithm is based on the principle that the receiver rank orders noise sequences from most likely to least likely and guesses accordingly: subtracting each guessed noise sequence from the received signal in that order, the first guess that yields an element of the code-book is the ML decoding. For common additive noise channels, we establish that the algorithm is capacity-achieving for uniformly selected code-books, providing an intuitive alternative approach to the channel coding theorem. We illustrate the practical usefulness of our approach and the fact that it renders the decoding of random codes feasible. For the sorts of channels generally used in commercial applications, the complexity of this decoding is quite low, unlike that of code-centric ML.
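For intuition, here is a minimal Python sketch of the noise-guessing principle, again assuming a binary symmetric channel so that rank-ordering noise sequences amounts to enumerating them by increasing Hamming weight; the function name and the example code-book are illustrative only:

    # Illustrative sketch of noise-centric decoding for a binary symmetric
    # channel with crossover probability below 1/2, where noise sequences
    # are most likely in order of increasing Hamming weight;
    # guess_decode_bsc is a hypothetical name, not from the work itself.
    import itertools

    def guess_decode_bsc(received, codebook, n):
        codeset = set(codebook)            # membership test, not a search
        for weight in range(n + 1):        # weight 0 ("no noise") first
            for flips in itertools.combinations(range(n), weight):
                candidate = list(received)
                for i in flips:
                    candidate[i] ^= 1      # subtract (XOR) the guessed noise
                if tuple(candidate) in codeset:
                    return tuple(candidate)  # first hit is the ML decoding
        return None                        # unreachable for a nonempty code-book

    # Example: a length-3 repetition code recovers (0, 0, 0) from one flip.
    # guess_decode_bsc((0, 1, 0), {(0, 0, 0), (1, 1, 1)}, 3) -> (0, 0, 0)

Note that the decoder only queries code-book membership rather than searching over the code-book, which is what makes the complexity depend on the noise statistics rather than on the code size.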
This work is joint with Ken Duffy (Maynooth University).