Core Concepts
The paper derives the generalized mutual information (GMI) of ordered reliability bits guessing random additive noise decoding (ORBGRAND) for memoryless binary-input channels with general output conditional probability distributions. The analysis provides insight into the gap between the ORBGRAND achievable rate and the channel mutual information. As an application, the paper studies the ORBGRAND achievable rate for bit-interleaved coded modulation (BICM), showing that the gap is typically small, which suggests that ORBGRAND is feasible for high-order coded modulation schemes.
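As a concrete illustration of the decoder being analyzed, below is a minimal Python sketch of ORBGRAND's core schedule: rank the bits by reliability, then query candidate error patterns in increasing logistic weight (the sum of the reliability ranks of the flipped positions). The function names and the codebook-membership oracle `is_codeword` are illustrative assumptions; this sketches the standard ORBGRAND query order, not the paper's implementation.

```python
import numpy as np

def distinct_partitions(w, max_part):
    """Yield all partitions of w into distinct parts, each <= max_part."""
    if w == 0:
        yield []
        return
    for first in range(min(w, max_part), 0, -1):
        for rest in distinct_partitions(w - first, first - 1):
            yield [first] + rest

def orbgrand_decode(llr, is_codeword, max_logistic_weight=None):
    """ORBGRAND sketch: flip hard decisions according to error patterns
    queried in increasing logistic weight; return the first candidate that
    passes the codebook-membership test.

    llr         : array of channel LLRs, one per code bit
    is_codeword : membership oracle, e.g. lambda c: not ((H @ c) % 2).any()
    """
    llr = np.asarray(llr, dtype=float)
    n = len(llr)
    hard = (llr < 0).astype(int)          # hard decisions: LLR < 0 -> bit 1
    order = np.argsort(np.abs(llr))       # order[0] = least reliable position
    if max_logistic_weight is None:
        max_logistic_weight = n * (n + 1) // 2   # largest possible weight
    for w in range(max_logistic_weight + 1):
        for ranks in distinct_partitions(w, n):  # ranks are 1-based
            cand = hard.copy()
            cand[order[np.asarray(ranks, dtype=int) - 1]] ^= 1  # flip ranked bits
            if is_codeword(cand):
                return cand
    return hard                                  # fall back to the hard decision
```

Note that w = 0 yields the empty pattern, so an error-free hard decision is returned after one query; the number of patterns grows rapidly with w, which is why practical implementations cap the logistic weight.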
Summary
The paper analyzes the achievable rate of ordered reliability bits guessing random additive noise decoding (ORBGRAND) for general memoryless binary-input bit channels.
Key highlights:
- The paper derives the generalized mutual information (GMI) of ORBGRAND for memoryless binary-input channels with general output conditional probability distributions.
- The analysis provides insight into the gap between the ORBGRAND achievable rate and the channel mutual information, showing that the gap is governed by how close to linear the cumulative distribution function (CDF) of the magnitude of the channel log-likelihood ratio (LLR) is (a sketch illustrating this criterion follows the list).
- As an application, the paper studies the ORBGRAND achievable rate for bit-interleaved coded modulation (BICM) over AWGN and Rayleigh fading channels with various modulation orders and labelings.
- The numerical results indicate that the gap between the ORBGRAND achievable rate and the channel mutual information is typically small for BICM, suggesting the feasibility of ORBGRAND for high-order coded modulation schemes.
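To make the linearity criterion above concrete, here is a small Python sketch (mine, not the paper's code) that estimates the empirical CDF Ψ(t) of |LLR| for unit-energy BPSK over a real AWGN channel, where the exact LLR is 2y/σ², and measures its deviation from the best straight line through the origin. The deviation metric and all names are illustrative assumptions.

```python
import numpy as np

def llr_magnitude_cdf(snr_db, num=200_000, seed=0):
    """Empirical CDF Psi(t) of |LLR| for unit-energy BPSK over real AWGN.
    The exact channel LLR is 2*y/sigma^2 with y = x + n, n ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    sigma2 = 10 ** (-snr_db / 10)        # noise variance at the given SNR
    x = rng.choice([-1.0, 1.0], size=num)
    y = x + rng.normal(scale=np.sqrt(sigma2), size=num)
    t = np.sort(np.abs(2.0 * y / sigma2))
    psi = np.arange(1, num + 1) / num    # empirical CDF evaluated at sorted t
    return t, psi

# Crude linearity check: least-squares slope of a line through the origin,
# then the worst-case deviation of Psi(t) from that line. A small deviation
# indicates a near-linear Psi, i.e. a small ORBGRAND rate gap.
t, psi = llr_magnitude_cdf(snr_db=0.0)
slope = np.sum(t * psi) / np.sum(t * t)
print("max |Psi(t) - slope * t| =", np.max(np.abs(psi - slope * t)))
```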
Statistics
The paper provides the following key figures and metrics:
- Plots of the CDF Ψ(t) of the magnitude of the channel LLR under AWGN and Rayleigh fading channels for different SNR values (Fig. 1).
- Plots of the ORBGRAND achievable rate and the channel mutual information under QPSK, 8PSK, and 16QAM for AWGN and Rayleigh fading channels with Gray and set-partitioning labelings (Figs. 5-8); a Monte Carlo sketch of the BICM rate follows below.
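The BICM rate curves can be reproduced in spirit with a Monte Carlo estimate like the sketch below. It computes exact per-bit LLRs for a Gray-labeled 16QAM constellation over complex AWGN and sums the bit-channel rates 1 − E[log2(1 + e^{−L})]. The labeling, normalization, and function names are my assumptions, and the sketch estimates only the BICM mutual information, not the ORBGRAND GMI, which requires the paper's expression.

```python
import numpy as np
from scipy.special import logsumexp

def gray_16qam():
    """16QAM with Gray labeling: bits (0,1) pick the I rail, (2,3) the Q rail.
    Returns unit-average-energy points and their (16, 4) bit labels."""
    pam = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}
    pts, labs = [], []
    for bi, ai in pam.items():
        for bq, aq in pam.items():
            pts.append(ai + 1j * aq)
            labs.append(bi + bq)
    return np.array(pts) / np.sqrt(10.0), np.array(labs)

def bicm_rate_16qam(snr_db, num=200_000, seed=0):
    """Monte Carlo estimate of the BICM rate (sum of bit-channel rates)
    for Gray-labeled 16QAM over complex AWGN, using exact bit LLRs."""
    rng = np.random.default_rng(seed)
    pts, labs = gray_16qam()
    sigma2 = 10 ** (-snr_db / 10)                 # N0 at unit symbol energy
    sym = rng.integers(0, len(pts), size=num)
    noise = rng.normal(scale=np.sqrt(sigma2 / 2), size=(num, 2))
    y = pts[sym] + noise[:, 0] + 1j * noise[:, 1]
    logp = -np.abs(y[:, None] - pts[None, :]) ** 2 / sigma2  # log p(y|x) + const
    rate = 0.0
    for i in range(labs.shape[1]):
        ones = labs[:, i] == 1
        llr = logsumexp(logp[:, ~ones], axis=1) - logsumexp(logp[:, ones], axis=1)
        aligned = np.where(labs[sym, i] == 0, llr, -llr)     # sign toward tx bit
        rate += 1.0 - np.mean(np.logaddexp(0.0, -aligned)) / np.log(2.0)
    return rate
```

Sweeping `snr_db` traces a curve that approaches 4 bits per symbol at high SNR; the same skeleton extends to QPSK or 8PSK and to other labelings by swapping the constellation and label table.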