This paper introduces the "normalized transport," a method that uses self-avoiding codes to map bijectively between stationary ergodic measures on sequences over different alphabets (such as letters and words) while preserving stationarity and ergodicity.
Entropy and information can be generalized beyond Shannon's original framework by considering arbitrary loss functions to quantify uncertainty reduction, rather than just message length. This provides a unified perspective on various information-theoretic quantities.
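One standard way to make this precise (the paper's own framework may differ in details) is to define the generalized entropy of a distribution P as the minimal expected loss over actions, and information as the resulting reduction in expected loss:
\[
H_\ell(P) = \inf_{a \in \mathcal{A}} \mathbb{E}_{X \sim P}\bigl[\ell(X, a)\bigr],
\qquad
I_\ell(X; Y) = H_\ell(P_X) - \mathbb{E}_{Y}\bigl[H_\ell(P_{X \mid Y})\bigr],
\]
with logarithmic loss \(\ell(x, Q) = -\log Q(x)\) recovering the Shannon entropy and mutual information.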
This article proposes a two-parameter generalization of the Tsallis entropy, called the generalized Tsallis entropy, and derives its fundamental information-theoretic properties, including pseudo-additivity, sub-additivity, joint convexity, and information monotonicity.
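For reference, the one-parameter Tsallis entropy that the paper generalizes is
\[
S_q(P) = \frac{1}{q-1}\Bigl(1 - \sum_{x} p(x)^{q}\Bigr), \qquad q > 0,\; q \neq 1,
\]
which recovers the Shannon entropy in the limit \(q \to 1\); the paper's two-parameter family and its pseudo-additivity, sub-additivity, convexity, and monotonicity properties are not restated here.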
Rényi divergence and Sibson mutual information are proposed as exact α-leakage measures, quantifying the maximum and average information gain an adversary can obtain about sensitive data through a privacy-preserving channel.
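For context, the underlying quantities are standard (the paper's operational α-leakage definitions are not restated here):
\[
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1}\log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha},
\qquad
I_\alpha^{\mathrm{S}}(X; Y) = \min_{Q_Y} D_\alpha\bigl(P_{XY} \,\|\, P_X \times Q_Y\bigr),
\]
which recover the Kullback–Leibler divergence and the Shannon mutual information, respectively, as \(\alpha \to 1\).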
The paper derives the generalized mutual information (GMI) of ordered reliability bits guessing random additive noise decoding (ORBGRAND) for memoryless binary-input channels with general output conditional probability distributions. The analysis sheds light on the gap between the ORBGRAND achievable rate and the channel mutual information. As an application, the paper studies the ORBGRAND achievable rate for bit-interleaved coded modulation (BICM), showing that this gap is typically small and suggesting that ORBGRAND is feasible for high-order coded modulation schemes.
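For context, the GMI is the standard achievable rate of mismatched decoding with a metric \(q(x, y)\) under input distribution \(P_X\); the ORBGRAND-specific metric analyzed in the paper is not reproduced here:
\[
I_{\mathrm{GMI}} = \sup_{s > 0}\, \mathbb{E}\!\left[\log \frac{q(X, Y)^{s}}{\sum_{x'} P_X(x')\, q(x', Y)^{s}}\right],
\]
which never exceeds the channel mutual information and coincides with it when the decoding metric is matched to the channel law.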
The second-order randomized identification capacity of the Additive White Gaussian Noise Channel (AWGNC) has the same form as the second-order transmission capacity, with the only difference being that the maximum number of messages in randomized identification scales doubly exponentially in the blocklength.
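Concretely, writing C and V for the AWGN capacity and dispersion and \(Q^{-1}\) for the inverse Gaussian tail function, the two results share the same normal-approximation form (remainder terms omitted; the transmission side is the classical expansion, the identification side is the paper's claim):
\[
\log M^*(n, \varepsilon) \approx nC + \sqrt{nV}\, Q^{-1}(\varepsilon),
\qquad
\log\log N^*(n, \varepsilon) \approx nC + \sqrt{nV}\, Q^{-1}(\varepsilon),
\]
where \(M^*\) and \(N^*\) denote the maximum numbers of transmissible and identifiable messages at blocklength \(n\) and error tolerance \(\varepsilon\).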
For a wide range of channels and pairwise-independent code ensembles, expurgating an arbitrarily small fraction of codewords from a randomly selected code results in a code that attains the expurgated exponent with high probability.
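For reference, the expurgated exponent in question is Gallager's classical quantity, stated here in its standard form for a memoryless channel W (the paper's ensemble assumptions are not restated):
\[
E_{\mathrm{ex}}(R) = \max_{\rho \ge 1}\, \max_{Q}\, \bigl[E_x(\rho, Q) - \rho R\bigr],
\qquad
E_x(\rho, Q) = -\rho \log \sum_{x, x'} Q(x)\, Q(x') \left(\sum_{y} \sqrt{W(y \mid x)\, W(y \mid x')}\,\right)^{1/\rho}.
\]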
The authors develop an analytical method to estimate the average differential entropy of a Gaussian mixture distribution, where the component means are i.i.d. Gaussian vectors. They obtain a series expansion in the ratio of the variance of the component means to that of the shared covariance matrix, providing an approximation with a quantifiable error bound.
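The analytical series expansion is the paper's contribution and is not reproduced here; purely as a numerical point of reference, a minimal Monte Carlo sketch of the quantity being approximated (the differential entropy of a Gaussian mixture with i.i.d. Gaussian component means; the dimensions, weights, and variances below are illustrative assumptions) could look as follows.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Monte Carlo estimate of the differential entropy h(X) = -E[log p(X)] of a
# Gaussian mixture whose K component means are drawn i.i.d. from N(0, sigma_mu^2 I)
# and whose components share the covariance sigma^2 I.  Illustrative sketch only:
# the paper derives an analytical series expansion rather than sampling.
rng = np.random.default_rng(0)
d, K = 2, 8                      # dimension and number of mixture components (assumed)
sigma_mu, sigma = 2.0, 1.0       # std. dev. of the means vs. the shared component noise
n_samples = 50_000

means = rng.normal(scale=sigma_mu, size=(K, d))   # i.i.d. Gaussian component means
weights = np.full(K, 1.0 / K)                     # uniform mixture weights (assumed)

# Sample from the mixture: pick a component, then add N(0, sigma^2 I) noise.
labels = rng.choice(K, size=n_samples, p=weights)
x = means[labels] + rng.normal(scale=sigma, size=(n_samples, d))

# Evaluate the mixture density at each sample and average -log p(x).
density = np.zeros(n_samples)
for k in range(K):
    density += weights[k] * multivariate_normal.pdf(x, mean=means[k], cov=sigma**2 * np.eye(d))
h_estimate = -np.mean(np.log(density))
print(f"Monte Carlo differential entropy estimate: {h_estimate:.3f} nats")
```

Re-running with fresh draws of `means` and averaging the estimates gives a sampling counterpart of the average differential entropy that the paper approximates analytically.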
A prefix code is optimal if and only if it is complete and strongly monotone.
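As a reminder of one half of this characterization (the paper's notion of strong monotonicity is not restated here), completeness of a D-ary prefix code with codeword lengths \(\ell_1, \dots, \ell_m\) amounts to saturating the Kraft inequality:
\[
\sum_{i=1}^{m} D^{-\ell_i} = 1,
\]
i.e., no additional codeword can be appended without violating the prefix property.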
The paper investigates the relationship between two practical metrics of high-order interdependencies: the redundancy-synergy index (RSI), which captures directed interdependencies, and the O-information, which captures undirected interdependencies. The results reveal tight links between these two quantities and provide interpretations in terms of likelihood ratios and information geometry.
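For reference, the O-information has the standard form below; a common form of the RSI with respect to a target variable \(Y\) is also shown, though the paper's exact convention may differ in sign or conditioning:
\[
\Omega(X_1, \dots, X_n) = (n-2)\, H(X_1, \dots, X_n) + \sum_{j=1}^{n} \Bigl[ H(X_j) - H(X_1, \dots, X_{j-1}, X_{j+1}, \dots, X_n) \Bigr],
\]
\[
\mathrm{RSI}(X_1, \dots, X_n; Y) = I(X_1, \dots, X_n; Y) - \sum_{j=1}^{n} I(X_j; Y),
\]
with positive O-information indicating redundancy-dominated and negative values synergy-dominated interdependencies.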