Core Concept
A novel multi-link information bottleneck (ML-IB) scheme is proposed to design collaborative AI models between multiple devices and the mobile network, with a focus on efficient task-relevant data transmission.
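For orientation, here is a minimal LaTeX sketch of the single-link information bottleneck objective that ML-IB builds on, together with an assumed multi-link extension; the per-link sum, the weighting β, and the notation are illustrative assumptions, not the paper's exact formulation.

```latex
% Classical (single-link) information bottleneck: keep the feature Z informative
% about the task label Y while compressing the observation X.
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

% Assumed multi-link extension (illustrative only): sum the compression terms over
% the K device-network links and trade them against the joint task relevance.
\min_{\{p(z_k \mid x_k)\}_{k=1}^{K}} \; \sum_{k=1}^{K} I(X_k; Z_k) - \beta \, I(Z_1, \dots, Z_K; Y)
```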
Summary
The paper introduces a system model for collaborative intelligence between multiple devices and the mobile network, where the devices extract and transmit task-relevant features to the network side for inference.
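To make the device/network split concrete, below is a minimal PyTorch sketch of such a system model, assuming per-device encoders whose features are concatenated and classified on the network side; all module names, layer sizes, and the fusion-by-concatenation choice are hypothetical, not the paper's architecture.

```python
# Minimal sketch of the device/network split: device-side feature extraction,
# network-side fusion and inference. Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class DeviceEncoder(nn.Module):
    """Device-side model: extracts a compact, task-relevant feature vector."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class NetworkInference(nn.Module):
    """Network-side model: fuses features from all devices and predicts the task output."""
    def __init__(self, feat_dim: int, num_devices: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(feat_dim * num_devices, 128), nn.ReLU(),
                                  nn.Linear(128, num_classes))

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        return self.head(torch.cat(feats, dim=-1))

# Usage: two devices each observe a 32-dim signal and transmit 8-dim features.
encoders = [DeviceEncoder(32, 8) for _ in range(2)]
server = NetworkInference(feat_dim=8, num_devices=2, num_classes=10)
xs = [torch.randn(4, 32) for _ in range(2)]            # one batch per device
logits = server([enc(x) for enc, x in zip(encoders, xs)])
print(logits.shape)                                     # torch.Size([4, 10])
```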
The key highlights are:
- A novel performance metric CML-IB is proposed, which can evaluate both the accuracy of the AI task and the transmission overhead across multiple wireless links.
- A quantization scheme is designed to ensure compatibility with digital communication systems, with adjustable parameters such as bit depth, breakpoints, and amplitudes (a toy quantizer is sketched after this list).
- To make the CML-IB metric computable, a variational upper bound is derived and further approximated using the Log-Sum Inequality, leading to the CQML-IB metric (the log-sum inequality is stated after this list).
- Based on CQML-IB, the Quantized Multi-Link Information Bottleneck (QML-IB) algorithm is developed to generate the collaborative AI models for both the devices and the network side.
- Numerical experiments demonstrate the superior performance of the QML-IB algorithm over the state-of-the-art method in terms of AI task accuracy under various communication constraints.
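As referenced in the quantization highlight above, here is a toy Python sketch of a non-uniform scalar quantizer parameterized by bit depth, breakpoints, and amplitudes; the concrete values and the hard-assignment implementation are assumptions for illustration, not the paper's quantizer.

```python
# Non-uniform scalar quantizer with the three knobs mentioned above:
# bit depth, breakpoints (interval thresholds), and output amplitudes.
import numpy as np

def quantize(z: np.ndarray, breakpoints: np.ndarray, amplitudes: np.ndarray) -> np.ndarray:
    """Map each feature value to the amplitude of the interval it falls into.

    breakpoints: sorted thresholds, length 2**bits - 1
    amplitudes:  representative output levels, length 2**bits
    """
    indices = np.searchsorted(breakpoints, z)   # interval index for each value
    return amplitudes[indices]

# Example: 2-bit quantization (4 levels) of features in [-1, 1].
bits = 2
breakpoints = np.array([-0.5, 0.0, 0.5])           # 2**bits - 1 thresholds
amplitudes = np.array([-0.75, -0.25, 0.25, 0.75])  # 2**bits reconstruction levels
z = np.array([-0.9, -0.1, 0.3, 0.8])
print(quantize(z, breakpoints, amplitudes))        # [-0.75 -0.25  0.25  0.75]
```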
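For reference, the standard log-sum inequality invoked in the approximation from the variational upper bound to CQML-IB; the statement below is the textbook form and does not reproduce the paper's derivation.

```latex
% Log-sum inequality: for non-negative a_1,...,a_n and b_1,...,b_n,
\sum_{i=1}^{n} a_i \log \frac{a_i}{b_i}
  \;\ge\;
\left( \sum_{i=1}^{n} a_i \right) \log \frac{\sum_{i=1}^{n} a_i}{\sum_{i=1}^{n} b_i}
```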
Statistics
The paper reports the following key figures:
The PSNR (peak signal-to-noise ratio) is set on the order of 10 dB.
The communication latency is kept below 6 ms.
The number of quantization bits is varied from 1 to 4.
Quotes
"A hot viewpoint recently is that only the AI task-related data should be transmitted in future mobile networks."
"One of the core reasons is the lack of a proper performance metric for the latter scenario, which should effectively evaluate both the AI task performance as well as the communication costs across multiple device-network links."
"Remarkably, with only a 4-bit quantization of our framework, the error is already very close to the version without quantization. This clearly demonstrates that the performance degradation introduced by our quantization is negligible."