Core Concepts
A novel interpretable and efficient architecture for medical time series processing that achieves performance comparable to state-of-the-art deep neural networks with orders of magnitude fewer parameters.
Abstract
The authors propose a novel architecture called Sparse Mixture of Learned Kernels (SMoLK) for medical time series processing tasks. The key highlights are:
SMoLK learns a set of lightweight, flexible kernels to construct a single-layer neural network, providing interpretability, efficiency, and robustness.
The authors introduce novel parameter reduction techniques, such as weight absorption and correlated kernel pruning, to further reduce the size of the network.
On the task of photoplethysmography (PPG) artifact detection, SMoLK achieves greater than 99% of the performance of state-of-the-art methods while using dramatically fewer parameters (2% of the parameters of Segade, and about half of the parameters of Tiny-PPG).
On single-lead atrial fibrillation detection, SMoLK matches the performance of a 1D-residual convolutional network at less than 1% of the parameter count, while exhibiting considerably better performance in the low-data regime.
The interpretability of SMoLK allows for direct inspection of the learned kernels and their contributions to the output, in contrast to the black-box nature of deep neural networks.
SMoLK is lightweight enough to be implemented on low-power wearable devices, making it suitable for real-time applications.
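The single-layer design described in the highlights can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel lengths, number of kernels per length, ReLU activation, and the random stand-in weights are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small set of learned kernels of a few lengths.
kernel_lengths = [9, 17, 33]     # assumed, not taken from the paper
kernels_per_length = 4
signal_len = 256

# Random stand-ins for the learned kernels and their mixing weights.
kernels = [rng.standard_normal(L)
           for L in kernel_lengths
           for _ in range(kernels_per_length)]
mix_weights = rng.standard_normal(len(kernels))
bias = 0.0

def smolk_forward(x):
    """Single layer of kernels: convolve, activate, then linearly mix."""
    feats = []
    for k in kernels:
        # 'same' convolution keeps one response per time step.
        r = np.convolve(x, k, mode="same")
        feats.append(np.maximum(r, 0.0))  # ReLU activation
    # A weighted sum of kernel responses gives a per-timestep score,
    # suitable for segmentation tasks such as PPG artifact detection.
    return np.tensordot(mix_weights, np.stack(feats), axes=1) + bias

x = rng.standard_normal(signal_len)
score = smolk_forward(x)
print(score.shape)  # → (256,)
```

Because the whole model is one layer of kernels plus a linear mix, each kernel's contribution to the output can be inspected directly, which is the basis of the interpretability claim above.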
Stats
The largest SMoLK model has 45.3K parameters, while the state-of-the-art Segade model has 2.3M parameters.
The medium SMoLK model has 8.5K parameters, while the Tiny-PPG model has 85.9K parameters.
The smallest SMoLK model has just 1.4K parameters.
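The parameter-ratio claims in the abstract can be sanity-checked against these counts. A quick check in Python, using the counts as reported above:

```python
# Parameter counts as reported in the stats above.
smolk_large = 45_300   # largest SMoLK model
segade = 2_300_000     # Segade
tiny_ppg = 85_900      # Tiny-PPG

# Ratios behind the abstract's claims.
print(f"{smolk_large / segade:.1%}")    # ~2% of Segade's parameters
print(f"{smolk_large / tiny_ppg:.1%}")  # roughly half of Tiny-PPG's
```

This confirms that both the "2% of Segade" and "about half of Tiny-PPG" figures refer to the largest SMoLK model.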