Core Concepts
SleepPPG-Net2, a deep learning model for four-class sleep staging from raw photoplethysmography (PPG) signals, generalizes to unseen datasets better than state-of-the-art benchmarks.
Abstract
The study aimed to develop a generalizable deep learning model, called SleepPPG-Net2, for four-class (wake, light, deep, and rapid eye movement (REM)) sleep staging from raw PPG physiological time-series. Six sleep datasets, totaling 2,574 patient recordings, were used to train and evaluate the model.
Key highlights:
SleepPPG-Net2 was developed using a multi-source domain training approach to enhance the model's ability to generalize.
SleepPPG-Net2 outperformed two state-of-the-art benchmarks, SleepPPG-Net and BM-DTS, on five independent target domain datasets, with up to 19% improvement in Cohen's kappa.
Performance disparities were observed in relation to patient age, sex, and sleep apnea severity.
SleepPPG-Net2 demonstrated improved estimation of standard sleep measures, such as total sleep time and sleep efficiency, compared to the benchmarks.
Error analysis showed that patient age, sleep apnea severity, and sex each had a significant effect on model performance.
The results show that SleepPPG-Net2 sets a new standard for sleep staging from raw PPG time-series and has the potential to enable widespread adoption of PPG-based sleep analysis in clinical and home settings.
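The headline comparison above is expressed in Cohen's kappa, which measures agreement between the model's predicted stages and the polysomnography reference beyond what chance alone would produce. A minimal sketch of the metric (not the paper's own evaluation code; the four-class label encoding 0=wake, 1=light, 2=deep, 3=REM is an assumption for illustration):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes=4):
    """Cohen's kappa: agreement between two label sequences beyond chance.

    y_true, y_pred: integer stage labels per epoch (assumed 0..n_classes-1).
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Build the confusion matrix of reference vs. predicted stages.
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                  # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement from marginals
    return (po - pe) / (1 - pe)

# Toy usage: 4 epochs, one light-sleep epoch misscored as wake.
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # → 0.5
```

Kappa is preferred over raw accuracy here because sleep stages are heavily imbalanced (light sleep dominates), so a model that over-predicts the majority stage can score high accuracy while agreeing poorly beyond chance.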
Stats
The total sleep time (TST) can be estimated with a mean absolute error (MAE) of 24.2 minutes.
Sleep efficiency (SE) can be estimated with an MAE of 4.56%.
The fraction of light sleep (FRLight) can be estimated with an MAE of 9.47%.
The fraction of deep sleep (FRDeep) can be estimated with an MAE of 8.53%.
The fraction of REM sleep (FRREM) can be estimated with an MAE of 3.72%.
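All of these summary measures can be derived from a predicted hypnogram (one stage label per scoring epoch). A minimal sketch, assuming 30-second epochs and the hypothetical encoding 0=wake, 1=light, 2=deep, 3=REM (the paper's exact conventions may differ):

```python
import numpy as np

WAKE, LIGHT, DEEP, REM = 0, 1, 2, 3  # assumed stage codes
EPOCH_MIN = 0.5                      # assumed 30-second scoring epochs

def sleep_measures(hypnogram):
    """Derive TST, SE, and stage fractions from a per-epoch stage sequence."""
    h = np.asarray(hypnogram)
    asleep = h != WAKE
    n_sleep = max(int(asleep.sum()), 1)      # guard against all-wake records
    return {
        "TST": asleep.sum() * EPOCH_MIN,     # total sleep time, minutes
        "SE": 100.0 * asleep.mean(),         # % of time in bed spent asleep
        "FRLight": 100.0 * (h == LIGHT).sum() / n_sleep,  # % of sleep time
        "FRDeep": 100.0 * (h == DEEP).sum() / n_sleep,
        "FRREM": 100.0 * (h == REM).sum() / n_sleep,
    }

# Toy night: 8 epochs (4 minutes) with 2 wake epochs.
print(sleep_measures([0, 0, 1, 1, 2, 3, 3, 1]))
```

The reported MAEs are then the mean absolute differences between these measures computed on the model's hypnogram and on the polysomnography reference hypnogram, averaged over recordings.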