The article presents a novel speech-based approach for automatic suicide risk assessment in emergency medicine. The researchers collected a dataset of speech recordings from 20 patients undergoing emergency admission, including picture descriptions, reading of neutral texts, and isolated vowel productions.
The key highlights and insights are:
The researchers extracted three sets of audio features - interpretable speech and acoustic features, deep learning-based spectral representations, and embeddings from pre-trained audio Transformers.
Using a leave-one-subject-out validation scheme, the best speech-only model achieved a balanced accuracy of 66.2% in classifying high vs. low suicide risk.
Integrating the speech features with patient metadata, such as history of suicide attempts and access to firearms, significantly improved the performance, reaching a balanced accuracy of 94.4%.
Among the integrated metadata, a documented history of prior suicide attempts proved the single most discriminative feature for suicide risk assessment.
Recordings of picture descriptions, which require higher cognitive effort, were observed to be more informative for suicide risk prediction compared to neutral text readings or isolated vowel productions.
The study demonstrates the efficacy of the proposed speech-based approach combined with patient metadata for automatic suicide risk assessment in emergency medicine settings, where timely intervention is critical.
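The interpretable speech and acoustic features mentioned above could include measures such as signal energy and zero-crossing rate. A minimal pure-Python sketch of two such features (the feature names are illustrative examples, not the paper's actual feature list):

```python
from math import sqrt

def interpretable_features(samples):
    """Compute two simple interpretable acoustic features from a mono
    waveform (list of floats in [-1, 1]): RMS energy and zero-crossing
    rate. Illustrative only; the paper's interpretable set is richer
    (e.g., prosodic and voice-quality descriptors)."""
    n = len(samples)
    # RMS energy: square root of the mean squared amplitude.
    rms = sqrt(sum(x * x for x in samples) / n)
    # Zero-crossing rate: fraction of adjacent sample pairs whose
    # signs differ, a rough proxy for noisiness/spectral content.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    zcr = crossings / (n - 1)
    return {"rms_energy": rms, "zero_crossing_rate": zcr}
```

In practice such frame-level descriptors would be aggregated per recording (means, standard deviations) before classification.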
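The leave-one-subject-out evaluation above holds out every recording from one patient per fold, so no speaker appears in both the training and test partitions, and scores predictions with balanced accuracy (the mean of per-class recalls). A minimal sketch of both pieces, not the authors' code:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; for binary labels this is the
    average of sensitivity and specificity."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

def loso_splits(subject_ids):
    """Leave-one-subject-out: yield (train, test) index lists, with
    each fold holding out all recordings of one subject."""
    for held_out in sorted(set(subject_ids)):
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test
```

Balanced accuracy matters here because a class-imbalanced risk dataset would let a trivial majority-class predictor score a misleadingly high plain accuracy.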
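One common way to integrate metadata with speech features is to append encoded metadata flags to the speech feature vector before classification. The paper's exact fusion strategy is not detailed in this summary, so the following concatenation sketch, using the two metadata items named above (prior attempts, firearm access), is an assumption:

```python
def fuse_features(speech_vec, metadata):
    """Fusion sketch: append binary patient-metadata flags to a speech
    feature vector. The metadata keys (prior_attempt, firearm_access)
    mirror the examples in the text; the real system's metadata schema
    and fusion method are assumptions."""
    flags = [
        1.0 if metadata.get("prior_attempt") else 0.0,
        1.0 if metadata.get("firearm_access") else 0.0,
    ]
    return list(speech_vec) + flags
```

The fused vector would then be fed to the same classifier as the speech-only model, letting the learner weigh acoustic cues against clinical history.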
Key insights extracted from arxiv.org, by Shah..., 04-19-2024
https://arxiv.org/pdf/2404.12132.pdf