The paper introduces AQuA, a method for automatically assessing the deliberative quality of individual comments in online discussions. The approach combines predictions from adapter models trained on 20 different deliberative aspects with insights gained from both expert and non-expert annotations of the same data.
The key highlights are:
- 20 adapter models are trained to predict scores for different deliberative criteria, such as rationality, reciprocity, and civility.
- Correlation coefficients between expert annotations and non-expert perceptions of deliberative quality are computed to determine the weight of each deliberative aspect in the final AQuA score.
- The weighted sum of the adapter predictions is normalized to a 0-5 scale, yielding the AQuA score: a single, interpretable measure of deliberative quality.
- Experiments show that the AQuA score aligns well with expert annotations on other datasets and can be used to automatically assess the deliberative quality of online comments.
- The analysis of the correlation coefficients confirms findings from deliberation research, highlighting the importance of rational arguments, respectful engagement, and personal storytelling in perceived deliberative quality.
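The aggregation described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the weights, the assumption that adapter scores lie in [0, 1], and the min-max rescaling to 0-5 are all placeholders chosen for the sketch.

```python
def aqua_score(adapter_scores, weights):
    """Weighted sum of per-aspect adapter predictions, rescaled to 0-5.

    Assumes each adapter score lies in [0, 1]; weights may be negative
    (e.g. for aspects like incivility that lower perceived quality).
    """
    raw = sum(w * s for w, s in zip(weights, adapter_scores))
    # Extreme values the weighted sum can take under the [0, 1] assumption:
    lo = sum(min(w, 0.0) for w in weights)  # all negative-weight aspects maxed
    hi = sum(max(w, 0.0) for w in weights)  # all positive-weight aspects maxed
    return 5.0 * (raw - lo) / (hi - lo)

# Toy example with three aspects instead of the paper's 20
# (hypothetical weights: rationality, storytelling, incivility):
weights = [0.8, 0.5, -0.3]
scores = [0.9, 0.4, 0.1]   # adapter predictions for one comment
print(round(aqua_score(scores, weights), 2))
```

A comment scoring high on all positively weighted aspects and zero on negatively weighted ones reaches the maximum of 5; the reverse pattern reaches 0.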
by Maike Behren... at arxiv.org, 04-04-2024
https://arxiv.org/pdf/2404.02761.pdf