Core Concepts
Proposing a framework to prevent data leakage in knowledge tracing models and introducing model variations to enhance performance.
Statistics
Many KT models expand the sequence of item-student interactions into KC-student interactions by replacing each learning item with its constituent KCs. This typically produces a longer sequence.
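The expansion step described above can be sketched as follows. The item-to-KC mapping and the interaction format are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of KC expansion: each (item, correct) interaction
# is replaced by one interaction per KC of that item, so the expanded
# sequence is longer than the original item sequence.
item_to_kcs = {
    "q1": ["kc_a", "kc_b"],  # q1 covers two knowledge components
    "q2": ["kc_c"],
}

def expand_to_kc_interactions(interactions):
    """Expand (item, correct) pairs into (kc, correct) pairs."""
    expanded = []
    for item, correct in interactions:
        for kc in item_to_kcs[item]:
            # The item's response label is copied to every one of its KCs.
            expanded.append((kc, correct))
    return expanded

seq = [("q1", 1), ("q2", 0)]
print(expand_to_kc_interactions(seq))
# -> [('kc_a', 1), ('kc_b', 1), ('kc_c', 0)]
```

Note how the single label for "q1" is duplicated across "kc_a" and "kc_b"; this duplication is what makes label leakage between sibling KCs possible.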
The first problem is that the model can learn correlations between KCs belonging to the same item, allowing ground-truth labels to leak between them and degrading performance.
The second problem is that available benchmark implementations fail to account for the change in sequence length caused by KC expansion, so models are effectively tested with different sequence lengths yet compared against the same benchmark.
Quotes
"Models trained using this method can also learn to leak data between KCs of the same question and thus suffer from degrading performance."
"To address these problems, we introduce a general masking framework that mitigates the first problem and enhances the performance of such KT models while preserving the original model architecture without significant alterations."
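The paper does not spell out the masking framework here, but the general idea the quotes point at (preventing a KC position from attending to sibling KC positions of the same question) can be sketched roughly as follows, using NumPy; the per-position item ids and the combination with a causal mask are my assumptions:

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: after KC expansion,
# positions that come from the same item share one response label, so a
# plain causal attention mask lets a KC "see" its sibling KC's label and
# leak the ground truth. One remedy is to additionally mask attention
# between positions that belong to the same item.
item_ids = np.array([0, 0, 1, 2, 2])  # item id of each expanded KC position

T = len(item_ids)
causal = np.tril(np.ones((T, T), dtype=bool))        # attend to the past only
same_item = item_ids[:, None] == item_ids[None, :]   # sibling-KC positions
mask = causal & (~same_item | np.eye(T, dtype=bool)) # block siblings, keep self

print(mask.astype(int))
```

With this mask, position 1 (second KC of item 0) cannot attend to position 0, but position 2 (a later item) still can, so cross-item history is preserved while same-item leakage is blocked.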