Core Concepts
Distributed representations in deep neural networks, especially in deeper layers, are more interpretable than local representations: humans understand them more easily, and the model relies on them more heavily when making decisions.
Stats
Participants in the distributed representation condition achieved an average accuracy of 83.5% compared to 78.8% in the local representation condition in Experiment I.
Experiment II, which controlled for semantic confounders, showed the same trend, with distributed representations again outperforming local representations.
Feature importance analysis revealed that the model relied significantly more on features derived from distributed representations than on those from local representations (Mann-Whitney U test, z = -5.86, p < .001).
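A test of this kind can be sketched as follows. This is a minimal illustration using hypothetical importance scores (the data, sample sizes, and effect size here are invented, not the study's), showing how a Mann-Whitney U test compares two groups of per-feature importance values:

```python
# Sketch: comparing feature-importance scores for two representation
# types with a Mann-Whitney U test. All data here is hypothetical.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical importance scores; distributed features shifted higher
# to mimic the reported direction of the effect.
distributed = rng.normal(loc=0.6, scale=0.1, size=200)
local = rng.normal(loc=0.4, scale=0.1, size=200)

# Two-sided test: do the two score distributions differ?
stat, p = mannwhitneyu(distributed, local, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3g}")
```

The Mann-Whitney U test is a natural choice here because importance scores are typically non-normal, and the test compares distributions by rank rather than by mean.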