Detecting Conceptual Abstraction Mechanisms in Large Language Models
Large language models appear to employ some form of conceptual abstraction, but the mechanisms underlying it are not well understood. This study examines whether simple linguistic abstraction mechanisms, such as hypernymy, are reflected in the attention patterns of the BERT language model.
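The kind of probe described above can be sketched in miniature: compute scaled dot-product attention over token vectors for a sentence containing a hyponym–hypernym pair (e.g. "robin"–"bird") and read off the weight the hyponym assigns to its hypernym. The embeddings below are illustrative stand-ins, not BERT's actual hidden states, and the single attention row is a toy analogue of one head's pattern.

```python
import math

# Toy sketch of attention-pattern inspection for a hypernymy pair.
# The 4-dimensional vectors are hypothetical, chosen so that "robin"
# and "bird" are similar; they are NOT real BERT representations.
tokens = ["a", "robin", "is", "a", "bird"]
emb = {
    "a":     [0.1, 0.0, 0.0, 0.1],
    "robin": [0.9, 0.8, 0.1, 0.0],
    "is":    [0.0, 0.1, 0.9, 0.1],
    "bird":  [0.8, 0.9, 0.0, 0.1],
}

def attention_row(query, keys, d):
    """One row of a scaled dot-product attention matrix (softmax of q.k / sqrt(d))."""
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

d = 4
keys = [emb[t] for t in tokens]
# Attention distribution from the hyponym "robin" over all tokens.
row = attention_row(emb["robin"], keys, d)
weight_to_bird = row[tokens.index("bird")]
print(f"robin -> bird attention: {weight_to_bird:.3f}")
```

In the actual study one would extract real attention matrices from each BERT layer and head (e.g. via a library that exposes them) and test whether hyponym positions attend disproportionately to hypernym positions; this fragment only illustrates the quantity being measured.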