Embeddings represent complex information concisely, but they are difficult to interpret directly. This paper introduces ELM, a framework that uses large language models to interact with embeddings, enabling natural-language querying and exploration. By training adapter layers that map domain embeddings into the language model's token-level embedding space, ELM allows continuous domain embeddings to be interpreted through natural language. The study demonstrates ELM on tasks such as enhancing concept activation vectors, describing novel embedded entities, and decoding user preferences in recommender systems, offering a dynamic way to navigate and understand complex embedding representations.
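The adapter idea described above can be illustrated with a minimal sketch: a small MLP maps a single domain embedding to a short sequence of "soft token" vectors living in the language model's token-embedding space, which would then be concatenated with prompt embeddings before the frozen LLM. All names and dimensions below are hypothetical, not taken from the ELM paper.

```python
import numpy as np

rng = np.random.default_rng(0)

DOMAIN_DIM = 64       # hypothetical size of a domain embedding (e.g. a movie vector)
TOKEN_DIM = 128       # hypothetical size of the LLM's token-embedding space
NUM_SOFT_TOKENS = 4   # hypothetical number of token-level vectors the adapter emits
HIDDEN_DIM = 256      # hypothetical adapter hidden width

# Randomly initialized two-layer MLP adapter (in practice these weights
# would be trained so the LLM can "read" the domain embedding).
W1 = rng.normal(0.0, 0.02, (DOMAIN_DIM, HIDDEN_DIM))
W2 = rng.normal(0.0, 0.02, (HIDDEN_DIM, NUM_SOFT_TOKENS * TOKEN_DIM))

def adapt(domain_embedding: np.ndarray) -> np.ndarray:
    """Map one domain embedding to NUM_SOFT_TOKENS token-space vectors."""
    hidden = np.maximum(domain_embedding @ W1, 0.0)  # ReLU nonlinearity
    return (hidden @ W2).reshape(NUM_SOFT_TOKENS, TOKEN_DIM)

# The resulting soft tokens would be prepended to the embedded prompt
# tokens and fed to the (frozen) language model.
soft_tokens = adapt(rng.normal(size=DOMAIN_DIM))
print(soft_tokens.shape)  # (4, 128)
```

The key design point is that only the adapter needs to learn the mapping between the two spaces; the language model itself can remain frozen while gaining the ability to condition on continuous domain embeddings.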
by Guy Tennenho... at arxiv.org 03-14-2024
https://arxiv.org/pdf/2310.04475.pdf