Core Concepts
LLM-DCD is a novel approach that integrates Large Language Models (LLMs) with Differentiable Causal Discovery (DCD). It improves the accuracy and interpretability of causal discovery from observational data by using an LLM to provide an informed initialization of the causal graph structure.
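As a rough illustration of the idea (not the authors' implementation), the sketch below seeds a NOTEARS-style differentiable search with edges an LLM might propose for the bnlearn cancer network. The function names (`llm_prior_edges`, `fit_dcd`), the loss weights, and the least-squares objective are assumptions for illustration; the LLM call is mocked with a hard-coded edge list, and the actual LLM-DCD objective may differ.

```python
# Minimal sketch of LLM-informed initialization for differentiable causal
# discovery; illustrative only, not the paper's implementation.
import numpy as np
import torch

def llm_prior_edges(variables):
    # Hypothetical stand-in for an LLM prompt such as
    # "Which of these variables directly cause which others?"
    return [("Pollution", "Cancer"), ("Smoker", "Cancer"),
            ("Cancer", "Xray"), ("Cancer", "Dyspnoea")]

def init_from_prior(variables, edges, weight=0.5):
    # Seed the weighted adjacency matrix with LLM-proposed edges.
    idx = {v: i for i, v in enumerate(variables)}
    W0 = torch.zeros(len(variables), len(variables))
    for src, dst in edges:
        W0[idx[src], idx[dst]] = weight
    return W0

def acyclicity(W):
    # NOTEARS acyclicity measure: h(W) = tr(exp(W * W)) - d, zero iff W is a DAG.
    return torch.trace(torch.linalg.matrix_exp(W * W)) - W.shape[0]

def fit_dcd(X, W0, lam=0.1, rho=10.0, steps=2000, lr=1e-2):
    # Differentiable score-based search: fit + L1 sparsity + acyclicity penalty.
    X = torch.as_tensor(X, dtype=torch.float32)
    W = W0.clone().requires_grad_(True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        resid = X - X @ W
        loss = (resid ** 2).mean() + lam * W.abs().sum() + rho * acyclicity(W) ** 2
        loss.backward()
        opt.step()
    return W.detach()

variables = ["Pollution", "Smoker", "Cancer", "Xray", "Dyspnoea"]
X = np.random.randn(1000, len(variables))  # placeholder data, not the benchmark samples
W_hat = fit_dcd(X, init_from_prior(variables, llm_prior_edges(variables)))
print((W_hat.abs() > 0.3).int())           # thresholded adjacency estimate
```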
Statistics
The authors use five datasets from the bnlearn package: cancer (5 variables, 4 causal edges), sachs (11 variables, 17 causal edges), child (20 variables, 25 causal edges), alarm (37 variables, 46 causal edges), and hepar2 (70 variables, 123 causal edges).
Each experiment used 1000 observations sampled from the corresponding network (see the sampling sketch at the end of this section).
LLM-DCD (BFS) outperformed all baseline methods on the Alarm and Hepar2 datasets.
LLM-DCD (BFS) showed comparable results to the top-performing models on the Cancer, Sachs, and Child datasets.
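The five networks above are distributed through the bnlearn repository. As one illustrative way to obtain observational samples of this size (an assumption about tooling, not the paper's own pipeline), pgmpy can load the same networks and forward-sample from them:

```python
# Illustrative only: sampling observational data from the bnlearn networks via pgmpy.
from pgmpy.utils import get_example_model
from pgmpy.sampling import BayesianModelSampling

for name in ["cancer", "sachs", "child", "alarm", "hepar2"]:
    model = get_example_model(name)                               # bnlearn repository network
    data = BayesianModelSampling(model).forward_sample(size=1000, seed=0)
    print(name, data.shape)                                       # (1000, number_of_variables)
```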
Quotes
"LLM-DCD opens up new opportunities for traditional causal discovery methods like DCD to benefit from future improvements in the causal reasoning capabilities of LLMs."
"To our knowledge, LLM-DCD is the first method to integrate LLMs with differentiable causal discovery."