Core Concepts
Large language models (LLMs), when combined with graph-specific techniques, can handle a wide range of graph analytics tasks, including graph query processing, graph inference and learning, and graph-based applications.
Abstract
This survey provides a comprehensive investigation of existing research on applying large language models (LLMs) to graph data analysis. It divides the field of LLM-based generative graph analytics (LLM-GGA) into three principal components:
LLM-based graph query processing (LLM-GQP): This direction integrates graph analytics techniques with LLM prompting for efficient query processing, covering graph understanding and knowledge graph (KG)-based augmented retrieval.
LLM-based graph inference and learning (LLM-GIL): This focuses on learning and reasoning over graphs, encompassing graph learning, graph-formed reasoning, and graph representation.
Graph-LLM-based applications: This explores the use of the graph-LLM framework to address non-graph tasks, such as recommendation systems.
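To make the LLM-GQP direction concrete, the sketch below shows the basic step common to many graph-understanding approaches: serializing a graph into natural language so an LLM can be queried about it. The function name and prompt wording are illustrative assumptions, not methods defined in the survey.

```python
# Illustrative sketch (hypothetical helper, not from the survey):
# encode a small graph as text and attach a question, the typical
# input format for LLM-based graph understanding in LLM-GQP.

def graph_to_prompt(edges, question):
    """Serialize an edge list into a natural-language prompt for an LLM."""
    edge_text = ", ".join(f"({u}, {v})" for u, v in edges)
    return (
        "You are given an undirected graph with edges: "
        f"{edge_text}.\n"
        f"Question: {question}"
    )

prompt = graph_to_prompt([("A", "B"), ("B", "C")],
                         "Is there a path from A to C?")
print(prompt)
```

Surveyed methods differ mainly in this serialization step (edge lists, adjacency descriptions, or structured formats) and in how the resulting prompt is combined with retrieved graph context.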
The survey organizes these three main components into six research directions, offering a roadmap for researchers to conduct more in-depth studies. It also analyzes the advantages and limitations of current methodologies and suggests avenues for future research. Additionally, it collects resources within the LLM-GGA domain, including benchmarks, evaluations, and code links, to facilitate further investigation.