The GraphRCG framework introduces self-conditioned modeling to capture graph distributions and self-conditioned guidance to steer generation toward them. Extensive experiments demonstrate its superior performance over existing methods in terms of graph quality and fidelity to the training data. The framework combines continuous and discrete diffusion to generate a wide range of graph structures effectively.
The task of generating graphs that align with specific distributions is crucial in fields such as drug discovery, public health, and traffic modeling. Deep generative models have been widely studied for this challenge because they can learn complex structural patterns in graphs.
Existing works often capture distributions only implicitly, through the optimization of their generators, and may therefore overlook intricate distributional patterns. In contrast, the proposed framework explicitly models graph distributions as learned representations and leverages these representations to guide generation.
Graph data poses distinct challenges: datasets exhibit complex patterns such as varying sparsity and inconsistent clustering coefficients. Unlike image generation, graph generation is inherently sequential and discrete, so accurately reproducing the learned distribution requires guidance at each generation step.
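The step-wise guidance idea can be illustrated with a minimal sketch: at each denoising step, a representation of the current sample is encoded and fed back to condition the next step, so generation is repeatedly nudged toward the captured distribution. The encoder, denoiser, and representation here are hypothetical stand-ins for the framework's learned components, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(adj):
    # Hypothetical encoder: summarizes the current graph sample as a
    # fixed-size representation (here, simple degree statistics).
    degrees = adj.sum(axis=1)
    return np.array([degrees.mean(), degrees.std()])

def denoise_step(adj, rep, t, T):
    # Hypothetical denoiser: nudges noisy edge scores toward a target
    # density derived from the representation. A real model would be a
    # learned network conditioned on `rep`.
    n = adj.shape[0]
    target_density = rep[0] / max(n - 1, 1)
    step = 0.3 * (target_density - adj)  # guidance toward the representation
    return np.clip(adj + step, 0.0, 1.0)

T, n = 10, 8
adj = rng.random((n, n))    # start from pure noise
adj = (adj + adj.T) / 2.0   # keep it symmetric (undirected graph)

for t in range(T, 0, -1):
    rep = encode(adj)               # self-condition on the current sample
    adj = denoise_step(adj, rep, t, T)

edges = (adj > 0.5).astype(int)     # discretize final edge scores
np.fill_diagonal(edges, 0)
```

The key design point this sketch mirrors is that the conditioning signal is recomputed from the evolving sample at every step, rather than fixed once at the start.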
The study highlights the importance of capturing and reusing the training-data distribution for stronger graph generation. Across several real-world datasets, the self-conditioned approach outperforms state-of-the-art baselines.