The "GraphRCG" framework introduces self-conditioned modeling to capture graph distributions and self-conditioned guidance for generating graphs. Extensive experiments demonstrate its superior performance over existing methods in terms of graph quality and fidelity to training data. The framework combines continuous and discrete diffusion for effective generation of a wide range of graph structures.
The task of generating graphs aligned with specific distributions is crucial in various fields such as drug discovery, public health, and traffic modeling. Deep generative models have been widely studied to address this challenge by learning complex structural patterns in graphs.
Existing works often capture the distribution only implicitly through the optimization of their generators, potentially overlooking its intricacies. The proposed framework instead models graph distributions explicitly using learned representations and leverages these representations for guided generation.
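The sketch below illustrates the explicit-modeling side under toy assumptions: a graph encoder maps training graphs to representations, and a small denoising model learns the distribution of those representations so new ones can be sampled. GraphEncoder, RepDenoiser, and the simple interpolation-based noising are hypothetical simplifications, not the paper's exact design.

```python
# Sketch: explicitly capturing the training distribution via representations (toy setup).
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """Maps a graph (dense adjacency) to a fixed-size representation."""
    def __init__(self, n_nodes: int = 8, rep_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_nodes * n_nodes, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, adj):
        return self.net(adj.flatten(1))

class RepDenoiser(nn.Module):
    """Learns the distribution of representations via simple denoising."""
    def __init__(self, rep_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(rep_dim + 1, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, noisy_rep, t):
        return self.net(torch.cat([noisy_rep, t], dim=-1))

def training_step(encoder, rep_denoiser, adj):
    """One denoising-style training step over representations."""
    rep = encoder(adj)                                   # explicit summary of the data distribution
    t = torch.rand(rep.size(0), 1)                       # random noise level in [0, 1)
    noisy_rep = (1 - t) * rep + t * torch.randn_like(rep)
    pred = rep_denoiser(noisy_rep, t)
    return ((pred - rep) ** 2).mean()                    # reconstruct the clean representation

if __name__ == "__main__":
    enc, rdn = GraphEncoder(), RepDenoiser()
    adj = (torch.rand(4, 8, 8) > 0.7).float()            # toy random graphs
    loss = training_step(enc, rdn, adj)
    loss.backward()
    print(float(loss))
```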
Graph data poses challenges such as complex dataset-level patterns, including varying sparsity and inconsistent clustering coefficients. Unlike image generation, graph generation is inherently sequential and discrete, requiring step-wise guidance to accurately reflect the learned distributions.
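A minimal sketch of step-wise guidance in a discrete (categorical) edge diffusion step is shown below, assuming two edge states {absent, present}. The interface of edge_logit_model, the guidance scale, and the logit-interpolation rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: one guided reverse step over discrete edge states (toy setup).
import torch
import torch.nn.functional as F

def guided_discrete_step(edge_logit_model, edge_states, rep, guidance_scale=1.5):
    """Predict per-edge category logits with and without the distribution
    representation, combine them, then resample the discrete edge states."""
    cond_logits = edge_logit_model(edge_states, rep)                   # (B, E, 2)
    uncond_logits = edge_logit_model(edge_states, torch.zeros_like(rep))
    logits = uncond_logits + guidance_scale * (cond_logits - uncond_logits)
    probs = F.softmax(logits, dim=-1)
    return torch.distributions.Categorical(probs=probs).sample()      # new edge states

if __name__ == "__main__":
    B, E, rep_dim = 2, 28, 32              # 28 = edges of an 8-node complete graph
    lin = torch.nn.Linear(1 + rep_dim, 2)  # toy stand-in for a real edge model

    def edge_logit_model(states, rep):
        x = torch.cat([states.float().unsqueeze(-1),
                       rep.unsqueeze(1).expand(-1, states.size(1), -1)], dim=-1)
        return lin(x)

    states = torch.randint(0, 2, (B, E))
    rep = torch.randn(B, rep_dim)
    print(guided_discrete_step(edge_logit_model, states, rep).shape)  # torch.Size([2, 28])
```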
The study highlights the importance of capturing and utilizing training data distributions for enhanced graph generation performance. The innovative self-conditioned approach demonstrates superior results across various real-world datasets compared to state-of-the-art baselines.
by Song Wang, Zh... at arxiv.org, 03-05-2024
Source: https://arxiv.org/pdf/2403.01071.pdf