Core Concepts
The paper investigates the robustness of graph neural fractional-order differential equation (FDE) models, showing that they admit tighter output perturbation bounds than traditional integer-order models. The approach uses fractional calculus to endow the node-feature dynamics with long-term memory and greater resilience to adversarial attacks.
Abstract
The study examines the robustness of graph neural fractional-order differential equation (FDE) models relative to traditional integer-order models. By incorporating fractional derivatives, these models admit tighter output perturbation bounds and exhibit greater resilience to perturbations of both input features and graph topology. Empirical evaluations confirm the superior robustness of FDE models, highlighting their potential in adversarially robust applications.

Recent work has increasingly drawn on dynamical-systems theory to design and analyze GNNs, with models such as CGNN, GRAND, and GraphCON using ordinary differential equations to govern node-feature evolution. Introducing fractional calculus yields slow algebraic convergence, which mitigates the oversmoothing that accompanies the exponential convergence of standard ODE-based models. The study analyzes how the fractional order parameter β shapes the robustness of FROND, establishing a monotonic relationship between the perturbation bounds and β. Experiments show that FROND outperforms traditional GNNs under a range of attack scenarios, corroborating its enhanced robustness.
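The contrast between slow algebraic convergence (fractional order β < 1) and standard exponential convergence (β = 1) can be illustrated on a scalar toy problem. The sketch below is not the FROND model itself: it integrates the simple fractional decay equation D^β x = −x with a Grünwald-Letnikov discretization (with the usual initial-value shift for Caputo-type derivatives), whose solution decays exponentially for β = 1 but only algebraically, like t^(−β), for 0 < β < 1. The function name and step parameters are illustrative choices, not from the paper.

```python
import numpy as np

def gl_fractional_decay(beta, h=0.1, steps=200, x0=1.0):
    """Integrate the scalar fractional ODE D^beta x = -x, x(0) = x0,
    with a Grünwald-Letnikov scheme (implicit in the newest value).
    beta = 1 recovers implicit Euler for x' = -x (exponential decay);
    0 < beta < 1 gives slow algebraic (Mittag-Leffler) decay."""
    # GL weights w_k = (-1)^k * binom(beta, k), computed by recurrence.
    w = np.empty(steps + 1)
    w[0] = 1.0
    for k in range(1, steps + 1):
        w[k] = w[k - 1] * (1.0 - (beta + 1.0) / k)
    # Partial sums S_n of the weights handle the nonzero initial value:
    # the scheme discretizes D^beta (x - x0) = -x.
    cumw = np.cumsum(w)
    x = np.empty(steps + 1)
    x[0] = x0
    hb = h ** beta
    for n in range(1, steps + 1):
        # History term: sum_{k=1}^{n} w_k * x_{n-k} (the "memory" of the path).
        hist = np.dot(w[1:n + 1], x[n - 1::-1])
        # Solve (1 + h^beta) * x_n = x0 * S_n - hist for the new value.
        x[n] = (x0 * cumw[n] - hist) / (1.0 + hb)
    return x

x_frac = gl_fractional_decay(beta=0.5)   # algebraic decay, long memory
x_int = gl_fractional_decay(beta=1.0)    # exponential decay, no memory
```

At t = 20 the integer-order trajectory has decayed to numerical zero, while the β = 0.5 trajectory retains a non-negligible value; this persistence of past states is the long-memory property the paper links to robustness.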
Stats
Cora dataset accuracy: 83.50%
Citeseer dataset accuracy: 74.48%
Pubmed dataset accuracy: 88.46%
Quotes
"Our empirical evaluations further confirm the enhanced robustness of graph neural FDE models."
"The superiority of graph neural FDE models over graph neural ODE models has been established in environments free from attacks or perturbations."
"Recent studies have ventured into the intersection of GNNs and fractional calculus."