Core Concepts
DILED unifies the three core capabilities of deep generative models (generating new instances, reconstructing inputs, and learning compact representations) within a single diffusion framework that handles diverse data types.
Abstract
DILED introduces a generalized diffusion framework with a learnable encoder-decoder that seamlessly integrates generation, reconstruction, and representation capabilities. In extensive experiments on text, proteins, and images, DILED outperforms existing models across diverse data types, highlighting its flexibility and potential for broad applications.
Stats
Existing model families excel in specific capabilities but fall short in others.
Extensive experiments demonstrate DILED's flexibility to handle diverse data types.
DILED shows strong improvements over a range of existing models.
Quotes
"The vast applications of deep generative models are anchored in three core capabilities—generating new instances, reconstructing inputs, and learning compact representations."
"DILED generalizes the Gaussian noising-denoising in standard diffusion by introducing parameterized encoding-decoding."
"DILED demonstrates comprehensive capabilities across a wide range of tasks on different data modalities."
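The generalization quoted above (replacing fixed Gaussian noising with parameterized encoding) can be sketched in toy form. This is an illustrative sketch only, not DILED's actual implementation: the function names, the linear encoder, and the noise schedule are all hypothetical stand-ins for the learned components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noising(x, t, betas):
    """Standard diffusion forward step: sample from
    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    return np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * rng.standard_normal(x.shape)

def learned_encoding(x, t, W):
    """Hypothetical parameterized encoder standing in for DILED's learnable
    encoding: a trainable map (here an untrained linear layer W) replaces the
    fixed Gaussian corruption, while noise still increases with timestep t."""
    z = x @ W                        # learnable transformation into latent space
    noise = rng.standard_normal(z.shape)
    return z + 0.1 * t * noise       # toy time-dependent noise level

x0 = rng.standard_normal((4, 8))     # toy batch: 4 inputs of dimension 8
betas = np.linspace(1e-4, 0.02, 10)  # toy noise schedule over 10 steps
W = 0.1 * rng.standard_normal((8, 8))  # untrained encoder weights (assumption)

xt = gaussian_noising(x0, t=5, betas=betas)  # standard diffusion corruption
zt = learned_encoding(x0, t=5, W=W)          # encoder-based corruption
print(xt.shape, zt.shape)
```

The point of the contrast: in standard diffusion the forward process is fixed, while in a DILED-style model the encoding (and the matching decoding) is itself learned, which is what lets one framework serve generation, reconstruction, and representation learning.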