The paper introduces a novel "decoder-only" hypernetwork framework for compressing implicit neural representations (INRs). Unlike previous hypernetwork approaches for INRs, the proposed method does not require offline training on a target signal class. Instead, it can be optimized at runtime using only the target data instance.
The key aspects of the method are:
Decoder-only architecture: The hypernetwork acts as a decoder-only module, generating the weights of a target INR architecture from a low-dimensional latent code. This avoids the need for a separate encoding step conditioned on a training dataset.
Random projection decoder: The hypernetwork uses a fixed random projection to map the latent code to the target network weights. Because the projection is fixed and can be regenerated from a seed, only the low-dimensional latent code needs to be stored or transmitted, which is what makes the representation so compact.
Runtime optimization: The latent code is optimized at runtime so that the decoded weights fit the target signal, without requiring any offline training data (see the sketch after this list).
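Taken together, these three pieces are straightforward to prototype. Below is a minimal PyTorch sketch, not the paper's implementation: a tiny SIREN-style INR whose flat weight vector is produced by a fixed Gaussian random projection of a latent code, with the code fitted to a single stand-in signal at runtime. The sizes (`HIDDEN`, `LATENT_DIM`) and the toy target are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

HIDDEN = 32
N_PARAMS = HIDDEN * 2 + HIDDEN + HIDDEN + 1   # W1, b1, W2, b2 of a tiny 2-layer MLP
LATENT_DIM = 32                               # the only quantity learned per signal

# Fixed random projection decoder: reproducible from the seed, never trained.
P = torch.randn(N_PARAMS, LATENT_DIM) / LATENT_DIM ** 0.5

def inr_forward(theta, coords):
    """Evaluate an MLP whose weights arrive packed in the flat vector theta."""
    i = 0
    W1 = theta[i:i + HIDDEN * 2].view(HIDDEN, 2); i += HIDDEN * 2
    b1 = theta[i:i + HIDDEN];                     i += HIDDEN
    W2 = theta[i:i + HIDDEN].view(1, HIDDEN);     i += HIDDEN
    b2 = theta[i:]
    h = torch.sin(coords @ W1.T + b1)             # SIREN-style sinusoidal activation
    return h @ W2.T + b2

# Runtime optimization: fit the latent code to one target signal; no dataset,
# no offline training. The "signal" here is a toy 1-channel function of (x, y).
coords = torch.rand(256, 2)
target = torch.sin(6.0 * coords[:, :1])

z = torch.zeros(LATENT_DIM, requires_grad=True)   # latent code to be optimized
opt = torch.optim.Adam([z], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((inr_forward(P @ z, coords) - target) ** 2).mean()
    loss.backward()
    opt.step()
```

Note that decoding is a single matrix multiply, so evaluating the compressed INR costs little more than evaluating its uncompressed counterpart.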
The authors demonstrate the effectiveness of this approach on image compression and occupancy field representation tasks. Compared to prior methods like COIN, the decoder-only hypernetwork achieves improved rate-distortion performance while allowing smooth control of the bit-rate by varying the latent code dimension, without the need for neural architecture search.
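The bit-rate control is worth making concrete. If the projection matrix is regenerated from a shared seed, only the latent code has to be transmitted, so the rate scales linearly with the latent dimension. A back-of-envelope sketch, assuming 16 bits per latent entry (the paper's exact quantization of the latent may differ):

```python
def bits_per_pixel(latent_dim, height, width, bits_per_entry=16):
    # Payload = latent code only; the projection matrix is rebuilt from a seed.
    return latent_dim * bits_per_entry / (height * width)

for d in (64, 128, 256, 512):
    print(f"latent_dim={d}: {bits_per_pixel(d, 512, 768):.4f} bpp")  # Kodak-sized image
```

Doubling the latent dimension doubles the rate, which is the "smooth control" referred to above: no change to the target INR architecture is needed.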
The paper also discusses interesting properties of the decoder-only hypernetwork, such as its ability to incorporate positional encoding without increasing the parameter count, and a method to directly project a pre-trained INR into the hypernetwork framework.
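The direct-projection idea has a natural least-squares reading: given the flat weight vector of a pre-trained INR, pick the latent code whose decoding is closest to it. A hedged sketch, reusing `P` and `N_PARAMS` from the earlier snippet (the paper's projection procedure may differ in detail):

```python
theta_pretrained = torch.randn(N_PARAMS)       # stand-in for real trained INR weights

# Least-squares projection: z0 = argmin_z ||P z - theta||_2, via the pseudoinverse.
z0 = torch.linalg.pinv(P) @ theta_pretrained
theta_approx = P @ z0                          # decoded approximation of the weights

# z0 can also seed the runtime optimization above instead of a zero init.
```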
Source: Cameron Gord..., arxiv.org, 03-29-2024, https://arxiv.org/pdf/2403.19163.pdf