Core Concepts
FCert is the first certified defense against data poisoning attacks on few-shot classification with foundation models. FCert provably predicts the same label for a testing input under arbitrary data poisoning attacks when the total number of poisoned support samples is bounded.
Summary
The content presents FCert, a certified defense against data poisoning attacks on few-shot classification with foundation models.
Key highlights:
Few-shot classification with foundation models enables building accurate classifiers from only a few labeled training samples, but it is vulnerable to data poisoning attacks.
Existing certified defenses are designed for traditional supervised learning and achieve sub-optimal performance when extended to few-shot classification.
FCert is based on two key observations: 1) the feature vector of a testing input is close to those of support samples with the same label, and 2) under data poisoning attacks, most support samples remain unaffected.
FCert computes a robust distance between the testing input and the support samples of each class by removing the largest and smallest distances. It then predicts the class with the smallest robust distance.
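The robust-distance step can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the trim count `k`, the use of Euclidean distance in feature space, and averaging the remaining distances are assumptions made for the sketch.

```python
import numpy as np

def robust_distance(test_feat, class_feats, k):
    """Robust distance between a test feature vector and one class's
    support features: sort the distances, drop the k largest and the
    k smallest, and average the rest. (Illustrative sketch; k and the
    Euclidean metric are assumptions, not fixed by the summary.)"""
    d = np.sort(np.linalg.norm(class_feats - test_feat, axis=1))
    trimmed = d[k:len(d) - k]  # remove k smallest and k largest distances
    return trimmed.mean()

def fcert_predict(test_feat, support_by_class, k):
    """Predict the class whose robust distance to the test input is smallest."""
    scores = {c: robust_distance(test_feat, feats, k)
              for c, feats in support_by_class.items()}
    return min(scores, key=scores.get)
```

Trimming both extremes is what limits the influence of poisoned support samples: a bounded number of outlier distances (too small or too large) is discarded before the comparison between classes.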
FCert provably predicts the same label for a testing input under arbitrary data poisoning attacks when the total number of poisoned support samples is bounded. The authors derive the certified poisoning size and prove its tightness.
Extensive experiments on benchmark datasets and foundation models demonstrate that FCert is accurate, robust, and efficient compared to state-of-the-art few-shot classification methods and certified defenses.
Statistics
The content does not provide specific metrics or figures; it focuses on the technical details of the proposed FCert defense.
Quotes
The content does not contain any striking quotes.