Core Concepts
Adversarial examples can be crafted by attaching external objects from pre-collected data to target images, enabling latency attacks against black-box object detection models without any prior knowledge of the target model.
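Latency attacks of this kind generally exploit the fact that a detector's post-processing cost, most notably non-maximum suppression (NMS), grows with the number of candidate boxes, so every ghost object translates into longer inference time. The toy benchmark below is an illustration of that effect, not code from the paper; it times torchvision's NMS on increasingly large sets of random boxes.

```python
import time
import torch
from torchvision.ops import nms

def nms_time(num_boxes: int) -> float:
    """Time a single NMS pass over `num_boxes` random boxes."""
    xy = torch.rand(num_boxes, 2) * 500      # top-left corners
    wh = torch.rand(num_boxes, 2) * 50 + 1   # widths and heights
    boxes = torch.cat([xy, xy + wh], dim=1)  # (x1, y1, x2, y2)
    scores = torch.rand(num_boxes)
    start = time.perf_counter()
    nms(boxes, scores, iou_threshold=0.5)
    return time.perf_counter() - start

for n in (100, 1_000, 10_000):
    print(f"{n:>6} candidate boxes -> {nms_time(n) * 1e3:.2f} ms")
```

Because pairwise IoU suppression is roughly quadratic in the worst case, flooding the detector with candidate boxes gives an attacker substantial latency headroom.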
Abstract
This paper presents a novel "steal now, attack later" approach to evaluate the feasibility of latency attacks against black-box object detection models. The key idea is to craft adversarial examples by attaching external objects from pre-collected data to target images, exploiting a vulnerability of object detection models that causes them to report spurious "ghost" objects and thereby inflate inference time.
The authors first describe the data collection process, gathering a diverse set of objects from public datasets such as MS COCO and Open Images. They then propose a position-centric algorithm that carefully determines where to place these external objects on the target image so as to maximize the number of ghost objects detected by the victim model.
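A minimal sketch of this placement step is shown below. It relies on a hypothetical helper not named in the paper, query_detector, which wraps the black-box victim API and returns the detected boxes; the greedy random search here is a deliberate simplification of the paper's position-centric algorithm.

```python
import random
from PIL import Image

def paste(image: Image.Image, patch: Image.Image, pos: tuple[int, int]) -> Image.Image:
    """Return a copy of `image` with `patch` pasted at `pos`."""
    out = image.copy()
    out.paste(patch, pos)
    return out

def greedy_place(image, patches, query_detector, candidates_per_patch=8):
    """Attach each stolen patch at the position that yields the most detections.

    `query_detector` is a hypothetical wrapper around the black-box victim
    model that returns a list of detected boxes. Assumes every patch is
    strictly smaller than the target image.
    """
    current = image
    for patch in patches:
        best_pos = None
        best_count = len(query_detector(current))  # baseline detections
        for _ in range(candidates_per_patch):
            pos = (random.randrange(image.width - patch.width),
                   random.randrange(image.height - patch.height))
            count = len(query_detector(paste(current, patch, pos)))
            if count > best_count:
                best_pos, best_count = pos, count
        if best_pos is not None:  # keep the patch only if it adds ghost objects
            current = paste(current, patch, best_pos)
    return current
```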
To project the perturbations onto the given epsilon ball, the authors introduce a color manipulation algorithm that refines them by shrinking their amplitudes and adjusting their average over specific regions, while ensuring the victim model's predictions remain unchanged.
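The sketch below illustrates that refinement loop under simplifying assumptions: an L-infinity epsilon ball and a hypothetical predictions_match oracle that re-queries the victim model to confirm the ghost objects survive. The paper's actual color manipulation algorithm is more elaborate than this.

```python
import numpy as np

def project_region(original: np.ndarray, adversarial: np.ndarray,
                   region: tuple[slice, slice], eps: float,
                   predictions_match, shrink: float = 0.9,
                   max_steps: int = 20) -> np.ndarray:
    """Iteratively pull the perturbation in `region` toward the eps-ball.

    `predictions_match` is a hypothetical oracle that re-queries the victim
    model and returns True while the ghost objects are still detected.
    """
    x = adversarial.astype(np.float32).copy()
    for _ in range(max_steps):
        delta = x[region] - original[region].astype(np.float32)
        if np.abs(delta).max() <= eps:
            break  # already inside the epsilon ball on this region
        # Shrink amplitudes and remove the mean color shift over the region.
        delta = shrink * (delta - delta.mean())
        candidate = x.copy()
        candidate[region] = np.clip(original[region] + delta, 0.0, 255.0)
        if not predictions_match(candidate):
            break  # stop before the ghost objects disappear
        x = candidate
    return x
```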
The experimental results show that the proposed attack succeeds against a range of commonly used object detection models, including Faster R-CNN, RetinaNet, FCOS, DETR, and YOLO, as well as the Google Vision API. Attack success rates range from 0% to 83%, depending on the target model and the epsilon radius. The authors also conduct ablation studies to analyze the impact of the number of collected images and of different data collection configurations.
Furthermore, the authors estimate the total monetary cost and time consumption of each attack at less than $1 per attack, posing a significant threat to AI security. They caution that such low costs may encourage attackers to invest in improving attack algorithms that exploit vulnerabilities in AI systems.
Stats
The average cost of each attack is less than $1.
Quotes
"Adversarial examples crafted in this approach can be used to exploit vulnerabilities present in AI services."
"Deploying a private model locally as the most economical solution, supported by affordable costs associated with the proposed attack."