Core Concepts
This paper presents a comprehensive framework for evaluating and improving the robustness of trajectory prediction models for autonomous vehicles, covering various strategies and methods to address the challenges of overfitting, adversarial attacks, and natural perturbations.
Abstract
The paper starts by formalizing the problem of trajectory prediction and defining the concept of robustness in this context. It then introduces a general framework that covers existing strategies for evaluating and improving the robustness of trajectory prediction models.
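To make the formalization concrete, here is a minimal sketch of the two displacement metrics commonly used to score trajectory predictors (ADE/FDE are standard in the field, but this exact formulation is an assumption, not taken from the paper):

```python
import numpy as np

def ade(pred, truth):
    """Average Displacement Error: mean L2 distance over all predicted timesteps."""
    return float(np.mean(np.linalg.norm(pred - truth, axis=-1)))

def fde(pred, truth):
    """Final Displacement Error: L2 distance at the last predicted timestep."""
    return float(np.linalg.norm(pred[-1] - truth[-1]))

# Toy example: three predicted (x, y) positions, each offset 0.5 m in y.
truth = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
pred  = np.array([[1.0, 0.5], [2.0, 0.5], [3.0, 0.5]])
print(ade(pred, truth))  # 0.5
print(fde(pred, truth))  # 0.5
```

Robustness is then typically defined in terms of how much these errors grow when the model's inputs are sliced or perturbed, as in the strategies below.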
The evaluation strategies include:
- Test on sliced data (E.1): Assessing robustness against overfitting by testing the model on specific subsets of the data, such as safety-critical scenarios or geographical locations.
- Test on perturbed data (E.2): Evaluating the model's adversarial robustness by introducing perturbations to the test data.
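The two evaluation strategies can be sketched as follows. The constant-velocity predictor, the "safety-critical" flag, and the Gaussian input noise are all hypothetical stand-ins chosen for illustration; the paper itself surveys these strategies in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def displacement_error(pred, truth):
    return float(np.mean(np.linalg.norm(pred - truth, axis=-1)))

def constant_velocity_model(history):
    # Simple stand-in predictor: extrapolate the last observed step 3 times.
    step = history[-1] - history[-2]
    return np.stack([history[-1] + (i + 1) * step for i in range(3)])

def evaluate(samples):
    errs = [displacement_error(constant_velocity_model(h), f) for h, f, _ in samples]
    return float(np.mean(errs))

# Synthetic test set: linear trajectories with a hypothetical safety-critical flag.
samples = []
for _ in range(20):
    start = rng.uniform(-5, 5, size=2)
    vel = rng.uniform(-1, 1, size=2)
    history = np.stack([start + i * vel for i in range(4)])
    future = np.stack([start + (4 + i) * vel for i in range(3)])
    samples.append((history, future, bool(rng.integers(0, 2))))

# E.1: slice the test set (here: keep only "safety-critical" samples).
critical = [s for s in samples if s[2]]
# E.2: perturb the test inputs (small Gaussian noise on observed histories).
perturbed = [(h + rng.normal(0, 0.1, h.shape), f, c) for h, f, c in samples]

print("full set:", evaluate(samples))        # near zero on exactly linear motion
print("critical slice:", evaluate(critical))
print("perturbed:", evaluate(perturbed))     # error grows under perturbation
```

Comparing the error on the full set against the sliced and perturbed variants is the basic measurement both strategies share.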
The improvement strategies include:
- Slice the training set differently (I.1): Modifying the training data to prevent overfitting and improve the model's performance on specific scenarios.
- Add perturbed data to training set (I.2): Incorporating perturbations into the training data to enhance the model's robustness against spurious features.
- Change the model architecture (I.3): Modifying the model architecture to improve the extraction of relevant features and mitigate overfitting.
- Change the trained model (I.4): Refining the trained model post-training, such as through model compression techniques, to enhance its generalization capabilities.
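Strategy I.2 amounts to data augmentation. A minimal sketch, assuming a simple Gaussian positional perturbation (the noise model and sample format are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_history(history, sigma=0.05):
    """Natural-style perturbation: small positional noise on observed points."""
    return history + rng.normal(0.0, sigma, size=history.shape)

def augment(training_set, copies=1):
    """I.2: extend the training set with perturbed copies of each sample."""
    augmented = list(training_set)
    for _ in range(copies):
        augmented += [(perturb_history(h), f) for h, f in training_set]
    return augmented

# Toy training set of (history, future) pairs.
train = [(np.zeros((4, 2)), np.ones((3, 2))) for _ in range(10)]
bigger = augment(train, copies=2)
print(len(bigger))  # 30
```

Training on the augmented set exposes the model to input variation it would otherwise only meet at test time, which is the mechanism behind the robustness gain.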
The paper then provides detailed insights into specific methods for each of the evaluation and improvement strategies, including scenario importance sampling, geographic importance sampling, agent removal, geometric transformations, adversarial attacks/training, and contextual map perturbations.
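Two of the listed perturbation methods, geometric transformations and agent removal, can be sketched as pure functions on a scene tensor. The scene layout (agents x timesteps x 2) and function names are assumptions for illustration:

```python
import numpy as np

def rotate_scene(trajectories, angle_rad):
    """Geometric transformation: rotate all agent trajectories about the origin."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s], [s, c]])
    return trajectories @ R.T

def remove_agent(trajectories, idx):
    """Agent removal: drop one agent from the scene."""
    return np.delete(trajectories, idx, axis=0)

# Scene of 3 agents, 4 timesteps each, (x, y) positions.
scene = np.arange(24, dtype=float).reshape(3, 4, 2)
rotated = rotate_scene(scene, np.pi / 2)
smaller = remove_agent(scene, 0)
print(rotated.shape, smaller.shape)  # (3, 4, 2) (2, 4, 2)
```

A robust predictor should be roughly equivariant to such rotations and stable when a non-interacting agent is removed; large output changes flag reliance on spurious features.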
Finally, the paper discusses recent developments in the field, highlights key research gaps, and suggests potential future research directions to advance the robustness of trajectory prediction models for autonomous vehicles.
Key Excerpts
"Autonomous vehicles rely on accurate trajectory prediction to inform decision-making processes related to navigation and collision avoidance."
"Many deep learning models are known to be sensitive to small errors and susceptible to external attacks, potentially resulting in undesirable behavior and decreased performance."
"Robustness against spurious features in trajectory prediction can be categorized into adversarial and natural robustness."
"Assessing the resistance to overfitting can be measured through the train/test split, where the key is to determine whether the training and test data are independent and identically distributed samples from the same data distribution."