The paper investigates the use of data augmentation techniques to improve the generalization of imitation learning (IL) agents in game environments. The authors first collect a set of demonstrations in a training environment where an agent must navigate to a building, press a button to open a door, and reach the goal position inside. They then train IL agents on the original dataset as well as on augmented datasets, applying various combinations of data augmentations: Gaussian noise, uniform noise, scaling, state mixup, continuous dropout, and semantic dropout.
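To make the augmentation families concrete, the sketch below shows one plausible implementation of each, applied to a flat state-observation vector. This is an illustrative reconstruction, not the authors' code: the function names, parameter values (noise scales, dropout probability, the Beta parameter for mixup), and the feature-group interface for semantic dropout are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(obs, sigma=0.05):
    """Add zero-mean Gaussian noise to every feature (sigma is illustrative)."""
    return obs + rng.normal(0.0, sigma, size=obs.shape)

def uniform_noise(obs, eps=0.05):
    """Add noise drawn uniformly from [-eps, eps] to every feature."""
    return obs + rng.uniform(-eps, eps, size=obs.shape)

def scaling(obs, low=0.9, high=1.1):
    """Multiply each feature by a random scale factor in [low, high]."""
    return obs * rng.uniform(low, high, size=obs.shape)

def state_mixup(obs_a, obs_b, alpha=0.4):
    """Convexly interpolate two states with a Beta(alpha, alpha) weight."""
    lam = rng.beta(alpha, alpha)
    return lam * obs_a + (1.0 - lam) * obs_b

def continuous_dropout(obs, p=0.1):
    """Zero out individual features independently with probability p."""
    mask = rng.random(obs.shape) >= p
    return obs * mask

def semantic_dropout(obs, groups, p=0.1):
    """Zero out whole semantic feature groups (e.g. all goal-related
    features) together, each group with probability p. `groups` is a
    list of index lists -- an assumed interface for illustration."""
    out = obs.copy()
    for idx in groups:
        if rng.random() < p:
            out[idx] = 0.0
    return out
```

In a setup like this, each augmentation would be applied to the demonstration states during training while the expert actions are kept unchanged.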
The authors evaluate the trained agents in four distinct test environments, which differ from the training environment in terms of goal position, obstacle placement, and overall complexity. The results show that certain combinations of data augmentations can significantly improve the generalization performance of the IL agents, with the best-performing models achieving a 60% improvement over the baseline non-augmented model. The authors identify scaling, state mixup, and continuous dropout as the most consistently effective augmentations across the test environments.
The paper provides valuable insights into the use of data augmentation to address the generalization challenge in game AI, particularly for imitation learning agents. The comprehensive evaluation across multiple test environments demonstrates the potential of this approach to enhance the robustness and adaptability of game-playing agents.