This paper presents a novel approach to the problem of robot object search in large, unstructured environments. The key contributions are:
Formulating the search problem as a belief Markov decision process with options (BMDP-O). This allows the agent to consider sequences of actions (options) to move between regions of interest, enabling more efficient scaling to large environments.
Introducing an approximate "lite" formulation that simplifies the belief updates, reducing the BMDP-O to an MDP-O. This achieves comparable search times with faster computation.
Enabling the use of customizable fields of view, allowing adaptation across multiple sensor types.
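To make the contributions above concrete, here is a minimal sketch of the belief maintenance and option selection underlying a BMDP-O-style search. It is illustrative only: the region names, the greedy "detection probability per unit cost" rule, and the single detection parameter are assumptions for this sketch, not the paper's actual formulation or solver.

```python
# Hypothetical sketch: the environment is split into regions, the agent
# keeps a belief (probability the object is in each region), and an
# "option" corresponds to traveling to a region and sweeping it.
# All names and parameters are illustrative, not from the paper.

def update_belief(belief, searched_region, p_detect):
    """Bayesian update after sweeping `searched_region` WITHOUT finding
    the object, given a per-sweep detection probability p_detect."""
    posterior = dict(belief)
    # Likelihood of the negative observation under each hypothesis:
    # only the searched region could have produced a detection.
    posterior[searched_region] *= 1.0 - p_detect
    norm = sum(posterior.values())
    return {r: p / norm for r, p in posterior.items()}

def pick_option(belief, travel_cost, p_detect):
    """Greedy option selection: maximize expected detection probability
    per unit of travel cost (a common baseline heuristic, not the
    paper's policy)."""
    return max(belief, key=lambda r: p_detect * belief[r] / travel_cost[r])

belief = {"kitchen": 0.5, "lab": 0.3, "hallway": 0.2}
travel_cost = {"kitchen": 10.0, "lab": 4.0, "hallway": 2.0}

option = pick_option(belief, travel_cost, p_detect=0.8)
belief = update_belief(belief, option, p_detect=0.8)
```

Because an option bundles travel and a full sweep into one macro-action, the planner reasons over a handful of regions instead of every low-level motion step, which is what lets this style of formulation scale to large environments.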
The BMDP-O formulation outperforms baseline approaches such as greedy search and direct policy search in both search time and consistency. The "lite" formulation further improves computational efficiency, at the cost of a slight increase in search time relative to the full BMDP-O model. The results demonstrate the benefits of the proposed approach in large, unstructured environments, especially across different sensor configurations.
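The customizable field-of-view contribution can be sketched with a simple geometric visibility check, where swapping sensor types just means changing two parameters. This is an illustrative model under assumed parameters (range and angular half-width), not the paper's sensor implementation.

```python
import math

# Illustrative field-of-view model: a target is visible if it lies
# within the sensor's maximum range and angular half-width. A wide-angle
# camera and a narrow beam differ only in these two parameters.

def in_fov(sensor_xy, heading, target_xy, max_range, half_angle):
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    # Smallest signed angle between the sensor heading and the bearing
    # to the target, wrapped into [-pi, pi].
    bearing = math.atan2(dy, dx)
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle

# The same target seen by a wide camera but missed by a narrow beam
# pointed in the same direction.
wide = in_fov((0, 0), 0.0, (3, 2), max_range=5.0, half_angle=math.radians(45))
narrow = in_fov((0, 0), 0.0, (3, 2), max_range=5.0, half_angle=math.radians(10))
```

Parameterizing visibility this way keeps the planner unchanged when the sensor changes: only the observation model that feeds the belief update needs the new range and angle.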
Key insights distilled from the source paper by Matthew Coll... at arxiv.org, 04-08-2024: https://arxiv.org/pdf/2404.04186.pdf