Core Concepts
Transferring knowledge from related source tasks can significantly accelerate the learning of safety-constrained target tasks, enabling global exploration of multiple disjoint safe regions.
Summary
The paper proposes a transfer safe sequential learning framework to facilitate real-world experiments that must respect unknown safety constraints. The key ideas are:
- Modeling the source and target tasks jointly as multi-output Gaussian processes (GPs) to leverage correlated knowledge from the source task.
- Introducing a modularized approach to multi-output GPs that can alleviate the computational burden of incorporating source data, making the method more practical for real-world applications.
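The joint source–target model in the first key idea can be sketched with one standard multi-output GP construction, the intrinsic coregionalization model (ICM), where the joint covariance factors as B[t, t'] · k(x, x'). The data, length-scale, and task-correlation matrix B below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # Squared-exponential kernel on 1-D inputs
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def icm(X1, t1, X2, t2, B, ls=1.0):
    # Intrinsic coregionalization: K((x,t),(x',t')) = B[t,t'] * k(x,x')
    return B[np.ix_(t1, t2)] * rbf(X1, X2, ls)

# Hypothetical data: dense source observations, only two target observations
Xs = np.linspace(-3.0, 3.0, 20); ys = np.sin(Xs)
Xt = np.array([0.0, 0.5]);       yt = 0.8 * np.sin(Xt)

X = np.concatenate([Xs, Xt])
t = np.concatenate([np.zeros(Xs.size, int), np.ones(Xt.size, int)])
y = np.concatenate([ys, yt])

B = np.array([[1.0, 0.8],
              [0.8, 1.0]])              # assumed task-correlation matrix
K = icm(X, t, X, t, B) + 1e-4 * np.eye(X.size)
alpha = np.linalg.solve(K, y)

# Predict the *target* task everywhere; far from target data the
# prediction is guided by the correlated source observations
Xq = np.linspace(-3.0, 3.0, 7)
mu = icm(Xq, np.ones(Xq.size, int), X, t, B) @ alpha
```

Even though the target task was only observed near x = 0, its posterior mean tracks a scaled copy of the source function across the whole input range, which is exactly the guidance effect the paper exploits.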
The paper first analyzes the local exploration problem of conventional safe learning methods, showing that they are limited to the neighborhood of the initial observations due to the properties of common stationary kernels. The proposed transfer learning approach can explore beyond this local region by incorporating guidance from the source task.
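The local-exploration argument can be made concrete with a toy safety GP: under a stationary RBF kernel, the posterior far from the observed safe points reverts to the prior, so a pessimistic lower confidence bound can never certify distant points as safe. All values below (length-scale, safety threshold, confidence multiplier) are illustrative assumptions:

```python
import numpy as np

def rbf(X1, X2, ls=0.5):
    # Stationary squared-exponential kernel
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Two initial safe observations of a safety function (safety threshold 0)
X = np.array([0.0, 0.2]); y = np.array([1.0, 1.1])
K = rbf(X, X) + 1e-4 * np.eye(2)

Xq = np.array([0.1, 3.0])            # one nearby query, one distant query
Kq = rbf(Xq, X)
mu = Kq @ np.linalg.solve(K, y)
var = 1.0 - np.einsum("ij,ji->i", Kq, np.linalg.solve(K, Kq.T))
lcb = mu - 2.0 * np.sqrt(var)        # pessimistic safety estimate

# The nearby point is certified safe (lcb > 0); far from the data the
# posterior reverts to the prior (mu -> 0, var -> 1), so lcb < 0 and the
# point can never be certified without intermediate safe observations.
```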
Empirically, the transfer safe learning methods demonstrate several benefits:
- Faster learning of the target task with lower data consumption
- Ability to globally explore multiple disjoint safe regions under the guidance of source knowledge
- Comparable computation time to conventional safe learning methods, thanks to the modularized GP approach
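One way to see why the modularization keeps computation comparable is to precompute and freeze the source-side components once, then fit only a small per-iteration model on the target side. The residual-style decomposition below is an illustrative sketch of this idea, not the paper's exact construction; the data and the correlation parameter rho are assumptions:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

# --- source module: fit once, then freeze (hypothetical source data) ---
Xs = np.linspace(-3.0, 3.0, 100)
ys = np.sin(Xs)
Ls = np.linalg.cholesky(rbf(Xs, Xs) + 1e-4 * np.eye(Xs.size))
alpha_s = np.linalg.solve(Ls.T, np.linalg.solve(Ls, ys))

def source_mean(Xq):
    # Reuses the cached source factorization; nothing is recomputed
    return rbf(Xq, Xs) @ alpha_s

# --- target module: small per-iteration GP on source-corrected residuals ---
def target_predict(Xt, yt, Xq, rho=0.8, noise=1e-4):
    # rho: assumed source-target correlation (a modeling choice here)
    resid = yt - rho * source_mean(Xt)
    Kt = rbf(Xt, Xt) + noise * np.eye(Xt.size)
    w = np.linalg.solve(Kt, resid)
    return rho * source_mean(Xq) + rbf(Xq, Xt) @ w

# Per-iteration cost scales with the (small) target set only
Xt = np.array([0.0, 0.5])
mu = target_predict(Xt, 0.8 * np.sin(Xt), np.array([-2.0]))
```

The expensive O(n³) factorization of the large source Gram matrix happens once; each sequential-learning step only solves a system in the (much smaller) number of target observations.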
The paper also discusses the limitations of the proposed method, such as the requirement for accurate source-relevant hyperparameters and the reliance on multi-task correlation between the source and target tasks.
Statistics
The safe set coverage (true positive area) of the proposed transfer learning methods is significantly larger than the baseline's, indicating their ability to explore more of the safe space.
The false positive area (unsafe regions identified as safe) is smaller for the transfer learning methods, showing their improved safety modeling.
The root mean squared error (RMSE) of the target function prediction drops faster for the transfer learning methods, demonstrating their data efficiency.
Quotes
"Transfer learning can be achieved by considering the source and target tasks jointly as multi-output GPs (Journel & Huijbregts, 1976; Álvarez et al., 2012)."
"We further modularize the multi-output GPs such that the source relevant components can be pre-computed and fixed. This alleviates the complexity of multi-output GPs while the benefit is retained."