The article introduces a multifidelity training approach for scientific machine learning that leverages data of varying fidelities and costs. It addresses the challenge of learning accurate surrogate models when high-fidelity data are scarce. By combining high- and low-fidelity data, the proposed method reduces model variance and improves accuracy. Theoretical analyses establish the approach's accuracy and its robustness to small training budgets. Numerical results confirm that multifidelity learned models achieve lower variance than standard models trained only on high-fidelity data.
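The core idea of variance reduction by combining high- and low-fidelity data can be illustrated with a control-variate-style estimator. The sketch below is not the article's method; the functions `f_hi` and `f_lo` and the sampling budgets are hypothetical stand-ins for an expensive accurate model and a cheap approximate one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fidelity pair: f_hi is expensive and accurate,
# f_lo is a cheap, biased approximation of the same quantity.
def f_hi(x):
    return np.sin(x) + 0.1 * x**2

def f_lo(x):
    return np.sin(x)

# Budget: few high-fidelity samples, many low-fidelity samples.
x_hi = rng.uniform(0.0, 1.0, 20)
x_lo = rng.uniform(0.0, 1.0, 2000)

y_hi = f_hi(x_hi)
y_lo_small = f_lo(x_hi)   # low fidelity at the high-fidelity points
y_lo_large = f_lo(x_lo)   # low fidelity on the large sample

# Control-variate coefficient from the sample covariance.
alpha = np.cov(y_hi, y_lo_small)[0, 1] / np.var(y_lo_small, ddof=1)

# Multifidelity estimate of E[f_hi(X)] versus the HF-only baseline;
# the correction term exploits the cheap large sample to cut variance.
est_hf = y_hi.mean()
est_mf = est_hf + alpha * (y_lo_large.mean() - y_lo_small.mean())

print(f"HF-only estimate: {est_hf:.4f}, multifidelity estimate: {est_mf:.4f}")
```

Both estimators are unbiased for the high-fidelity mean; when the fidelities are strongly correlated, the multifidelity estimate has lower variance at the same high-fidelity cost, which is the effect the article's training approach exploits for learned surrogate models.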