This study is the first to systematically evaluate how source-target similarity and source diversity affect zero-shot and fine-tuned transfer learning performance in time series forecasting.
The authors pre-train the DeepAR model on five public source datasets that differ in domain and size, as well as on a concatenation of all of them (the Multisource model). They then apply the pre-trained models, both zero-shot and after fine-tuning, to forecast five target datasets, including real-world wholesale sales data; a rough workflow sketch follows below.
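The summary does not include code, so the following is only a minimal sketch of such a pre-train / zero-shot workflow, assuming the GluonTS implementation of DeepAR. The dataset names, hyperparameters, and the fine-tuning note are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: pre-train DeepAR on a source dataset, then apply it
# zero-shot to a different target dataset. Dataset choices and
# hyperparameters are illustrative, not the paper's configuration.
from gluonts.dataset.repository import get_dataset
from gluonts.torch import DeepAREstimator

# Pre-train on a public source dataset (M4 hourly as an example source).
source = get_dataset("m4_hourly")
estimator = DeepAREstimator(
    freq=source.metadata.freq,
    prediction_length=source.metadata.prediction_length,
    trainer_kwargs={"max_epochs": 10},
)
predictor = estimator.train(source.train)

# Zero-shot transfer: apply the pre-trained predictor to an unseen target
# dataset with the same frequency, without any re-training.
target = get_dataset("electricity")  # hypothetical target choice
zero_shot_forecasts = list(predictor.predict(target.train))

# Fine-tuning would continue training the pre-trained network on the
# target's training split (e.g. by resuming from the trained checkpoint);
# the exact mechanism varies by GluonTS version, so it is omitted here.
```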
The authors use two feature sets to quantify the similarity between sources and targets, as well as the diversity of the source data; a generic illustration of such feature-based measures follows below.
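The summary does not say which feature sets the authors use, so the sketch below substitutes a few hand-picked per-series features (mean, variability, lag-1 autocorrelation, trend slope) purely to illustrate the idea of feature-based similarity and diversity; all function names and the distance/diversity definitions are assumptions.

```python
import numpy as np

def series_features(y: np.ndarray) -> np.ndarray:
    """Small illustrative feature vector for one series (not the paper's sets)."""
    y = np.asarray(y, dtype=float)
    lag1 = np.corrcoef(y[:-1], y[1:])[0, 1]          # lag-1 autocorrelation
    trend = np.polyfit(np.arange(len(y)), y, 1)[0]   # linear trend slope
    return np.array([y.mean(), y.std(), np.diff(y).std(), lag1, trend])

def dataset_features(series_list) -> np.ndarray:
    """Stack per-series features into an (n_series, n_features) matrix."""
    return np.vstack([series_features(s) for s in series_list])

def similarity(source_feats: np.ndarray, target_feats: np.ndarray) -> float:
    """Distance between mean feature vectors after standardizing each feature
    with statistics pooled over both datasets (lower = more similar)."""
    pooled = np.vstack([source_feats, target_feats])
    mu, sd = pooled.mean(0), pooled.std(0) + 1e-9
    src = ((source_feats - mu) / sd).mean(0)
    tgt = ((target_feats - mu) / sd).mean(0)
    return float(np.linalg.norm(src - tgt))

def diversity(source_feats: np.ndarray) -> float:
    """Spread of the source series in feature space: mean per-feature std."""
    return float(source_feats.std(0).mean())

# Usage with synthetic random-walk series:
rng = np.random.default_rng(0)
source = [rng.normal(size=200).cumsum() for _ in range(50)]
target = [rng.normal(size=200).cumsum() for _ in range(10)]
sf, tf = dataset_features(source), dataset_features(target)
print(f"distance={similarity(sf, tf):.3f}  diversity={diversity(sf):.3f}")
```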
They find that:

- The Multisource and M4 source models achieve the best transfer learning accuracy, and fine-tuning generally improves performance, except for the Multisource and M4 models.
- Pre-trained models also yield lower bias than models trained from scratch and than the benchmark models.
- Uncertainty estimation is best for the Multisource model, and fine-tuning usually improves it.
- In most cases, fine-tuning a pre-trained model is faster than training from scratch, with the M4 model being the fastest to fine-tune.
Key insights extracted from arxiv.org, by Claudia Ehri..., 04-10-2024
Source: https://arxiv.org/pdf/2404.06198.pdf