The paper proposes TEMPO, a prompt-based generative pre-trained transformer for time series forecasting. TEMPO consists of two key components:
Modeling time series patterns: TEMPO decomposes the time series input into trend, seasonality, and residual components using seasonal-trend decomposition based on Loess (STL). Each component is then mapped to its corresponding hidden space to construct the time series input embedding.
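To make the decomposition step concrete, here is a minimal NumPy sketch of an additive trend/seasonality/residual split. It is a simplified stand-in for STL (centered moving-average trend plus periodic-mean seasonality, rather than Loess smoothing), and the series length and period are illustrative, not taken from the paper.

```python
import numpy as np

def decompose(series: np.ndarray, period: int):
    """Simplified additive decomposition standing in for STL:
    trend via centered moving average, seasonality via periodic means."""
    n = len(series)
    # Trend: moving average over one full period (edges padded by repetition).
    kernel = np.ones(period) / period
    padded = np.pad(series, period // 2, mode="edge")
    trend = np.convolve(padded, kernel, mode="same")[period // 2 : period // 2 + n]
    # Seasonality: mean of the detrended series at each phase, zero-centered.
    detrended = series - trend
    seasonal_means = np.array([detrended[p::period].mean() for p in range(period)])
    seasonal_means -= seasonal_means.mean()
    seasonal = seasonal_means[np.arange(n) % period]
    # Residual: whatever trend and seasonality do not explain.
    residual = series - trend - seasonal
    return trend, seasonal, residual

t = np.arange(96)
x = 0.05 * t + np.sin(2 * np.pi * t / 12)           # linear trend + period-12 cycle
trend, seasonal, residual = decompose(x, period=12)
assert np.allclose(trend + seasonal + residual, x)  # components sum back to the input
```

By construction the three components sum exactly back to the input, which is the property TEMPO relies on when it embeds each component separately.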
Prompt-based adaptation: TEMPO utilizes a soft prompt to efficiently tune the GPT for forecasting tasks. The prompt encodes temporal knowledge of trend and seasonality, guiding the reuse of this information.
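The mechanics of soft prompting can be sketched in a few lines: trainable vectors are prepended along the sequence axis to the embedded input before it enters the frozen transformer. The sizes below are hypothetical placeholders, not the paper's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_prompt, n_patches = 16, 4, 8   # hypothetical sizes, not from the paper

# Learnable soft-prompt vectors; during tuning, only these (and a few small
# layers) are updated while the pre-trained GPT weights stay frozen.
prompt = rng.normal(size=(n_prompt, d_model))

# Embedded time-series patches for one decomposed component (e.g. trend).
patch_embeddings = rng.normal(size=(n_patches, d_model))

# The prompt is prepended along the sequence axis before the transformer.
gpt_input = np.concatenate([prompt, patch_embeddings], axis=0)
assert gpt_input.shape == (n_prompt + n_patches, d_model)
```

The appeal of this design is parameter efficiency: the forecasting task is encoded in a handful of prompt vectors rather than in full-model fine-tuning.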
The authors conduct a formal analysis, bridging time series and frequency domains, to highlight the necessity of decomposing time series components. They also theoretically show that the attention mechanism alone may not be able to disentangle the trend and seasonal signals automatically.
Extensive experiments on benchmark datasets and two multimodal datasets (GDELT and TETS) demonstrate TEMPO's superior performance in zero-shot and multimodal settings, highlighting its potential as a foundational model for time series forecasting.
The paper also provides an interpretable framework for understanding the interactions among the input components using a generalized additive model (GAM) and SHAP values.
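To illustrate the SHAP side of such an interpretability framework, the sketch below computes exact Shapley values by subset enumeration for a toy additive value function over the three components. The contribution numbers are invented for illustration; for an additive (GAM-style) model, each component's Shapley value reduces to its own contribution.

```python
from itertools import combinations
from math import factorial

# Toy additive contributions of each component to a forecast (hypothetical).
contrib = {"trend": 0.8, "seasonal": 0.3, "residual": -0.1}
players = list(contrib)

def value(coalition):
    # For an additive model, a coalition's value is the sum of its members'
    # individual contributions.
    return sum(contrib[p] for p in coalition)

def shapley(player):
    # Exact Shapley value: weighted average of the player's marginal
    # contribution over all coalitions of the other players.
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(S + (player,)) - value(S))
    return total

phi = {p: shapley(p) for p in players}
# Additivity: each Shapley value equals the component's own contribution.
assert all(abs(phi[p] - contrib[p]) < 1e-9 for p in players)
```

This degenerate case is exactly why GAMs pair well with SHAP: the attribution of each input component is unambiguous, which makes the interactions TEMPO exposes easy to read off.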