Core Concepts
Embedding time series patches independently yields better time series representations than modeling dependencies between patches.
Abstract
The paper introduces the concept of patch independence for time series representation learning.
Proposes a novel method, Patch Independence for Time Series (PITS), which embeds each patch independently.
Uses simple patch reconstruction and a patch-wise MLP architecture for better efficiency and performance.
Introduces complementary contrastive learning to efficiently capture information from adjacent parts of the series.
Outperforms state-of-the-art Transformer-based models on forecasting and classification tasks while being more efficient.
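The patch-independent idea above can be sketched in a few lines: a single shared MLP encodes each patch on its own (no attention or mixing across patches), and a linear head reconstructs each patch from its own embedding. This is a minimal illustrative sketch, not the paper's exact architecture; all names and sizes (`patch_len`, `d_model`, the weight matrices) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
patch_len, d_model = 12, 32  # illustrative sizes, not the paper's config

# Shared MLP weights (one hidden layer), applied to every patch independently.
W1 = rng.normal(scale=0.1, size=(patch_len, d_model))
W2 = rng.normal(scale=0.1, size=(d_model, d_model))
W_head = rng.normal(scale=0.1, size=(d_model, patch_len))

def embed_patch(patch):
    """Encode one patch with the shared MLP (ReLU hidden layer)."""
    h = np.maximum(patch @ W1, 0.0)
    return h @ W2

def reconstruct(series):
    """Split a series into non-overlapping patches, embed each one
    independently, and reconstruct each patch from its own embedding."""
    patches = series.reshape(-1, patch_len)          # (num_patches, patch_len)
    z = np.stack([embed_patch(p) for p in patches])  # no cross-patch mixing
    return z @ W_head                                # per-patch reconstruction

series = rng.normal(size=4 * patch_len)
recon = reconstruct(series)
print(recon.shape)  # (4, 12): one reconstruction per patch
```

Because no information flows between patches, perturbing one patch leaves every other patch's reconstruction unchanged, which is exactly the patch-independence property being argued for.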
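Complementary contrastive learning can likewise be sketched with a toy setup: each series is split into two views by complementary masking (every time step survives in exactly one view), and the two views of the same series form a positive pair under an InfoNCE-style loss. This is a hedged sketch under assumed names and sizes (`encode`, `complementary_views`, `tau`), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
batch, length, d = 4, 16, 8
W = rng.normal(scale=0.1, size=(length, d))  # stand-in encoder: one linear map

def encode(x):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # L2-normalize

def complementary_views(x):
    """Zero out complementary subsets of time steps in the two views."""
    mask = rng.random(x.shape) < 0.5
    return x * mask, x * (~mask)  # each step is kept in exactly one view

def info_nce(z1, z2, tau=0.1):
    """Contrast view-1 rows against view-2: matching rows are positives."""
    logits = (z1 @ z2.T) / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

x = rng.normal(size=(batch, length))
v1, v2 = complementary_views(x)
loss = info_nce(encode(v1), encode(v2))
print(float(loss))  # scalar contrastive loss
```

Since the two views are complementary, pulling their embeddings together encourages each patch embedding to carry information about the adjacent, masked-out parts of the series.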
Stats
"Our proposed method improves time series forecasting and classification performance compared to state-of-the-art Transformer-based models."
"Code is available at this repository: https://github.com/seunghan96/pits."