Basic concepts
Embedding time series patches independently yields better representation learning results than capturing dependencies between patches.
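A minimal sketch of what patch-independent embedding can look like, assuming PyTorch and an already-patchified input; the module name `PatchIndependentEmbedder`, the hidden width, and the MLP depth are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch (not code from the repository): a shared MLP is
# applied to each patch in isolation, so no layer mixes information
# across patches. Shapes and sizes are assumptions.
import torch
import torch.nn as nn

class PatchIndependentEmbedder(nn.Module):
    def __init__(self, patch_len: int, dim: int = 128):
        super().__init__()
        # The same MLP is reused for every patch.
        self.mlp = nn.Sequential(
            nn.Linear(patch_len, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_len)
        # The MLP acts on the last dimension only, so each patch is
        # embedded without attention or any cross-patch interaction.
        return self.mlp(patches)  # (batch, num_patches, dim)
```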
Stats
Masked time series modeling is a self-supervised representation learning strategy. - Inspired by masked image modeling in computer vision, recent works first patchify and partially mask out a time series, then train Transformers to capture the dependencies between patches by predicting the masked patches from the unmasked ones (see the sketch after this list).
The proposed method improves time series forecasting and classification performance. - It outperforms state-of-the-art Transformer-based models on both tasks.
The proposed method is more efficient than other methods. - Code is available at this repository: https://github.com/seunghan96/pits.
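A rough illustration of the patchify-and-mask setup described above (not code from the repository): the sketch assumes a univariate series of shape `(batch, length)`, and the helper names `patchify` and `random_mask`, the patch length, and the mask ratio are hypothetical choices.

```python
# Minimal sketch of masked time series modeling preprocessing:
# split a series into non-overlapping patches, then randomly mask a
# fraction of them. A model is trained to reconstruct the masked
# patches from the unmasked ones. Values here are illustrative.
import torch

def patchify(x: torch.Tensor, patch_len: int) -> torch.Tensor:
    # x: (batch, length), with length divisible by patch_len
    batch, length = x.shape
    return x.reshape(batch, length // patch_len, patch_len)

def random_mask(patches: torch.Tensor, mask_ratio: float = 0.5):
    # patches: (batch, num_patches, patch_len)
    batch, num_patches, _ = patches.shape
    mask = torch.rand(batch, num_patches) < mask_ratio  # True = masked
    masked = patches.clone()
    masked[mask] = 0.0  # zero out the masked patches
    return masked, mask

# Usage: produce model inputs and the mask used as the prediction target.
x = torch.randn(8, 96)               # 8 series of length 96 (assumed sizes)
patches = patchify(x, patch_len=12)  # (8, 8, 12)
masked, mask = random_mask(patches, mask_ratio=0.5)
```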