CATS introduces a method for constructing Auxiliary Time Series (ATS) that represent inter-series relationships and inject them into the forecaster, achieving state-of-the-art results in Multivariate Time Series Forecasting (MTSF).
The MG-TSD model is a multi-granularity time series diffusion model that improves forecasting performance by guiding intermediate stages of the diffusion process with multi-granularity targets.
Normalization methods applied in the time domain can obscure important frequency-specific patterns in time series data; FredNormer proposes a novel approach by normalizing in the frequency domain, leading to more robust and accurate forecasting, especially for non-stationary time series.
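To make the frequency-domain idea concrete, here is a minimal sketch (not FredNormer's actual formulation): instead of z-scoring raw values in time, each frequency bin's coefficients are standardized across the batch, so a dominant seasonal component no longer swamps subtler frequencies.

```python
import numpy as np

def freq_normalize(x):
    """Illustrative frequency-domain normalization (a simplified sketch,
    not FredNormer's method): z-score each frequency bin across the batch,
    then transform back to the time domain."""
    X = np.fft.rfft(x, axis=-1)                      # (batch, time) -> (batch, freq)
    mu = X.mean(axis=0, keepdims=True)               # per-frequency mean over the batch
    sd = np.abs(X).std(axis=0, keepdims=True) + 1e-8 # per-frequency magnitude spread
    Xn = (X - mu) / sd
    return np.fft.irfft(Xn, n=x.shape[-1], axis=-1)

rng = np.random.default_rng(0)
# 16 series of length 64 sharing a strong sinusoidal component
batch = rng.normal(size=(16, 64)) + 5.0 * np.sin(np.linspace(0, 8 * np.pi, 64))
out = freq_normalize(batch)
print(out.shape)  # (16, 64)
```

The round trip through `rfft`/`irfft` keeps the output real-valued and the same length as the input.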
SAMBA, a novel deep learning model for long-term time series forecasting, achieves state-of-the-art performance by simplifying the Mamba architecture and introducing a disentangled encoding strategy to effectively capture order, semantic, and cross-variate dependencies in time series data.
Incorporating time differences between observations directly into the kernel learning process significantly improves the accuracy of forecasting irregularly sampled time series data from dynamical systems.
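A small sketch of the underlying idea (fixed hand-set kernel here, whereas the paper learns it): building the kernel directly from timestamp differences lets a Gaussian-process-style predictor handle irregular sampling without resampling onto a regular grid.

```python
import numpy as np

def time_gap_kernel(t, lengthscale=1.0):
    """RBF kernel computed from raw timestamp gaps:
    K[i, j] = exp(-(t_i - t_j)^2 / (2 * lengthscale^2))."""
    dt = t[:, None] - t[None, :]
    return np.exp(-0.5 * (dt / lengthscale) ** 2)

def gp_predict(t_train, y_train, t_test, lengthscale=1.0, noise=1e-2):
    """GP posterior mean with the gap-aware kernel (a sketch, not the
    paper's learned-kernel method)."""
    K = time_gap_kernel(t_train, lengthscale) + noise * np.eye(len(t_train))
    k_star = np.exp(-0.5 * ((t_test[:, None] - t_train[None, :]) / lengthscale) ** 2)
    return k_star @ np.linalg.solve(K, y_train)

t = np.array([0.0, 0.3, 1.1, 2.0, 4.7])  # irregularly spaced observation times
y = np.sin(t)
pred = gp_predict(t, y, np.array([1.5]))
print(pred)
```

Because the kernel consumes actual time gaps, the uneven spacing between 2.0 and 4.7 is reflected directly in the correlation structure rather than being distorted by interpolation.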
Deep learning models for time series forecasting often struggle with non-stationary data; GAS-Norm improves performance by combining a Generalized Autoregressive Score (GAS) model with deep neural networks to adaptively normalize input data and denormalize predictions.
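A simplified score-driven sketch in the spirit of GAS-Norm (Gaussian score updates with fixed step sizes; the actual method estimates the GAS model's parameters): a time-varying mean and variance are tracked online, used to normalize the input, and kept so predictions can be denormalized.

```python
import numpy as np

def gas_norm(x, alpha=0.1, beta=0.1):
    """Sketch of score-driven adaptive normalization (simplified, not the
    paper's exact recursions). Returns the normalized series plus the
    per-step statistics needed to denormalize model outputs."""
    mu, var = x[0], np.var(x) + 1e-8
    mus, vars_ = [], []
    for xt in x:
        mus.append(mu)
        vars_.append(var)
        err = xt - mu
        mu = mu + alpha * err               # score-driven update of the mean
        var = var + beta * (err**2 - var)   # score-driven update of the variance
        var = max(var, 1e-8)
    mus, vars_ = np.array(mus), np.array(vars_)
    return (x - mus) / np.sqrt(vars_), mus, vars_

# non-stationary input: an abrupt shift in level and scale halfway through
x = np.concatenate([np.random.default_rng(1).normal(0, 1, 100),
                    np.random.default_rng(2).normal(10, 3, 100)])
z, mus, vars_ = gas_norm(x)
print(float(z.std()), float(x.std()))
```

The tracked statistics adapt to the regime change, so the normalized series has far less spread than the raw one, which is what makes the downstream network's job easier.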
Non-stationarity in time series data requires a two-pronged approach: mitigating its impact on short-term modeling while leveraging it for long-term dependency modeling.
Timer-XL, a generative Transformer model, achieves state-of-the-art time series forecasting by leveraging long contexts and a novel attention mechanism called TimeAttention to capture complex temporal and variable dependencies.
TimeCNN, a novel deep learning model for time series forecasting, excels by capturing dynamic, multifaceted cross-variable correlations at each time point using a timepoint-independent convolutional approach, outperforming existing Transformer-based models in accuracy and efficiency.
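The cross-variable idea above can be sketched as follows (inspired by, not identical to, TimeCNN): each time point gets its own weight matrix mixing the variables, so the modeled cross-variable correlation is free to differ at every time step.

```python
import numpy as np

def per_timepoint_mix(x, W):
    """Per-timepoint cross-variable mixing (illustrative sketch).
    x: (batch, time, vars); W: (time, vars, vars), one mixing matrix
    per time step. Output y[b, t, :] = x[b, t, :] @ W[t]."""
    return np.einsum('btv,tvu->btu', x, W)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 24, 5))              # 8 series, 24 steps, 5 variables
W = rng.normal(size=(24, 5, 5)) / np.sqrt(5) # independent weights per time point
y = per_timepoint_mix(x, W)
print(y.shape)  # (8, 24, 5)
```

Since the mixing is a single batched matrix product with no attention over time, its cost scales linearly in sequence length, which is the efficiency contrast drawn against Transformer-based models.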