TY - GEN
T1 - DDformer
T2 - 19th International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2024
AU - Kawano, Shotaro
AU - Kawahara, Takayuki
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Recently, the large amounts of time series data generated by IoT devices have been used for forecasting. Various multivariate time series forecasting models have been developed using deep learning. Among them, Transformer-based models, which can extract long-term dependencies within sequences, have attracted significant attention. However, Transformers must also effectively capture dependencies across multiple time series. Additionally, a simplified structure is required for implementation on IoT devices, and models are needed that mitigate the impact of noise present in time series data. In this paper, we propose a Transformer-based model called DDformer to address these challenges. DDformer is designed to capture both temporal and spatial dependencies in time series data. It decomposes inputs into trend and seasonal components using decomposition layers and enhances the features of each time step and variable with dimension expansion/reduction layers. When validated on energy, financial, and weather datasets, DDformer reduced prediction error by up to 45.9% compared to the state-of-the-art model (FEDformer).
AB - Recently, the large amounts of time series data generated by IoT devices have been used for forecasting. Various multivariate time series forecasting models have been developed using deep learning. Among them, Transformer-based models, which can extract long-term dependencies within sequences, have attracted significant attention. However, Transformers must also effectively capture dependencies across multiple time series. Additionally, a simplified structure is required for implementation on IoT devices, and models are needed that mitigate the impact of noise present in time series data. In this paper, we propose a Transformer-based model called DDformer to address these challenges. DDformer is designed to capture both temporal and spatial dependencies in time series data. It decomposes inputs into trend and seasonal components using decomposition layers and enhances the features of each time step and variable with dimension expansion/reduction layers. When validated on energy, financial, and weather datasets, DDformer reduced prediction error by up to 45.9% compared to the state-of-the-art model (FEDformer).
KW - Multivariate time series forecasting
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85216530879&partnerID=8YFLogxK
U2 - 10.1109/iSAI-NLP64410.2024.10799493
DO - 10.1109/iSAI-NLP64410.2024.10799493
M3 - Conference contribution
AN - SCOPUS:85216530879
T3 - 19th International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2024
BT - 19th International Joint Symposium on Artificial Intelligence and Natural Language Processing, iSAI-NLP 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 11 November 2024 through 15 November 2024
ER -