TY - GEN
T1 - DMSformer
T2 - 9th International Conference on Smart Internet of Things, SmartIoT 2025
AU - Kawano, Shotaro
AU - Kawahara, Takayuki
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - With the widespread adoption of IoT systems such as smart homes and smart cities, there is a growing demand for lightweight models that enable low-latency, low-power, and high-accuracy time-series forecasting on edge devices. Conventional Transformer-based models achieve high forecasting accuracy through self-attention, but their large computational requirements and parameter counts make them unsuitable for real-time deployment on resource-constrained IoT and edge devices. In this study, we propose the Decomposition and Multi-Scale Transformer (DMSformer), a lightweight time-series forecasting model that does not use self-attention but instead combines a decomposition layer, Multi-Scale Blocks (MSB) designed to capture diverse time-series patterns, and Feed-Forward Networks (FFN). DMSformer captures the unique characteristics of each series (intra-series) as well as the dependencies between variables (inter-series) while reducing both the computational cost, measured in Floating Point Operations (FLOPs), and the number of model parameters. Comparative experiments on five real-world datasets show that, compared to state-of-the-art Transformer-based models, DMSformer achieved the best forecasting accuracy on two datasets and delivered competitive performance on the remaining three. During inference, DMSformer reduces FLOPs by up to 93.5%, parameter count by up to 86.0%, and memory usage by up to 61.2%. This study opens up new possibilities for the design of highly efficient time-series forecasting models.
AB - With the widespread adoption of IoT systems such as smart homes and smart cities, there is a growing demand for lightweight models that enable low-latency, low-power, and high-accuracy time-series forecasting on edge devices. Conventional Transformer-based models achieve high forecasting accuracy through self-attention, but their large computational requirements and parameter counts make them unsuitable for real-time deployment on resource-constrained IoT and edge devices. In this study, we propose the Decomposition and Multi-Scale Transformer (DMSformer), a lightweight time-series forecasting model that does not use self-attention but instead combines a decomposition layer, Multi-Scale Blocks (MSB) designed to capture diverse time-series patterns, and Feed-Forward Networks (FFN). DMSformer captures the unique characteristics of each series (intra-series) as well as the dependencies between variables (inter-series) while reducing both the computational cost, measured in Floating Point Operations (FLOPs), and the number of model parameters. Comparative experiments on five real-world datasets show that, compared to state-of-the-art Transformer-based models, DMSformer achieved the best forecasting accuracy on two datasets and delivered competitive performance on the remaining three. During inference, DMSformer reduces FLOPs by up to 93.5%, parameter count by up to 86.0%, and memory usage by up to 61.2%. This study opens up new possibilities for the design of highly efficient time-series forecasting models.
KW - Deep learning
KW - Time-series forecasting
KW - Transformer
UR - https://www.scopus.com/pages/publications/105032184280
U2 - 10.1109/SmartIoT66867.2025.00058
DO - 10.1109/SmartIoT66867.2025.00058
M3 - Conference contribution
AN - SCOPUS:105032184280
T3 - Proceedings - 2025 IEEE International Conference on Smart Internet of Things, SmartIoT 2025
SP - 355
EP - 361
BT - Proceedings - 2025 IEEE International Conference on Smart Internet of Things, SmartIoT 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 November 2025 through 20 November 2025
ER -