References

[1] Wenjie Du, David Côté, and Yan Liu. SAITS: Self-Attention-based Imputation for Time Series. Expert Systems with Applications, 219:119619, 2023. URL: https://arxiv.org/abs/2202.08516, doi:10.1016/j.eswa.2023.119619.

[2] Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. In The Twelfth International Conference on Learning Representations. 2024. URL: https://openreview.net/forum?id=JePfAI8fah.

[3] Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Ning An, Defu Lian, Longbing Cao, and Zhendong Niu. Frequency-domain MLPs are more effective learners in time series forecasting. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, 76656–76679. Curran Associates, Inc., 2023. URL: https://proceedings.neurips.cc/paper_files/paper/2023/file/f1d16af76939f476b5f040fd1398c0a3-Paper-Conference.pdf.

[4] Yong Liu, Chenyu Li, Jianmin Wang, and Mingsheng Long. Koopa: learning non-stationary time series dynamics with Koopman predictors. In A. Oh, T. Neumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, 12271–12290. Curran Associates, Inc., 2023. URL: https://proceedings.neurips.cc/paper_files/paper/2023/file/28b3dc0970fa4624a63278a4268de997-Paper-Conference.pdf.

[5] Yunhao Zhang and Junchi Yan. Crossformer: transformer utilizing cross-dimension dependency for multivariate time series forecasting. In The Eleventh International Conference on Learning Representations. 2023. URL: https://openreview.net/forum?id=vSVLM2j9eie.

[6] Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis. In The Eleventh International Conference on Learning Representations. 2023. URL: https://openreview.net/forum?id=ju_Uqw384Oq.

[7] Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: long-term forecasting with transformers. In The Eleventh International Conference on Learning Representations. 2023. URL: https://openreview.net/forum?id=Jbdc0vTOcol.

[8] Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, and Steven Hoi. ETSformer: exponential smoothing transformers for time-series forecasting. In The Eleventh International Conference on Learning Representations. 2023. URL: https://openreview.net/forum?id=5m_3whfo483.

[9] Huiqiang Wang, Jian Peng, Feihu Huang, Jince Wang, Junhui Chen, and Yifei Xiao. MICN: multi-scale local and global context modeling for long-term series forecasting. In The Eleventh International Conference on Learning Representations. 2023. URL: https://openreview.net/forum?id=zt53IDUR1U.

[10] Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, 11121–11128. June 2023. URL: https://ojs.aaai.org/index.php/AAAI/article/view/26317, doi:10.1609/aaai.v37i9.26317.

[11] Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan K Mathur, Rajat Sen, and Rose Yu. Long-term forecasting with TiDE: time-series dense encoder. Transactions on Machine Learning Research, 2023. URL: https://openreview.net/forum?id=pCbC3aQB5W.

[12] Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, and Qiang Xu. SCINet: time series modeling and forecasting with sample convolution and interaction. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, 5816–5828. Curran Associates, Inc., 2022. URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/266983d0949aed78a16fa4782237dea7-Paper-Conference.pdf.

[13] Yong Liu, Haixu Wu, Jianmin Wang, and Mingsheng Long. Non-stationary transformers: exploring the stationarity in time series forecasting. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, 9881–9893. Curran Associates, Inc., 2022. URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/4054556fcaa934b0bf76da52cf4f92cb-Paper-Conference.pdf.

[14] Tian Zhou, Ziqing Ma, Xue Wang, Qingsong Wen, Liang Sun, Tao Yao, Wotao Yin, and Rong Jin. FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, 12677–12690. Curran Associates, Inc., 2022. URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/524ef58c2bd075775861234266e5e020-Paper-Conference.pdf.

[15] Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, and Jaegul Choo. Reversible instance normalization for accurate time-series forecasting against distribution shift. In International Conference on Learning Representations. 2022. URL: https://openreview.net/forum?id=cGDAkQo1C0p.

[16] Shizhan Liu, Hang Yu, Cong Liao, Jianguo Li, Weiyao Lin, Alex X. Liu, and Schahram Dustdar. Pyraformer: low-complexity pyramidal attention for long-range time series modeling and forecasting. In International Conference on Learning Representations. 2022. URL: https://openreview.net/forum?id=0EXmFzUn5I.

[17] Xiang Zhang, Marko Zeman, Theodoros Tsiligkaridis, and Marinka Zitnik. Graph-guided network for irregularly sampled multivariate time series. In International Conference on Learning Representations. 2022. URL: https://openreview.net/forum?id=Kwm8I7dU-l5.

[18] Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: frequency enhanced decomposed transformer for long-term series forecasting. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, 27268–27286. PMLR, 17–23 Jul 2022. URL: https://proceedings.mlr.press/v162/zhou22g.html.

[19] Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: decomposition transformers with auto-correlation for long-term series forecasting. In Advances in Neural Information Processing Systems, volume 34, 22419–22430. Curran Associates, Inc., 2021. URL: https://proceedings.neurips.cc/paper_files/paper/2021/file/bcc0d400288793e8bdcd7c19a8ac0c2b-Paper.pdf.

[20] Yusuke Tashiro, Jiaming Song, Yang Song, and Stefano Ermon. CSDI: conditional score-based diffusion models for probabilistic time series imputation. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems. 2021. URL: https://openreview.net/forum?id=VzuIzbRDrum.

[21] Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 11106–11115. 2021.

[22] Xiaoye Miao, Yangyang Wu, Jun Wang, Yunjun Gao, Xudong Mao, and Jianwei Yin. Generative Semi-supervised Learning for Multivariate Time Series Imputation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10):8983–8991, May 2021. URL: https://ojs.aaai.org/index.php/AAAI/article/view/17086.

[23] Qianli Ma, Chuxin Chen, Sen Li, and Garrison W. Cottrell. Learning Representations for Incomplete Time Series Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10):8837–8846, May 2021. URL: https://ojs.aaai.org/index.php/AAAI/article/view/17070.

[24] Xinyu Chen and Lijun Sun. Bayesian Temporal Factorization for Multidimensional Time Series Prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1 (early access), 2021. URL: http://arxiv.org/abs/1910.06366, arXiv:1910.06366, doi:10.1109/TPAMI.2021.3066551.

[25] Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, and Qi Zhang. Spectral temporal graph neural network for multivariate time-series forecasting. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, 17766–17778. Curran Associates, Inc., 2020. URL: https://proceedings.neurips.cc/paper_files/paper/2020/file/cdf6581cb7aca4b7e19ef136c6e601a5-Paper.pdf.

[26] Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: the efficient transformer. In International Conference on Learning Representations. 2020. URL: https://openreview.net/forum?id=rkgNKkHtvB.

[27] Vincent Fortuin, Dmitry Baranchuk, Gunnar Rätsch, and Stephan Mandt. GP-VAE: Deep probabilistic time series imputation. In International Conference on Artificial Intelligence and Statistics, 1651–1661. PMLR, 2020.

[28] Johann de Jong, Mohammad Asif Emon, Ping Wu, Reagon Karki, Meemansa Sood, Patrice Godard, Ashar Ahmad, Henri Vrooman, Martin Hofmann-Apitius, and Holger Fröhlich. Deep learning for clustering of multivariate clinical patient trajectories with missing values. GigaScience, 8(11):giz134, November 2019. URL: https://doi.org/10.1093/gigascience/giz134, doi:10.1093/gigascience/giz134.

[29] Jinsung Yoon, William R. Zame, and Mihaela van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering, 66(5):1477–1490, 2019. doi:10.1109/TBME.2018.2874712.

[30] Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, and Yitan Li. BRITS: Bidirectional Recurrent Imputation for Time Series. arXiv:1805.10572 [cs, stat], May 2018. URL: http://arxiv.org/abs/1805.10572, arXiv:1805.10572.

[31] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent Neural Networks for Multivariate Time Series with Missing Values. Scientific Reports, 8(1):6085, April 2018. URL: https://www.nature.com/articles/s41598-018-24271-9, doi:10.1038/s41598-018-24271-9.

[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL: https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

[33] Donald B. Rubin. Inference and missing data. Biometrika, 63(3):581–592, 1976. URL: http://www.jstor.org/stable/2335739.

[34] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In International Conference on Learning Representations. 2018. URL: https://openreview.net/forum?id=ryQu7f-RZ.

[35] William M. Rand. Objective Criteria for the Evaluation of Clustering Methods. Journal of the American Statistical Association, 66(336):846–850, 1971. URL: https://www.jstor.org/stable/2284239, doi:10.2307/2284239.

[36] Roderick J. A. Little. A Test of Missing Completely at Random for Multivariate Data with Missing Values. Journal of the American Statistical Association, 83(404):1198–1202, 1988. URL: https://www.jstor.org/stable/2290157, doi:10.2307/2290157.

[37] Niels Bruun Ipsen, Pierre-Alexandre Mattei, and Jes Frellsen. not-MIWAE: deep generative modelling with missing not at random data. In International Conference on Learning Representations. 2021. URL: https://openreview.net/forum?id=tu29GQT0JFy.