Synthesis of training samples for the Online Sequential Extreme Learning Machine and application in load forecasting


Charnon Chupong
Boonyang Plangklang

Abstract

The Online Sequential Extreme Learning Machine (OS-ELM) is a model capable of incremental learning from newly received samples while in operation, but getting started with OS-ELM requires sufficient and appropriate sample data for initial training. In some cases, obtaining such sample data is not possible. To address this issue, this article presents a method for synthesizing sample data for the initial training of the OS-ELM model. The proposed method takes the first sample available at the time of OS-ELM initialization and adds noise to it to generate a sufficient number of new samples. The authors compare different forms of noise used in synthesizing the sample data. It was found that Gaussian noise with a properly selected standard deviation yields training samples that allow the OS-ELM to forecast load with the highest accuracy. In addition, uniform noise gives the OS-ELM slightly lower accuracy than Gaussian noise, but it can be used without having to select an appropriate standard deviation.
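The synthesis step described above can be sketched as follows. This is a minimal illustrative implementation of the general idea, not the authors' exact procedure: the function name, parameter names, and default values are assumptions, and the noise scale `sigma` stands in for the standard deviation (Gaussian case) or half-width (uniform case) that the article says must be chosen with care.

```python
import numpy as np

def synthesize_samples(first_sample, n_samples=50, noise="gaussian", sigma=0.05, rng=None):
    """Synthesize training samples by perturbing one seed sample with noise.

    Illustrative sketch of the approach in the abstract; names and defaults
    are assumptions, not the authors' settings.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(first_sample, dtype=float)
    if noise == "gaussian":
        # Gaussian noise: accuracy depends on choosing sigma appropriately.
        perturb = rng.normal(0.0, sigma, size=(n_samples, x.size))
    elif noise == "uniform":
        # Uniform noise: slightly less accurate, but less sensitive to tuning.
        perturb = rng.uniform(-sigma, sigma, size=(n_samples, x.size))
    else:
        raise ValueError("noise must be 'gaussian' or 'uniform'")
    # Each row is one synthetic sample derived from the single seed sample.
    return x + perturb
```

The resulting matrix of synthetic samples would then serve as the initial training batch for OS-ELM, after which the model continues to learn incrementally from real incoming samples.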



Article Details

Section
Research Article

References

A. Sato and K. Yamada, “Generalized learning vector quantization,” Proceedings of the 1995 Neural Information Processing Systems (NIPS), Denver, Colorado, USA, pp. 423-429, 1995.

G. Cauwenberghs and T. Poggio, “Incremental and decremental support vector machine learning,” Advances in Neural Information Processing Systems, vol. 13, pp. 388-394, 2001.

R. Polikar, L. Upda, S. Upda and V. Honavar, “Learn++: an incremental learning algorithm for supervised neural networks,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 31, no. 4, pp. 497-508, 2001.

T. Zhang, “Solving large scale linear prediction problems using stochastic gradient descent algorithms,” Proceedings of the Twenty-First International Conference on Machine Learning, Banff, Alberta, Canada, ACM, pp. 116-123, 2004.

A. Saffari, C. Leistner, J. Santner, M. Godec and H. Bischof, “On-line random forests,” Proceedings of the 2009 IEEE Twelfth International Conference on Computer Vision Workshops, Kyoto, Japan, pp. 1393-1400, 2009.

N. Y. Liang, G. B. Huang, P. Saratchandran and N. Sundararajan, “A fast and accurate online sequential learning algorithm for feedforward networks,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411-1423, November 2006.

Y. Liu, W. Cao, Y. Liu and W. Zou, "A Novel Ensemble Learning Method for Online Learning Scenarios," 2021 IEEE 4th International Conference on Electronics Technology (ICET), pp. 1137-1140, 2021.

H. Shi, M. Xu and R. Li, "Deep Learning for Household Load Forecasting—A Novel Pooling Deep RNN," in IEEE Transactions on Smart Grid, vol. 9, no. 5, pp. 5271-5280, Sept. 2018, doi: 10.1109/TSG.2017.2686012.

E. Yang and C. -H. Youn, "Temporal Data Pooling with Meta-Initialization for Individual Short-Term Load Forecasting," in IEEE Transactions on Smart Grid, doi: 10.1109/TSG.2022.3225805.

X. Shen, H. Zhao, Y. Xiang, P. Lan and J. Liu, “Short-term electric vehicles charging load forecasting based on deep learning in low-quality data environments,” Electric Power Systems Research, vol. 212, 2022, doi: 10.1016/j.epsr.2022.108247.

H. Chen, Y. Birkelund and Q. Zhang, “Data-augmented sequential deep learning for wind power forecasting,” Energy Conversion and Management, vol. 248, 2021, doi: 10.1016/j.enconman.2021.114790.

C. Chupong and B. Plangklang, “Incremental learning model for load forecasting without training sample,” CMC-Computers Materials & Continua, vol. 72, no. 3, pp. 5415-5427, 2022.

M. Schwabacher, H. Hirsh and T. Ellman, "Learning prototype-selection rules for case-based iterative design," Proceedings of the Tenth Conference on Artificial Intelligence for Applications, pp. 56-62, 1994.

Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997.

G. B. Huang, Q. Y. Zhu and C. K. Siew, “Extreme learning machine: a new learning scheme of feedforward neural networks,” 2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No. 04CH37541), Budapest, vol. 2, pp. 985-990, 2004.

A. Ng, Machine Learning Specialization, Coursera. [Online]. Available: https://coursera.org/specializations/machine-learning-introduction

X. Liu, S. Lin, J. Fang and Z. Xu, “Is extreme learning machine feasible? a theoretical assessment (part I),” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 1, pp. 7-20, 2015.

S. Lin, X. Liu, J. Fang and Z. Xu, “Is extreme learning machine feasible? A theoretical assessment (Part II),” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 1, pp. 21-34, 2015.

V. Klema and A. Laub, “The singular value decomposition: Its computation and some applications,” IEEE Transactions on Automatic Control, vol. 25, no. 2, pp. 164-176, April 1980.

W. Deng, Q. Zheng and L. Chen, “Regularized extreme learning machine,” 2009 IEEE Symposium on Computational Intelligence and Data Mining, Nashville, TN, USA, pp. 389-395, 2009.

R. Mulla, Hourly Energy Consumption, Kaggle, 2018. [Online]. Available: https://kaggle.com/robikscube/hourly-energy-consumption

O. Surakhi, M. A. Zaidan, P. L. Fung, N. Hossein Motlagh, S. Serhan, M. AlKhanafseh, R. M. Ghoniem and T. Hussein, “Time-lag selection for time-series forecasting using neural network and heuristic algorithm,” Electronics, vol. 10, no. 20, p. 2518, 2021, doi: 10.3390/electronics10202518.