The Study of AI-Generated Images on the Efficacy of Lightweight Pre-trained Neural Networks in Flower Classification

Authors

  • Shutchon Premchaisawatt Lecturer, Department of Industrial Technology, Faculty of Industrial Education, Rajamangala University of Technology Isan, Khon Kaen Campus
  • Krit Trashoo Lecturer, Department of Industrial Technology, Faculty of Industrial Education, Rajamangala University of Technology Isan, Khon Kaen Campus
  • Narong Boonsaner Lecturer, Department of Industrial Technology, Faculty of Industrial Education, Rajamangala University of Technology Isan, Khon Kaen Campus

Keywords:

Image classification, Pre-trained neural network, Flower classification, AI-generated images

Abstract

This research investigates the efficacy of utilizing compact, pre-trained neural network models on a limited dataset for the categorization of five distinct flower types: daisy, tulip, rose, sunflower, and dandelion. The investigation incorporates three distinct training datasets: real images (genuine photographs), AI-generated images produced with DALL-E 2, and a hybrid dataset merging the real and AI-generated images. The experimental outcomes reveal that, among the lightweight pre-trained models evaluated (ResNet18, ResNet30, and ResNet50), ResNet50, the most extensive of the three, demonstrates superior performance across the evaluation metrics of accuracy, precision, recall, and F1-score. Models trained on AI-generated images combined with real images outperform those trained on only real images or only AI-generated images, because the combination can fill in details missing from generated images, introduce creative elements, and potentially improve the overall visual impact and informativeness of the training set. ResNet50 performs distinctly better than both lighter architectures (ResNet18 and ResNet30). Therefore, in cases where image data are limited or a model must be built quickly, the use of AI-generated images in conjunction with collected data is a promising approach.
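The per-class evaluation metrics named above (precision, recall, F1-score, alongside overall accuracy) can be sketched in plain Python. This is a minimal illustration of the macro-averaged definitions, not the authors' code; the class list matches the paper's five flower types, but the example labels are invented for demonstration:

```python
CLASSES = ["daisy", "tulip", "rose", "sunflower", "dandelion"]

def macro_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1-score."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in CLASSES:
        # Per-class counts: true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(CLASSES)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Illustrative labels only (not the paper's data):
y_true = ["daisy", "tulip", "rose", "sunflower", "dandelion", "rose"]
y_pred = ["daisy", "tulip", "rose", "sunflower", "rose", "rose"]
acc, prec, rec, f1 = macro_metrics(y_true, y_pred)
```

Macro averaging weights each flower class equally regardless of how many images it has, which is the usual choice when comparing models trained on small or imbalanced datasets.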

References

Fragapane G, Ivanov D, Peron M, Sgarbossa F, Strandhagen JO. Increasing flexibility and productivity in Industry 4.0 production networks with autonomous mobile robots and smart intralogistics. Annals of operations research. 2022; 308(1-2): 125-143.

Li L, Foo MJ, Chen J, Tan KY, Cai J, Swaminathan R, et al. Mobile Robotic Balance Assistant (MRBA): a gait assistive and fall intervention robot for daily living. Journal of NeuroEngineering and Rehabilitation. 2023; 20(1): 1-7.

Moru DK. The effects of camera focus and sensor exposure on accuracy using computer vision. Nigerian Journal of Technology. 2022; 41(3): 585-590.

Amugongo LM, Kriebitz A, Boch A, Lütge C. Mobile computer vision-based applications for food recognition and volume and calorific estimation: A systematic review. Healthcare. 2022; 11(1): 59.

Koubâa A, editor. Robot Operating System (ROS). Switzerland: Springer; 2017.

Xaud MF, Leite AC, From PJ. Thermal image based navigation system for skid-steering mobile robots in sugarcane crops. In: International Conference on Robotics and Automation (ICRA) 2019; 2019 May 20; Montreal, QC, Canada.

Ahlawat S, Choudhary A, Nayyar A, Singh S, Yoon B. Improved handwritten digit recognition using convolutional neural networks (CNN). Sensors. 2020; 20(12): 3344.

He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016 Jun 27-30; Las Vegas, USA: IEEE; 2016.

Jena B, Nayak GK, Saxena S. Convolutional neural network and its pretrained models for image classification and object detection: A survey. Concurrency and Computation: Practice and Experience. 2022; 34(6): e6767.

Marcus G, Davis E, Aaronson S. A very preliminary analysis of DALL-E 2. arXiv preprint. arXiv:2204.13807. 2022; 1-14.

Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X. Improved techniques for training GANs. Proceedings of the 30th International Conference on Neural Information Processing Systems 2016; 2016 Dec 5-10; New York, USA: ACM; 2016.

Besnier V, Jain H, Bursuc A, Cord M, Perez P. This dataset does not exist: training models from generated images. Proceedings of ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2020 May 4-8; Barcelona, Spain; 2020.

Li X, Duan C, Yin P, Wang N. Pedestrian Re-identity Based on ResNet Lightweight Network. Journal of Physics: Conference Series. 2021; 2083(3): 1-6.

Gong J, Liu W, Pei M, Wu C, Guo L. ResNet10: A lightweight residual network for remote sensing image classification. Proceedings of the 14th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA) 2022; 2022 Jan 15-16; Changsha, China; 2022.

Mamaev A. Flowers recognition [Internet]. 2021 [cited 2023 Jul 14]. Available from: https://www.kaggle.com/datasets/alxmamaev/flowers-recognition

Junker M, Hoch R, Dengel A. On the evaluation of document analysis components by recall, precision, and accuracy. Proceedings of the Fifth International Conference on Document Analysis and Recognition (ICDAR '99); 1999 Sep 20-22; Bangalore, India; 1999.

Kohavi R, Provost F. Confusion matrix. Machine Learning. 1998; 30(2-3): 271-274.

DALL·E 2 [Internet]. [cited 2023 Jul 14]. Available from: https://openai.com/dall-e-2

ResNet [Internet]. [cited 2023 Jul 14]. Available from: https://pytorch.org/vision/main/models/resnet.html

Jetson Nano Developer Kit [Internet]. 2022 [cited 2023 Aug 14]. Available from: https://developer.nvidia.com/embedded/jetson-nano-developer-kit

Published

2024-10-08

Section

Research Article