A Transfer Learning-based Deep Convolutional Neural Network Approach for White Shrimp Abnormality Classification
Abstract
Shrimp transportation frequently damages the product, necessitating a sorting system that identifies and removes compromised shrimp before processing. This research develops a transfer learning-based deep convolutional neural network system that accurately classifies shrimp into seven categories: complete body, crunched head, head loss, head loss with remaining chin, cut tail, torn in half, and total crunched. A dataset of 405 color shrimp images, each 1,920 x 1,080 pixels, was augmented with geometric transformations to 6,480 images. The augmented images were used to train four state-of-the-art transfer learning-based models from Keras Applications (NASNetLarge, InceptionResNetV2, EfficientNetV2L, and ConvNeXtXLarge), which were then compared against a baseline CNN. Results show that the ConvNeXtXLarge model outperformed the others, achieving the highest accuracy (95%), precision (0.96), recall (0.95), and F1-score (0.95), underscoring its suitability for shrimp damage classification. An analysis of misclassifications revealed confusion between certain damage classes, suggesting directions for future refinement to better differentiate similar types of damage.
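The reported expansion from 405 to 6,480 images corresponds to 16 geometric variants per photo. The abstract does not list the exact transforms used, but the flip/rotation family that such geometric augmentation typically draws from can be sketched in NumPy; the function name and toy array below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dihedral_augment(img: np.ndarray) -> list:
    """Return the 8 flip/rotation variants of an image
    (the dihedral group D4: 4 rotations x optional horizontal flip).
    A 16x expansion, as in the paper, would combine such transforms
    with further ones, e.g. translations or scaling."""
    variants = []
    for base in (img, np.fliplr(img)):   # original and mirrored copies
        for k in range(4):               # 0, 90, 180, 270 degree rotations
            variants.append(np.rot90(base, k))
    return variants

# Toy 3x3 "image"; a real pipeline would load the 1,920 x 1,080 shrimp photos.
sample = np.arange(9).reshape(3, 3)
augmented = dihedral_augment(sample)
```

Each variant keeps the class label of its source image, so the label set is unchanged while the training set grows.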
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
Centre for Agricultural Information, Office of Agricultural Economics. (2024, Aug. 22). Thailand Foreign Agricultural Trade Statistics 2023. [Online]. Available: https://www.oae.go.th/assets/portals/1/files/jounal/2567/tradestat2566.pdf
N. Masunee, S. Chaiprapat, and K. Waiyagan, “Development of an Image Processing System in Splendid Squid Quality Classification,” in Proc. ICDIP, 2013, pp. 252-259.
N. Thanasarn, S. Chaiprapat, K. Waiyakan et al., “Automated Discrimination of Deveined Shrimps Based on Grayscale Image Parameters,” J. Food Process Eng., vol. 42, no. 4, p. e13041, Mar. 2019.
P. H. Andersen, Sustainable Operations Management (SOM) Strategy and Management: An Introduction to Part I. Melbourne, AU: Palgrave Macmillan, 2019, pp. 15-25.
R. Kler, G. Elkady, K. Rane et al., “Machine Learning and Artificial Intelligence in the Food Industry: A Sustainable Approach,” Journal of Food Quality, vol. 2022, no. 1, pp. 1-9, May 2022.
T. Valeeprakhon, K. Orkphol, and P. Chaihuadjaroen, “Deep Convolutional Neural Networks Based on VGG-16 Transfer Learning for Abnormalities Peeled Shrimp Classification,” International Scientific Journal of Engineering and Technology (ISJET), vol. 6, no. 2, pp. 13-23, Dec. 2022.
H. Yu, X. Liu, H. Qin et al., “Automatic Detection of Peeled Shrimp Based on Image Enhancement and Convolutional Neural Networks,” in Proc. The 8th International Conference on Computing and Artificial Intelligence, 2022, pp. 439-450.
Y. Zhang, C. Wei, Y. Zhong et al., “Deep Learning Detection of Shrimp Freshness Via Smartphone Pictures,” Journal of Food Measurement and Characterization, vol. 16, no. 5, pp. 3868-3876, Jun. 2022.
K. Wang, C. Zhang, R. Wang et al., “Quality Non-Destructive Diagnosis of Red Shrimp Based on Image Processing,” Journal of Food Engineering, vol. 357, p. 111648, Jul. 2023.
K. Prema and J. Visumathi, “An Improved Non-Destructive Shrimp Freshness Detection Method Based on Hybrid CNN and SVM with GAN Augmentation,” in Proc. 2022 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), 2022, pp. 1-7.
Keras Team. (2024, Aug. 22). Keras Applications. [Online]. Available: https://keras.io/api/applications/
B. Zoph, V. Vasudevan, J. Shlens et al., “Learning Transferable Architectures for Scalable Image Recognition,” in Proc. The IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8697-8710.
C. Szegedy, S. Ioffe, V. Vanhoucke et al., “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning,” in Proc. The AAAI Conference on Artificial Intelligence, 2017, pp. 4278-4284.
M. Tan and Q. Le, “EfficientNetV2: Smaller Models and Faster Training,” in Proc. International Conference on Machine Learning, PMLR, 2021, pp. 10096-10106.
Z. Liu, H. Mao, C. Y. Wu et al., “A ConvNet for the 2020s,” in Proc. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11976-11986.
N. E. Khalifa, M. Loey, and S. Mirjalili, “A Comprehensive Survey of Recent Trends in Deep Learning for Digital Images Augmentation,” Artif. Intell. Rev., vol. 55, no. 3, pp. 2351-2377, Sep. 2022.
Y. LeCun, Y. Bengio, and G. Hinton, “Deep Learning,” Nature, vol. 521, no. 7553, pp. 436-444, May 2015.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Adv. Neural Inf. Process. Syst., vol. 25, pp. 84-90, 2012.
I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016, pp. 1-800.
W. Rawat and Z. Wang, “Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review,” Neural Comput., vol. 29, no. 9, pp. 2352-2449, Jun. 2017.
J. Gu, Z. Wang, J. Kuen et al., “Recent Advances in Convolutional Neural Networks,” Pattern Recognition, vol. 77, pp. 354-377, May 2018, doi: 10.1016/j.patcog.2017.10.013.
S. J. Pan and Q. Yang, “A Survey on Transfer Learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345-1359, Oct. 2010.
K. Weiss, T. M. Khoshgoftaar, and D. Wang, “A Survey of Transfer Learning,” Journal of Big Data, vol. 3, no. 1, p. 9, Dec. 2016, doi: 10.1186/s40537-016-0043-6.
C. Tan, F. Sun, T. Kong et al., “A Survey on Deep Transfer Learning,” in Proc. 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 2018, pp. 270-279.
J. Deng, W. Dong, R. Socher et al., “ImageNet: A Large-Scale Hierarchical Image Database,” in Proc. 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248-255.