Dense-YOLO: A Lightweight Weed Detection Platform Based on MSRCP

Main Article Content

MingYuan Wang
Watis Leelapatra

Abstract

To meet real-time weed detection needs and to allow flexible deployment of the model on embedded devices, we propose a lightweight object detection platform named Dense-YOLO, based on multi-scale retinex with chromaticity preservation (MSRCP) and the YOLOv4 architecture. First, we use MSRCP to preprocess the original images, providing a foundation for subsequent feature extraction. Second, depthwise separable convolution (DSC) is used to reduce the number of parameters, making the model suitable for deployment on embedded devices. Third, we use K-means++ to optimize the clustering of anchor sizes. Fourth, DenseNet-121, PANet, and SPP modules together constitute Dense-YOLO. Finally, we analyze the effectiveness of focal loss. Compared with YOLOv4, mAP is improved by 7.26%, three-quarters of the parameters are removed, and FPS is 6.1 higher.
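The depthwise separable convolution mentioned in the abstract replaces each standard convolution with a per-channel depthwise pass followed by a pointwise (1×1) pass, which is where most of the parameter savings come from. The sketch below is a plain-Python illustration of the standard MobileNet-style parameter-count comparison; the layer sizes are arbitrary examples, not values taken from the paper:

```python
# Parameter count of a standard k x k convolution vs. a depthwise
# separable convolution (depthwise k x k + pointwise 1 x 1).
# Layer sizes below are illustrative only.

def standard_conv_params(k, c_in, c_out):
    # One k x k kernel spanning all input channels per output channel.
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixes channels
    return depthwise + pointwise

k, c_in, c_out = 3, 256, 512
std = standard_conv_params(k, c_in, c_out)
dsc = dsc_params(k, c_in, c_out)
print(std, dsc, round(1 - dsc / std, 3))  # → 1179648 133376 0.887
```

For this example layer the DSC variant keeps only about 11% of the standard convolution's parameters; the roughly three-quarters reduction reported in the abstract is measured over the whole network, where not every layer is replaced.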

Article Details

How to Cite
Wang, M., & Leelapatra, W. (2023). Dense-YOLO: A Lightweight Weed Detection Platform Based on MSRCP. INTERNATIONAL SCIENTIFIC JOURNAL OF ENGINEERING AND TECHNOLOGY (ISJET), 7(2), 23–33. Retrieved from https://ph02.tci-thaijo.org/index.php/isjet/article/view/247213
Section
Research Article

References

D. D. Patel and B. A. Kumbhar, “Weed and its Management: A Major Threats to Crop Economy,” J. Pharm. Sci. Biosci. Res, vol. 6, pp. 453-758, 2016.

L. Wen, X. Liming, and X. Jiejie, “Research Status of Mechanical Intra-Row Weed Control in Row Crops,” Journal of Agricultural Mechanization Research, vol. 39, pp. 243- 250, 2017.

B. Liu and R. Bruch, “Weed Detection for Selective Spraying: A Review,” Current Robotics Reports, vol. 1, pp. 19-26, Mar. 2020.

S. Shanmugam, E. Assunção, R. Mesquita et al., “Automated Weed Detection Systems: A Review,” KnE Engineering, pp. 271-284, Jun. 2020.

C. Pulido, L. Solaque, and N. Velasco, “Weed Recognition by SVM Texture Feature Classification in Outdoor Vegetable Crop Images,” Ingeniería e Investigación, vol. 37, pp. 68-74, Apr. 2017.

S. R. Gunn, “Support Vector Machines for Classification and Regression,” ISIS Technical Report, vol. 14, pp. 5-16, May 1998.

Y. Lecun, L. Bottou, Y. Bengio et al., “Gradient-Based Learning Applied to Document Recognition,” in Proc. IEEE, pp. 2278-2324, 1998.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Communications of the ACM, vol. 60, pp. 84-90, May 2017.

K. Simonyan and A. Zisserman. (2014, Sep. 4). Very Deep Convolutional Networks for Large-Scale Image Recognition. [Online]. Available: https://arxiv.org/abs/1409.1556

C. Szegedy, W. Liu, Y. Jia et al., “Going Deeper with Convolutions,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 1-9.

K. He, X. Zhang, S. Ren et al., “Deep Residual Learning for Image Recognition,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.

P. Viola and M. J. Jones, “Robust Real-Time Face Detection,” International Journal of Computer Vision, vol. 57, pp. 137-154, May. 2004.

N. Dalal and B. Triggs, “Histograms of Oriented Gradients for Human Detection,” in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, pp. 886-893.

P. F. Felzenszwalb, R. B. Girshick, D. McAllester et al., “Object Detection with Discriminatively Trained Part-Based Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 1627-1645, Sep. 2010.

K. He, X. Zhang, S. Ren et al., “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, pp. 1904-1916, Jan. 2015.

S. Ren, K. He, R. Girshick et al., “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” Advances in Neural Information Processing Systems, Jun. 2015.

R. Girshick, “Fast R-CNN,” in Proc. IEEE International Conference on Computer Vision, 2015, pp. 1440-1448.

C. Fu, W. Liu, A. Ranga et al. (2017, Jan. 23). DSSD: Deconvolutional Single Shot Detector. [Online]. Available: https://arxiv.org/abs/1701.06659

Z. Li and F. Zhou. (2017, Dec. 4). FSSD: Feature Fusion Single Shot Multibox Detector. [Online]. Available: https://arxiv.org/abs/1712.00960

J. Jeong, H. Park, and N. Kwak. (2017, May. 26). Enhancement of SSD by Concatenating Feature Maps for Object Detection. [Online]. Available: https://arxiv.org/abs/1705.09587

W. Liu, D. Anguelov, D. Erhan et al., “SSD: Single Shot Multibox Detector,” European Conference on Computer Vision, pp. 21-37, Oct. 2016.

J. Redmon, S. Divvala, R. Girshick et al., “You Only Look Once: Unified, Real-Time Object Detection,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.

J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263-7271.

J. Redmon and A. Farhadi. (2018, Apr. 6). YOLOv3: An Incremental Improvement. [Online]. Available: https://arxiv.org/abs/1804.02767

A. Bochkovskiy, C. Wang, and H. M. Liao. (2020, Apr. 23). YOLOv4: Optimal Speed and Accuracy of Object Detection. [Online]. Available: https://arxiv.org/abs/2004.10934

S. M. Sharpe, A. W. Schumann, and N. S. Boyd, “Goosegrass Detection in Strawberry and Tomato Using a Convolutional Neural Network,” Scientific Reports, vol. 10, pp. 1-8, Jun. 2020.

J. Gao, A. P. French, M. P. Pound et al., “Deep Convolutional Neural Networks for Image-Based Convolvulus Sepium Detection in Sugar Beet Fields,” Plant Methods, vol. 16, pp. 1-12, Dec. 2020.

Z. Rahman, D. J. Jobson, and G. A. Woodell, “Multi-Scale Retinex for Color Image Enhancement,” in Proc. 3rd IEEE International Conference on Image Processing, 1996, pp. 1003-1006.

A. B. Petro, C. Sbert, and J. Morel, “Multiscale Retinex,” Image Processing on Line, vol. 4, pp. 71-88, Apr. 2014.

D. Arthur and S. Vassilvitskii, “k-means++: The Advantages of Careful Seeding,” in Proc. Symp. Discrete Algorithms, 2007, pp. 1027-1035.

T. Y. Lin, P. Goyal, R. Girshick et al., “Focal Loss for Dense Object Detection,” in Proc. IEEE International Conference on Computer Vision, 2017, pp. 318-327.

E. H. Land and J. J. McCann, “Lightness and Retinex Theory,” Journal of the Optical Society of America, vol. 61, no. 1, pp. 1-11, Jan. 1971.

G. Huang, Z. Liu, L. Van Der Maaten et al., “Densely Connected Convolutional Networks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4700-4708.

D. Wu, S. Lv, M. Jiang et al., “Using Channel Pruning-Based YOLO v4 Deep Learning Algorithm for the Real-Time and Accurate Detection of Apple Flowers in Natural Environments,” Computers and Electronics in Agriculture, vol. 178, p. 105742, Nov. 2020.

H. Dong, X. Chen, H. Sun et al., “Weed Detection in Vegetable Field Based on Improved YOLOv4 and Image Processing,” Journal of Graphics, vol. 43, pp. 559-569, Mar. 2022.

X. Che and H. Chen, “Multi-Object Dishes Detection Algorithm Based on Improved YOLOv4,” Journal of Jilin University, pp. 1-7, Nov. 2021.

K. Zuiderveld, “Contrast Limited Adaptive Histogram Equalization,” Graphics Gems, pp. 474-485, May. 1994.

Y. Zhang and M. Xie, “Colour Image Enhancement Algorithm Based on HSI and Local Homomorphic Filtering,” Computer Applications and Software, vol. 30, pp. 303-307, Dec. 2013.

S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” in Proc. International Conference on Machine Learning, 2015, pp. 448-456.

X. Glorot, A. Bordes, and Y. Bengio, “Deep Sparse Rectifier Neural Networks,” in Proc. Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 315-323.

A. G. Howard, M. Zhu, B. Chen et al. (2017, Apr. 17). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. [Online]. Available: https://arxiv.org/abs/1704.04861

M. Sandler, A. Howard, M. Zhu et al., “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4510-4520.

A. Howard, M. Sandler, G. Chu et al., “Searching for MobileNetV3,” in Proc. IEEE/CVF International Conference on Computer Vision, 2019, pp. 1314-1324.

K. Han, Y. Wang, Q. Tian et al., “GhostNet: More Features from Cheap Operations,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1580-1589.

X. Zhou, D. Wang, and P. Krähenbühl. (2019, Apr. 16). Objects as Points. [Online]. Available: https://arxiv.org/abs/1904.07850

M. Tan, R. Pang, and Q. V. Le, “EfficientDet: Scalable and Efficient Object Detection,” in Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10781-10790.

Z. Jiang, L. Zhao, S. Li et al. (2020, Nov. 9). Real-Time Object Detection Method Based on Improved YOLOv4-Tiny. [Online]. Available: https://arxiv.org/abs/2011.04244