Recognizing citrus in complex environments using improved YOLOv8n
Citation: YUE Kai, ZHANG Pengchao, WANG Lei, GUO Zhimiao, ZHANG Jiajun. Recognizing citrus in complex environments using improved YOLOv8n[J]. Transactions of the Chinese Society of Agricultural Engineering, 2024, 40(8): 152-158
Authors: YUE Kai  ZHANG Pengchao  WANG Lei  GUO Zhimiao  ZHANG Jiajun
Affiliation: 1. School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong 723001, China; 2. Shaanxi Key Laboratory of Industrial Automation, Hanzhong 723001, China
Funding: National Natural Science Foundation of China (No. 62176146)
Abstract: Aiming at the heavy overlap of citrus fruits and occlusion by branches and leaves in complex environments, as well as the large parameter counts and high computational complexity of existing models, a citrus recognition model named YOLOv8-MEIN was proposed on the basis of an improved YOLOv8n. First, an ME convolution module was designed and used to improve the C2f module of YOLOv8n. Second, to compensate for the weak generalization and slow convergence of the CIoU loss function in detection tasks, the Inner-CIoU loss function was used to accelerate bounding-box regression and improve detection performance. Finally, comparative experiments on a self-built dataset show that the YOLOv8-MEIN model reaches a mean average precision mAP0.5 (IoU threshold of 0.5) of 96.9%, a recall of 91.7%, and an mAP0.5~0.95 (IoU thresholds of 0.5~0.95) of 85.8%, with a model size of 5.8 MB and 2.87 M parameters. Compared with the original YOLOv8n, the mAP0.5, recall, and mAP0.5~0.95 are improved by 0.4, 1.0, and 0.6 percentage points, respectively, while the model size and parameter count are reduced by 3.3% and 4.3%, respectively. The model provides a technical reference for automated citrus harvesting.
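The Inner-CIoU term mentioned in the abstract builds on the Inner-IoU idea of computing an auxiliary IoU over boxes that are scaled about their centres by a ratio factor. The NumPy sketch below illustrates that idea only; the ratio value, the helper names, and the way the inner term is combined with the CIoU loss follow the publicly described Inner-IoU formulation and are assumptions here, not the authors' implementation.

```python
# Minimal sketch of an Inner-CIoU-style box regression loss (assumed form,
# not the paper's code): auxiliary "inner" boxes are obtained by scaling each
# box about its centre, and their IoU supplements the standard CIoU loss.
import numpy as np

def _inner_box(box, ratio):
    """Scale an (x1, y1, x2, y2) box about its centre by `ratio`."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    w, h = (box[2] - box[0]) * ratio, (box[3] - box[1]) * ratio
    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])

def _iou(a, b):
    """Plain IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-7)

def inner_ciou_loss(pred, target, ratio=0.7):
    """CIoU loss corrected by the IoU of the scaled auxiliary boxes."""
    iou = _iou(pred, target)
    # centre-distance and enclosing-box diagonal terms of CIoU
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    tcx, tcy = (target[0] + target[2]) / 2, (target[1] + target[3]) / 2
    ex1, ey1 = min(pred[0], target[0]), min(pred[1], target[1])
    ex2, ey2 = max(pred[2], target[2]), max(pred[3], target[3])
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-7
    # aspect-ratio consistency term of CIoU; it vanishes when the two boxes
    # share the same aspect ratio, which is why CIoU then degenerates to IoU
    pw, ph = pred[2] - pred[0], pred[3] - pred[1]
    tw, th = target[2] - target[0], target[3] - target[1]
    v = (4 / np.pi ** 2) * (np.arctan(tw / (th + 1e-7)) - np.arctan(pw / (ph + 1e-7))) ** 2
    alpha = v / (1 - iou + v + 1e-7)
    ciou_loss = 1 - iou + rho2 / c2 + alpha * v
    # auxiliary IoU of the inner (scaled) boxes
    iou_inner = _iou(_inner_box(pred, ratio), _inner_box(target, ratio))
    return ciou_loss + iou - iou_inner

# Example: a slightly offset prediction against its ground-truth box
print(inner_ciou_loss(np.array([50.0, 50.0, 150.0, 160.0]),
                      np.array([55.0, 48.0, 158.0, 155.0])))
```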

Keywords: image recognition  deep learning  object detection  YOLOv8n  Inner-IoU loss function  complex environments  citrus
Received: 2024-01-15
Revised: 2024-04-10

Recognizing citrus in complex environments using improved YOLOv8n
YUE Kai, ZHANG Pengchao, WANG Lei, GUO Zhimiao, ZHANG Jiajun. Recognizing citrus in complex environments using improved YOLOv8n[J]. Transactions of the Chinese Society of Agricultural Engineering, 2024, 40(8): 152-158
Authors: YUE Kai  ZHANG Pengchao  WANG Lei  GUO Zhimiao  ZHANG Jiajun
Affiliation: 1. School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong 723001, China; 2. Shaanxi Key Laboratory of Industrial Automation, Hanzhong 723001, China
Abstract: Automated harvesting holds promising potential for the informatization and automation of smart agriculture, since it can replace manual harvesting to improve efficiency and reduce cost. However, existing detection models are challenged by the large numbers of overlapping citrus fruits and the branch and leaf occlusion found in complex environments, and they typically suffer from large parameter counts and high computational complexity. In this study, an improved model, YOLOv8-MEIN, was proposed for citrus recognition on the basis of YOLOv8n. Firstly, a more efficient (ME) convolution module was designed and used to improve the C2f (CSPDarknet53 to 2-stage feature pyramid network) module of YOLOv8n. It reduces the parameters and computation of ordinary convolution so that the model can meet the requirements of real-time detection under the constraints of mobile hardware, and it lowers the overall computation of the model. Secondly, the Inner-CIoU (complete intersection over union) loss function was used to accelerate bounding-box regression and improve detection performance, compensating for the weak generalization and slow convergence of the CIoU loss function in the detection task. The reason is that the CIoU loss in the original YOLOv8 only considers the shape loss and degenerates to the IoU loss when the target box and the predicted bounding box share the same aspect ratio. Finally, the YOLOv8-MEIN model was validated for citrus detection. The YOLOv5s, YOLOv7-tiny, Faster R-CNN, and YOLOv8n models were trained and tested on the same self-built citrus dataset with the same number of training epochs to verify the performance of YOLOv8-MEIN. Compared with the two-stage Faster R-CNN, YOLOv8-MEIN improved the recall by 5.85 percentage points, the mean average precision (mAP) mAP0.5 by 14.3 percentage points, and mAP0.5~0.95 by 34.8 percentage points, while its model size was only 5.1% of that of Faster R-CNN. Compared with the lightweight YOLO-series networks YOLOv5s, YOLOv7-tiny, and YOLOv8n, the number of parameters was reduced by 59.1%, 52.3%, and 4.4%, respectively, and the mAP0.5~0.95 values were improved by 0.7, 4.0, and 0.6 percentage points, respectively, with an outstanding mAP0.5. The experimental results show that the YOLOv8-MEIN model achieves higher detection accuracy and efficiency with the smallest model size and a lower computational cost among the compared mainstream detectors, which makes it convenient to transplant and deploy for real-time citrus detection. Compared with the original YOLOv8n, the model size and the number of parameters were reduced by 3.3% and 4.3%, respectively. The improved model fully meets the requirements for fast and accurate recognition of citrus fruits in complex environments, and the findings can provide a technical reference for the automated harvesting of citrus fruits.
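For context, a stock YOLOv8n baseline of the kind compared above can be trained and evaluated on a custom citrus dataset with the open-source Ultralytics package. The sketch below is illustrative only: the dataset file citrus.yaml and the epoch, image-size, and batch settings are hypothetical, and the paper's ME convolution module and Inner-CIoU loss are not part of the stock package and are therefore not shown.

```python
# Hedged sketch: training and evaluating an unmodified YOLOv8n baseline with
# the Ultralytics package on a hypothetical self-built citrus dataset.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # pretrained YOLOv8n weights
model.train(data="citrus.yaml",               # hypothetical dataset config (image paths + class names)
            epochs=200, imgsz=640, batch=16)  # illustrative hyper-parameters, not the paper's settings
metrics = model.val()                         # evaluate on the validation split
print(metrics.box.map50, metrics.box.map)     # mAP0.5 and mAP0.5~0.95, the metrics reported above
```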
Keywords: image recognition  deep learning  object detection  YOLOv8n  Inner-IoU loss function  complex environments  citrus