Detecting safflower filaments under complex environments using an improved YOLOv3
Cite this article: ZHANG Zhenguo, XING Zhenyu, ZHAO Minyi, YANG Shuangping, GUO Quanfeng, SHI Ruimeng, ZENG Chao. Detecting safflower filaments using an improved YOLOv3 under complex environments[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2023, 39(3): 162-170. DOI: 10.11975/j.issn.1002-6819.202211204
Authors: ZHANG Zhenguo  XING Zhenyu  ZHAO Minyi  YANG Shuangping  GUO Quanfeng  SHI Ruimeng  ZENG Chao
Affiliations: 1. College of Mechanical and Electrical Engineering, Xinjiang Agricultural University, Urumqi 830052, China; 2. College of Engineering, China Agricultural University, Beijing 100083, China; 3. Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou 310058, China
Funding: National Natural Science Foundation of China (52265041, 31901417); Open Project of the Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province (2022ZJZD2202); Tianshan Innovation Team Project (2021D14010); Xinjiang Uygur Autonomous Region Natural Science Foundation Youth Fund (2019D01B12); Xinjiang Uygur Autonomous Region Graduate Research Innovation Project (XJ2022G143)
Abstract: Complex environments, such as changing weather, variable illumination, and occlusion by branches and leaves, make the fast and accurate detection of safflower filaments challenging and reduce the operating efficiency of safflower-picking robots. This study proposes an object detection algorithm based on an improved YOLOv3 (GSC-YOLOv3). First, GSC-YOLOv3 replaces the backbone feature-extraction network with the lightweight GhostNet, which compresses the number of parameters and raises detection speed as far as possible while preserving good detection accuracy, so that effective safflower-filament features are generated with few parameters. Second, a spatial pyramid pooling (SPP) structure is used for feature enhancement, compensating for the information lost while extracting filament features. Finally, the convolutional block attention module (CBAM) is integrated into the feature pyramid structure to suppress interference during feature fusion and improve detection efficiency and accuracy. Test results show that GSC-YOLOv3 achieves a mean average precision of 91.89% on the test set, which is 12.76, 2.89, 6.35, 3.96, 1.87, and 0.61 percentage points higher than Faster R-CNN, YOLOv3, YOLOv4, YOLOv5, YOLOv6, and YOLOv7, respectively. Its average detection speed on a GPU reaches 51.1 frames/s, higher than that of all six comparison algorithms. Comparative experiments in complex scenes show that the improved algorithm offers high detection accuracy, good robustness, and real-time performance, providing a reference for the precise detection of safflower filaments by picking robots in complex environments.

Keywords: harvesting  object detection  YOLOv3  GhostNet structure  complex environments  safflower filaments
Received: 2022-11-24
Revised: 2023-01-09

Detecting safflower filaments using an improved YOLOv3 under complex environments
ZHANG Zhenguo, XING Zhenyu, ZHAO Minyi, YANG Shuangping, GUO Quanfeng, SHI Ruimeng, ZENG Chao. Detecting safflower filaments using an improved YOLOv3 under complex environments[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2023, 39(3): 162-170. DOI: 10.11975/j.issn.1002-6819.202211204
Authors:ZHANG Zhenguo  XING Zhenyu  ZHAO Minyi  YANG Shuangping  GUO Quanfeng  SHI Ruimeng  ZENG Chao
Affiliation:1.College of Mechanical and Electrical Engineering, Xinjiang Agricultural University, Urumqi 830052, China;2.College of Engineering, China Agricultural University, Beijing 100083, China;3.Key Laboratory of Intelligent Equipment and Robotics for Agriculture of Zhejiang Province, Hangzhou 310058, China
Abstract: Safflower is an annual herbaceous flowering plant. Its flower heads bear many small, compact, and dense filaments, which pose a great challenge for robotic picking: each filament occupies only a few pixels in the image, so effective feature information is hard to extract at such a small scale, and detection is highly susceptible to interference from complex environments such as weather, illumination, and occlusion by branches and leaves, leading to false and missed detections. In this study, a detection algorithm based on an improved YOLOv3 (GSC-YOLOv3) was proposed to identify safflower filaments quickly and accurately under complex environments. Firstly, the influence of redundant information on detection accuracy during filament feature extraction was considered, and the lightweight GhostNet was used to replace the backbone feature-extraction network: a small number of conventional convolutions generate intrinsic feature maps, and a series of cheap operations then generate the redundant "ghost" feature maps. This compresses the number of parameters and maximizes the speed of the model while preserving detection accuracy, so that effective safflower-filament features are obtained with few parameters. Secondly, a spatial pyramid pooling (SPP) structure was added at the end of the effective feature-extraction stage. In contrast to the original YOLO series, this achieves feature enhancement that compensates for the information lost while extracting filament features, and lays the foundation for the subsequent feature pyramid to focus more on the targets during detection.
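The Ghost-module idea described above (a costly primary convolution followed by cheap operations that generate the remaining "ghost" maps) can be sketched as follows. This is a hypothetical NumPy illustration with random stand-in weights, not the authors' implementation; the function name `ghost_module` and all parameters are assumptions for illustration only.

```python
import numpy as np

def ghost_module(x, out_channels, ratio=2, seed=0):
    """Minimal sketch of a GhostNet Ghost module (hypothetical weights).

    A costly "primary" convolution (modeled here as a 1x1 channel-mixing
    step) produces out_channels // ratio intrinsic feature maps; cheap
    linear operations (per-map scaling, standing in for depthwise convs)
    generate the remaining "ghost" maps. Concatenating both groups yields
    out_channels maps at a fraction of the usual convolution cost.

    x: input feature maps of shape (C, H, W).
    """
    rng = np.random.default_rng(seed)
    c = x.shape[0]
    intrinsic = out_channels // ratio
    # primary step: 1x1 conv == matrix multiply over the channel axis
    w_primary = rng.standard_normal((intrinsic, c)) * 0.1
    y = np.tensordot(w_primary, x, axes=([1], [0]))   # (intrinsic, H, W)
    # cheap step: one inexpensive linear op per needed ghost map
    n_ghost = out_channels - intrinsic
    scales = rng.standard_normal(n_ghost)
    ghosts = scales[:, None, None] * y[np.arange(n_ghost) % intrinsic]
    return np.concatenate([y, ghosts], axis=0)        # (out_channels, H, W)

feat = np.ones((16, 8, 8))
out = ghost_module(feat, out_channels=32)
print(out.shape)   # (32, 8, 8)
```

Note how only `intrinsic` maps pay for a full channel-mixing step; the other half are derived by trivially cheap per-map operations, which is what lets GhostNet shrink the backbone's parameter count.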
Finally, the convolutional block attention module (CBAM) was incorporated into the feature pyramid structure. A CBAM attention module was added after each of the three effective feature layers, so that useful channel and spatial information could be reused at all three scales; this efficiently suppresses interference during feature fusion and raises the detection speed and accuracy of the improved model. A field test was conducted with safflower filaments as the detection target. The results show that GSC-YOLOv3 achieved a mean average precision of 91.89% on the test set, 12.76, 2.89, 6.35, 3.96, 1.87, and 0.61 percentage points higher than Faster R-CNN, YOLOv3, YOLOv4, YOLOv5, YOLOv6, and YOLOv7, respectively. Its average detection speed on a GPU reached 51.1 frames/s, higher than that of the other six algorithms. The confidence of the detected filaments was mostly above 0.9, indicating few false and missed detections. An ablation test verified the effectiveness of the improvements: fusing GhostNet, the SPP structure, and the CBAM module significantly improved the network structure and training strategy for safflower-filament detection. A series of comparative experiments under different weather, lighting, and occlusion conditions further verified the adaptability and effectiveness of GSC-YOLOv3 in complex scenes, where it showed high detection accuracy, good robustness, and real-time performance. These findings can provide a strong reference for accurately detecting safflower filaments in complex environments.
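The CBAM step described above applies channel attention (pooled channel descriptors through a shared bottleneck MLP) followed by spatial attention (pooled spatial maps combined and passed through a sigmoid). A minimal NumPy sketch, with random stand-in values for the learned MLP weights and the paper's 7x7 spatial convolution (all weights and the `cbam` helper are hypothetical, not the authors' code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(x, reduction=4, seed=0):
    """Minimal sketch of CBAM on a (C, H, W) feature map.

    Channel attention: global average- and max-pooled channel vectors go
    through a shared ReLU bottleneck MLP, are summed, and gate channels.
    Spatial attention: channel-wise average and max maps are combined (a
    learned 7x7 conv in the paper; a weighted sum here) and gate positions.
    """
    rng = np.random.default_rng(seed)
    c = x.shape[0]
    # --- channel attention ---
    avg = x.mean(axis=(1, 2))                              # (C,)
    mx = x.max(axis=(1, 2))                                # (C,)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1    # shared MLP
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)
    ca = sigmoid(mlp(avg) + mlp(mx))                       # (C,)
    x = x * ca[:, None, None]
    # --- spatial attention ---
    avg_map = x.mean(axis=0)                               # (H, W)
    max_map = x.max(axis=0)                                # (H, W)
    a, b = rng.standard_normal(2) * 0.1
    sa = sigmoid(a * avg_map + b * max_map)                # (H, W)
    return x * sa[None, :, :]

feat = np.ones((16, 8, 8))
print(cbam(feat).shape)   # (16, 8, 8)
```

Because both attention maps are element-wise gates, CBAM preserves the feature-map shape, which is why it can be dropped after each of the three effective feature layers without altering the rest of the pyramid.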
Keywords:harvesting   object detection   YOLOv3   GhostNet structure   complex environments   safflower filaments