
Real-time Apple Picking Pattern Recognition for Picking Robot Based on Improved YOLOv5m
Citation: YAN Bin, FAN Pan, WANG Meirong, SHI Shuaiqi, LEI Xiaoyan, YANG Fuzeng. Real-time Apple Picking Pattern Recognition for Picking Robot Based on Improved YOLOv5m[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(9): 28-38, 59
Authors: YAN Bin  FAN Pan  WANG Meirong  SHI Shuaiqi  LEI Xiaoyan  YANG Fuzeng
Affiliations: College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, Shaanxi, China; Apple Full-process Mechanization Research Base, Ministry of Agriculture and Rural Affairs, Yangling 712100, Shaanxi, China; Scientific Observation and Experiment Station of Agricultural Equipment for Northern China, Ministry of Agriculture and Rural Affairs, Yangling 712100, Shaanxi, China; State Key Laboratory of Soil Erosion and Dryland Farming on the Loess Plateau, Yangling 712100, Shaanxi, China
Funding: Science and Technology Major Project of Shaanxi Province (2020zdzx03-04-01)
Abstract: To accurately identify different apple targets on fruit trees and distinguish fruits occluded by branches in different ways, thereby providing visual guidance for the manipulator to actively adjust its pose and avoid branch occlusion during picking, a real-time recognition method for apple picking patterns was proposed for picking robots based on an improved YOLOv5m. First, a BottleneckCSP-B feature-extraction module was designed to replace the BottleneckCSP module in the original YOLOv5m backbone, which strengthened the backbone's ability to extract deep image features while making it lighter. Then, an SE module was embedded into the improved backbone to better extract the features of different apple targets. Next, the skip-connection fusion of the feature maps fed into the medium-size target detection layer of the original YOLOv5m architecture was improved, raising fruit recognition accuracy. Finally, the network's initial anchor box sizes were revised so that apples in more distant planting rows were no longer detected. The results showed that the improved model could identify directly graspable, circuitously graspable (picking from the upper, lower, left or right side of the apple) and ungraspable fruits in images, with a recall, precision, mAP and F1 score of 85.9%, 81.0%, 80.7% and 83.4%, respectively. The average recognition time was 0.025 s per image. Compared with the original YOLOv5m, YOLOv3 and EfficientDet-D0 on the six apple picking pattern classes of the test set, the mAP of the proposed algorithm was 5.4, 22 and 20.6 percentage points higher, respectively. The size of the improved model was 89.59% of that of the original YOLOv5m model. The method can provide technical support for a robot's picking hand to actively avoid branch occlusion and pick apples in different poses, reducing apple picking losses.
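
The SE module referred to above is the standard squeeze-and-excitation channel-attention block; the abstract does not give its exact configuration, so the PyTorch sketch below is only illustrative, with the channel count and reduction ratio chosen as assumptions rather than the paper's settings.

```python
# Minimal squeeze-and-excitation (SE) channel-attention block of the kind
# commonly embedded into CNN backbones such as YOLOv5m. The reduction ratio
# and channel count here are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(                       # excitation: two fully connected layers
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                    # (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1)                # per-channel weights in (0, 1)
        return x * w                                   # rescale feature maps channel-wise


# Example: reweight a backbone feature map without changing its shape.
feat = torch.randn(1, 256, 40, 40)
print(SEBlock(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```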

Keywords: apple  picking robot  YOLOv5m  picking pattern recognition  visual guidance  deep learning
Received: 2022-04-01

Real-time Apple Picking Pattern Recognition for Picking Robot Based on Improved YOLOv5m
YAN Bin,FAN Pan,WANG Meirong,SHI Shuaiqi,LEI Xiaoyan,YANG Fuzeng. Real-time Apple Picking Pattern Recognition for Picking Robot Based on Improved YOLOv5m[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(9): 28-38,59
Authors:YAN Bin  FAN Pan  WANG Meirong  SHI Shuaiqi  LEI Xiaoyan  YANG Fuzeng
Affiliation:Northwest A&F University
Abstract: To accurately identify different fruit targets on apple trees and automatically distinguish fruits occluded by different branches, thereby providing visual guidance for the picking end-effector to actively adjust its pose and avoid branch occlusion, a real-time recognition method for apple picking patterns based on an improved YOLOv5m was proposed for picking robots. Firstly, the BottleneckCSP module was redesigned as a BottleneckCSP-B module and used to replace the BottleneckCSP modules in the backbone of the original YOLOv5m network, which enhanced the backbone's ability to extract deep image features while making it lighter. Secondly, an SE module was inserted into the improved backbone to better extract the features of different apple targets. Thirdly, the skip-connection fusion of the feature maps fed into the medium-size target detection layer of the original YOLOv5m network was improved, which raised the apple recognition accuracy. Finally, the initial anchor box sizes of the original network were revised to avoid misrecognizing apples in more distant planting rows. The experimental results indicated that the proposed improved model could effectively identify graspable, circuitously graspable (up-graspable, down-graspable, left-graspable and right-graspable) and ungraspable apples. The recognition recall, precision, mAP and F1 score were 85.9%, 81.0%, 80.7% and 83.4%, respectively, and the average recognition time was 0.025 s per image. Compared with the original YOLOv5m, YOLOv3 and EfficientDet-D0 models on the test set, the mAP of the proposed improved YOLOv5m model was 5.4, 22 and 20.6 percentage points higher, respectively. The size of the improved model was 89.59% of that of the original YOLOv5m model. The proposed method can provide technical support for a robot's picking end-effector to pick apples in different poses while avoiding branch occlusion, thereby reducing apple picking losses.
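
The reported F1 score is consistent with the standard harmonic mean of precision and recall; the short check below reproduces it from the figures quoted in the abstract.

```python
# F1 is the harmonic mean of precision and recall; using the values reported
# above (precision 81.0%, recall 85.9%) reproduces the stated F1 of 83.4%.
precision, recall = 0.810, 0.859
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.1%}")  # F1 = 83.4%
```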
Keywords:apple  picking robot  YOLOv5m  picking pattern recognition  visual guidance  deep learning
