Method for recognizing and locating tomato cluster picking points based on RGB-D information fusion and target detection
Citation: Zhang Qin, Chen Jianmin, Li Bin, Xu Can. Method for recognizing and locating tomato cluster picking points based on RGB-D information fusion and target detection[J]. Transactions of the Chinese Society of Agricultural Engineering, 2021, 37(18): 143-152.
Authors: Zhang Qin, Chen Jianmin, Li Bin, Xu Can
Affiliations: 1. School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510641, China; 2. School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China; 3. Guangdong Institute of Modern Agricultural Equipment, Guangzhou 510630, China
Funding: Key-Area Research and Development Program of Guangdong Province (2019B020222002); 2019 Guangdong Provincial Special Project for the Rural Revitalization Strategy (Yue Cai Nong [2019] No. 73); Guangdong Modern Agricultural Industry Common Key Technology R&D Innovation Team Construction Project (2019KJ129)
Abstract: Recognition and location of picking points is a key technology for intelligent harvesting and an important guarantee of efficient, timely and non-destructive picking. To address the problem of recognizing and locating tomato cluster picking points against complex backgrounds, a method based on RGB-D information fusion and target detection is proposed. The YOLOv4 target detection algorithm, together with the connectivity between each tomato cluster and its corresponding stem, is used to rapidly identify the Regions of Interest (ROIs) of tomato clusters and pickable stems. Depth information and color features of the RGB-D image are then fused to recognize the picking point: a depth segmentation algorithm, morphological operations, K-means clustering and a thinning algorithm extract the stem image and yield the image coordinates of the picking point. The stem depth map is matched with the color image to obtain the precise coordinates of the picking point in the camera coordinate system, which guide the robot to complete the picking task. Experiments and extensive field tests show that the method can recognize and locate tomato cluster picking points against complex, similarly colored backgrounds, with an average recognition time of 54 ms per frame, a picking-point recognition success rate of 93.83% and a picking-point depth error within ±3 mm, meeting the real-time requirements of automatic harvesting.
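As an illustration of the ROI screening step described above, the following is a minimal Python sketch; the bounding-box format (x1, y1, x2, y2), the split into stem and cluster detections, and the adjacency threshold are assumptions for illustration and stand in for, rather than reproduce, the authors' cluster-stem connectivity criterion.

```python
def x_overlap(a, b):
    """Horizontal overlap, in pixels, of two boxes given as (x1, y1, x2, y2)."""
    return max(0, min(a[2], b[2]) - max(a[0], b[0]))

def pickable_stem_rois(stem_boxes, cluster_boxes, max_gap=40):
    """Keep stem ROIs whose bottom edge lies within max_gap pixels of the top edge
    of a detected tomato-cluster ROI and overlaps it horizontally (an assumed
    adjacency test standing in for the connectivity relation described above)."""
    pickable = []
    for stem in stem_boxes:
        for cluster in cluster_boxes:
            vertical_gap = cluster[1] - stem[3]  # cluster top minus stem bottom
            if abs(vertical_gap) <= max_gap and x_overlap(stem, cluster) > 0:
                pickable.append(stem)
                break
    return pickable
```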

Keywords: image recognition; object recognition; extraction; tomato cluster; RGB-D image; information fusion; target detection; picking point
Received: 2021-03-12
Revised: 2021-05-26

Extended English abstract: The recognition and location of picking points (the spatial coordinates at which a fruit cluster is cut) are key technologies for intelligent fruit-picking robots in mechanized modern agriculture, and an important guarantee of efficient, timely and non-destructive harvesting. A tomato cluster may contain both mature and immature fruits in various shapes; the color of the fruit stem is similar to that of branches and leaves, and the shapes of fruit stems and petioles are also similar. In addition, the economical RGB-D cameras based on active stereo produce large depth errors, or even missing depth values, on such structures. It is therefore difficult for picking robots to identify the picking points of tomato clusters in a complex planting environment. In this study, a recognition and location algorithm for tomato cluster picking points was proposed based on RGB-D information fusion and target detection. Firstly, the Regions of Interest (ROIs) of tomato clusters and stems were detected with the YOLOv4 target detection algorithm in order to locate picking targets efficiently; the ROIs of pickable stems connected to ripe tomato clusters were then selected by screening according to the neighborhood relationship between tomato clusters and stems. Secondly, segmentation combining RGB-D information was used to recognize the picking point on the stem against a similarly colored ROI background. Since the robot only picks from the nearest row, tomato clusters in the nearest row were regarded as the foreground of the RGB-D image, and everything else was treated as background noise. Depth segmentation and morphological operations were combined to remove this noise from the pickable-stem ROI of the RGB image, after which the pickable stem was extracted from the ROI using K-means clustering together with morphological operations and RGB color features. After the stem skeleton was obtained by a thinning operation, the center of the skeleton along the X axis was taken as the picking point (x, y) in the image coordinate system. Thirdly, the RGB image and depth map of the pickable-stem ROI were fused to locate the picking point in depth: the average depth of the pickable stem was calculated from the denoised depth values of the whole stem using a mean filter, and an accurate depth value of the picking point was obtained by comparing this average with the original reading. Finally, the picking point was converted from the image coordinate system to the robot coordinate system, and the harvesting robot executed the picking action according to the resulting coordinates. Field tests showed that, at an image resolution of 1 280×720, the average processing time per image was 54 ms, the recognition rate of picking points was 93.83%, and the depth error of the picking point was within ±3 mm. The proposed algorithm therefore meets the practical requirements of harvesting robots in field operation.
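The stem-extraction step summarized above (depth segmentation, morphological operations, K-means clustering and thinning) can be sketched as follows. This is a minimal illustration assuming an OpenCV/NumPy/scikit-image toolchain; the depth window for the nearest row, the 5×5 kernel, the two-cluster K-means and the "greener cluster is the stem" heuristic are assumptions, not parameters reported in the paper.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def picking_point_in_roi(roi_bgr, roi_depth_mm, near_mm=400, far_mm=900):
    """Return the picking point (x, y) in ROI pixel coordinates, or None."""
    # 1) Depth segmentation: keep only pixels belonging to the nearest plant row.
    fg = ((roi_depth_mm > near_mm) & (roi_depth_mm < far_mm)).astype(np.uint8) * 255
    # 2) Morphological opening/closing to suppress speckle noise in the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
    # 3) Two-cluster K-means on foreground colors; take the greener cluster as stem.
    pixels = roi_bgr[fg > 0].astype(np.float32)
    if pixels.shape[0] < 2:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    stem_label = int(np.argmax(centers[:, 1]))           # index 1 = G channel in BGR
    stem_mask = np.zeros(fg.shape, np.uint8)
    stem_mask[fg > 0] = (labels.ravel() == stem_label).astype(np.uint8) * 255
    # 4) Thin the stem to a one-pixel skeleton and take its center along the x axis.
    skel = skeletonize(stem_mask > 0)
    ys, xs = np.nonzero(skel)
    if xs.size == 0:
        return None
    x_mid = int((xs.min() + xs.max()) / 2)
    on_column = ys[xs == x_mid]
    y_mid = int(on_column.mean()) if on_column.size else int(ys.mean())
    return x_mid, y_mid
```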
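Converting the picking point's pixel coordinates and depth into a robot-frame target can likewise be sketched with standard pinhole back-projection. The camera intrinsics (fx, fy, cx, cy) and the hand-eye calibration (R, t) are assumed inputs, and the mean-depth helper only mirrors the averaging idea described above rather than reproducing the authors' filter.

```python
import numpy as np

def stem_average_depth(roi_depth_mm, stem_mask):
    """Mean depth (mm) over segmented stem pixels, ignoring zero (missing) readings."""
    vals = roi_depth_mm[(stem_mask > 0) & (roi_depth_mm > 0)]
    return float(vals.mean()) if vals.size else None

def pixel_to_camera(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (mm) to (X, Y, Z) in the camera frame (mm)."""
    z = float(depth_mm)
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def camera_to_robot(point_cam, R, t):
    """Map a camera-frame point into the robot frame using hand-eye calibration (R, t)."""
    return R @ point_cam + t
```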