Funding:National Natural Science Foundation of China (41774027, 41904022)

Received:2020-07-24
Revised:2021-03-31

Design and experiments of the binocular visual obstacle perception system for agricultural vehicles
Wei Jiansheng, Pan Shuguo, Tian Guangzhao, Gao Wang, Sun Yingchun. Design and experiments of the binocular visual obstacle perception system for agricultural vehicles[J]. Transactions of the Chinese Society of Agricultural Engineering, 2021, 37(9): 55-63.
Authors:Wei Jiansheng  Pan Shuguo  Tian Guangzhao  Gao Wang  Sun Yingchun
Institution:1. School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China; 2. College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
Abstract:A binocular visual perception system was designed for obstacle detection and obstacle-free path planning in agricultural vehicles, aiming to ensure the safety and reliability of intelligent agricultural machinery during autonomous navigation. The system comprised hardware and software, with the hardware consisting of a visual perception module and a navigation control module. Since visual perception requires real-time image processing, the embedded AI computer Jetson TX2 was adopted as the computing core. A deep Convolutional Neural Network (CNN) was used to identify obstacles, because the complex structure and uneven illumination of agricultural environments make detection based on hand-crafted features unstable; a CNN that continuously learns task-relevant features from a large-scale dataset achieves better detection performance than traditional detectors built on artificially designed features. An improved YOLOv3 model was utilized to integrate object detection and depth estimation, simultaneously outputting the category, location, and depth of each obstacle. The left and right images captured by a binocular camera were first input into the improved YOLOv3 model for object detection. The detected bounding boxes were then matched across the two images to establish the correspondence of each obstacle between the left and right views, completing obstacle recognition. The locations of the matched boxes were used to calculate the parallax of each obstacle between the left and right images, and the parallax was finally input into the binocular imaging model for depth estimation. The accuracy of depth estimation was improved by increasing the model's sensitivity along the X-axis of the image.
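The left-right matching and parallax calculation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the box representation, the vertical-agreement threshold, and the assumption of a rectified stereo pair are all illustrative choices.

```python
def match_boxes(left_boxes, right_boxes, max_dy=10):
    """Pair detections of the same class whose vertical positions agree.

    Each box is (cls, cx, cy): class label and bounding-box centre.
    Returns a list of (left_box, right_box) pairs.
    """
    pairs = []
    used = set()
    for lb in left_boxes:
        best, best_dy = None, max_dy
        for i, rb in enumerate(right_boxes):
            if i in used or rb[0] != lb[0]:
                continue  # already matched, or class labels differ
            dy = abs(lb[2] - rb[2])
            # In a rectified stereo pair the same object lies on (nearly)
            # the same image row, and its x-coordinate is smaller in the
            # right image than in the left one.
            if dy <= best_dy and lb[1] > rb[1]:
                best, best_dy = i, dy
        if best is not None:
            used.add(best)
            pairs.append((lb, right_boxes[best]))
    return pairs

def disparity(pair):
    """Parallax: horizontal shift of the matched box centres, in pixels."""
    (_, lx, _), (_, rx, _) = pair
    return lx - rx
```

For example, a "person" box centred at x = 400 px in the left image that matches a box at x = 370 px in the right image yields a parallax of 30 px, which the binocular imaging model then converts to depth.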
The mean error, mean error ratio, and mean square error of depth estimation were greatly improved, compared with the original YOLOv3 and the HOG+SVM model. The experimental results showed that the embedded AI computer processed images in real time while maintaining the detection accuracy of the improved YOLOv3 model. In object detection, agricultural obstacles were identified with an average accuracy of 89.54% and a recall of 90.18%. For the first kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 38.92% and 37.23% lower than those of the original model, and 53.44% and 53.14% lower than those of the HOG+SVM model, respectively. For the second kind of obstacle, they were 26.47% and 26.12% lower than those of the original model, and 41.9% and 41.73% lower than those of the HOG+SVM model, respectively. For the third kind of obstacle, they were 25.69% and 25.65% lower than those of the original model, and 43.14% and 43.01% lower than those of the HOG+SVM model, respectively. In addition, the mean error, mean error ratio, and mean square error of the three models showed no obvious change as the distance between the obstacle and the vehicle varied. Under the dynamic scenario, the average error ratio of obstacle depth estimation was 4.66%, and the average processing time was 0.573 s. The electrically controlled hydraulic steering module was activated in time for obstacle avoidance when a depth warning was triggered. The findings can provide an effective basis for environment perception in the autonomous navigation of agricultural vehicles. In future research, the more lightweight YOLOv3-tiny model and the Xavier terminal processor with higher computing power can be selected for depth estimation, in order to increase the real-time inference speed of the visual perception system in modern agriculture.
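The final depth-estimation step rests on the standard binocular (pinhole stereo) imaging model, Z = f·B/d. A minimal sketch is shown below; the focal length and baseline values in the example are made up for illustration, since the paper's camera parameters are not given here.

```python
def depth_from_disparity(d_px, focal_px, baseline_m):
    """Pinhole stereo model for rectified cameras: depth Z = f * B / d.

    d_px       -- disparity between matched detections, in pixels
    focal_px   -- camera focal length, in pixels
    baseline_m -- distance between the two camera centres, in metres
    """
    if d_px <= 0:
        raise ValueError("disparity must be positive for a visible object")
    return focal_px * baseline_m / d_px

# Hypothetical intrinsics: f = 700 px, baseline B = 0.12 m.
# A 30-pixel disparity then corresponds to 700 * 0.12 / 30 = 2.8 m.
```

The inverse relationship between disparity and depth also explains why the reported errors stay roughly stable across distances only if box localisation error does: a fixed one-pixel error in disparity costs more depth accuracy for distant (small-disparity) obstacles than for near ones.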
Keywords:agricultural machinery  image processing  obstacle perception  depth estimation  parallax calculation
This article is indexed in CNKI and other databases.