Design and experiments of the binocular visual obstacle perception system for agricultural vehicles
Citation: Wei Jiansheng, Pan Shuguo, Tian Guangzhao, Gao Wang, Sun Yingchun. Design and experiments of the binocular visual obstacle perception system for agricultural vehicles[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2021, 37(9): 55-63. DOI: 10.11975/j.issn.1002-6819.2021.09.007
Authors: Wei Jiansheng  Pan Shuguo  Tian Guangzhao  Gao Wang  Sun Yingchun
Affiliation: 1. School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China; 2. College of Engineering, Nanjing Agricultural University, Nanjing 210031, China
Funding: National Natural Science Foundation of China (41774027, 41904022)
Received: 2020-07-24
Revised: 2021-03-31

Abstract: Machine learning was incorporated to design a visual perception system for obstacle-free path planning of agricultural vehicles. The system aims to ensure the safety and reliability of intelligent agricultural vehicles during autonomous navigation. It consisted mainly of hardware and software, with the hardware comprising a visual perception module and a navigation control module. Since the visual perception task required real-time image processing, the embedded AI computer Jetson TX2 was adopted as the computing core. A deep Convolutional Neural Network (CNN) was used to identify agricultural obstacles. The complex structure and uneven illumination of the agricultural environment were taken into account, thereby enhancing the stability of object detection. The CNN represented environmental features much better than traditional detection based on hand-crafted features, and better detection was achieved by continuously learning task-relevant features from a large-scale dataset. The improved YOLOv3 integrated object detection and depth estimation, simultaneously outputting the category, location, and depth of each obstacle. A binocular camera captured the left and right images, which were first input into the improved YOLOv3 model for object detection. The outputs of the improved YOLOv3 model were then used for object matching, which established the correspondence of obstacles between the left and right images to complete obstacle recognition. The locations of the matched objects were used to calculate the parallax of each obstacle between the left and right images. Finally, the parallax of the obstacle was input into the binocular imaging model for depth estimation. The accuracy of depth estimation was improved by increasing the model's sensitivity to the X-axis of the images. The mean error, mean error ratio, and mean square error of depth estimation were greatly improved, compared with the original YOLOv3 and the HOG+SVM models. The experimental results showed that the embedded AI computer processed images in real time while ensuring the detection accuracy of the improved YOLOv3 model. In object detection, agricultural obstacles were identified with high accuracy, with an average accuracy rate of 89.54% and a recall rate of 90.18%. For the first kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 38.92% and 37.23% lower than those of the original model, and 53.44% and 53.14% lower than those of the HOG+SVM model, respectively. For the second kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 26.47% and 26.12% lower than those of the original model, and 41.9% and 41.73% lower than those of the HOG+SVM model, respectively. For the third kind of obstacle, the mean error and mean error ratio of the improved YOLOv3 model were 25.69% and 25.65% lower than those of the original model, and 43.14% and 43.01% lower than those of the HOG+SVM model, respectively. In addition, the mean error, mean error ratio, and mean square error of the three models showed no obvious change as the distance between the obstacle and the vehicle changed. In the dynamic scenario, the average error ratio of obstacle depth estimation was 4.66, and the average processing time was 0.573 s. Electrically controlled hydraulic steering was also activated in time for obstacle avoidance when a depth warning was issued.
The findings can provide an effective basis for the environment perception of agricultural vehicles in autonomous navigation. In subsequent research, the more lightweight YOLOv3-tiny model and the Xavier terminal processor with higher computing power can be adopted for depth estimation, so as to further increase the real-time inference speed of the visual perception system in modern agriculture.
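As a rough illustration of the detection-matching-parallax-depth pipeline described in the abstract, the following Python sketch pairs left and right detections and converts the horizontal parallax of each matched pair into a depth estimate via the binocular imaging model Z = f·B/d. The calibration constants, detection data layout, and matching rule below are assumptions for illustration only, not values or code from the paper.

```python
# Minimal sketch of depth estimation from stereo detections (assumed data layout).
# FOCAL_LENGTH_PX and BASELINE_M are placeholder calibration values, not from the paper.
FOCAL_LENGTH_PX = 700.0   # focal length in pixels, from stereo calibration
BASELINE_M = 0.12         # baseline between left and right optical centres, in metres


def box_center_x(box):
    """Horizontal centre of a bounding box given as (x_min, y_min, x_max, y_max)."""
    x_min, _, x_max, _ = box
    return 0.5 * (x_min + x_max)


def match_detections(left_dets, right_dets, max_dy=20):
    """Pair detections of the same class that lie on roughly the same image row
    (valid for rectified stereo images); a simple stand-in for the matching step."""
    pairs = []
    for left in left_dets:
        candidates = [r for r in right_dets
                      if r["cls"] == left["cls"]
                      and abs(left["box"][1] - r["box"][1]) < max_dy]
        if candidates:
            right = min(candidates, key=lambda r: abs(left["box"][1] - r["box"][1]))
            pairs.append((left, right))
    return pairs


def depth_from_parallax(left_box, right_box):
    """Binocular imaging model: depth Z = f * B / d, where d is the horizontal parallax."""
    d = box_center_x(left_box) - box_center_x(right_box)
    if d <= 0:
        return None  # invalid match: the left-image x must exceed the right-image x
    return FOCAL_LENGTH_PX * BASELINE_M / d


# Hypothetical detections output by the detector for one stereo frame.
left_dets = [{"cls": "person", "box": (320, 180, 380, 330)}]
right_dets = [{"cls": "person", "box": (300, 182, 360, 331)}]

for left, right in match_detections(left_dets, right_dets):
    z = depth_from_parallax(left["box"], right["box"])
    print(f"{left['cls']}: parallax-based depth estimate {z:.2f} m")
```

With these placeholder values, the single matched pair has a parallax of 20 pixels and yields a depth of 4.2 m; in the actual system the bounding boxes would come from the improved YOLOv3 detections and the constants from the calibrated binocular camera.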
Keywords: agricultural machinery; image processing; obstacle perception; depth estimation; parallax calculation