Soybean field weed recognition based on light sum-product networks and UAV remote sensing images
Cite this article: Wang Shengsheng, Wang Shun, Zhang Hang, Wen Changji. Soybean field weed recognition based on light sum-product networks and UAV remote sensing images[J]. Transactions of the Chinese Society of Agricultural Engineering, 2019, 35(6): 81-89
Authors: Wang Shengsheng, Wang Shun, Zhang Hang, Wen Changji
Affiliations: College of Computer Science and Technology, Jilin University; Software Institute, Jilin University; College of Information and Technology, Jilin Agricultural University
Funding: Science and Technology Development Program of Jilin Province (20190302117GX, 20180101334JC, 20180101041JC); Key Project of the Scientific Research Planning of the Jilin Provincial Department of Education (2016186)
Abstract: To improve the accuracy of machine-vision weed recognition on small embedded devices such as unmanned aerial vehicles (UAVs), this study took the grass and broadleaf weeds commonly found among soybean seedlings as the research objects. To address the problems of the traditional sum-product network in image classification tasks, namely its large number of parameters, long training time, and many redundant nodes and subtrees, this paper improved the learning process of the traditional sum-product network and proposed a light sum-product network that takes mini-batches of data as input. In structure learning, when the number of variables in the scope of a product node is below a given threshold, the product node is merged into a multivariate leaf node; otherwise the product node is reorganized into a mixed sum-product structure and redundant edge nodes are pruned, which effectively reduces the number of parameters and the complexity of the model. In parameter learning, Bayesian moment matching is proposed to update the network parameters, making the model more efficient at learning from small samples. Finally, the model was combined with the K-means clustering algorithm and applied to weed recognition in UAV images. The experimental results showed that the average recognition accuracy of this method for soybean seedlings, grass weeds, broadleaf weeds, and soil in UAV images reached 99.5%, higher than that of the traditional sum-product network and the traditional AlexNet. Moreover, the average number of model parameters was only 33% of that of the traditional sum-product network, the memory requirement was reduced by up to 549 MB, and the training time was reduced by up to 688.79 s. This study can provide a reference for applying the light sum-product network model to weed recognition in UAV pesticide spraying.
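A light sum-product network is built from sum nodes (mixtures), product nodes (factorizations over disjoint scopes), and leaves. To make the scope-threshold rule above concrete, a minimal sketch follows, assuming a simple node representation; the class names, the min_scope threshold, and the pruning criterion are illustrative assumptions rather than the paper's implementation, and the reorganization of larger product nodes into a mixed sum-product structure is only indicated by a comment.

# Sketch of the scope-threshold simplification described above; the node
# classes and the min_scope value are assumptions, not the paper's code.
class Node:
    def __init__(self, scope, children=None):
        self.scope = set(scope)            # variables this node covers
        self.children = children or []

class SumNode(Node): pass                  # mixture over children (same scope)
class ProductNode(Node): pass              # factorization over disjoint scopes
class MultivariateLeaf(Node): pass         # models all scope variables jointly

def simplify_product(node, min_scope=3):
    """Merge small-scope product nodes into multivariate leaves and prune
    empty branches; larger product nodes would be reorganized into a mixed
    sum-product structure (not shown here)."""
    if not isinstance(node, ProductNode):
        return node
    if len(node.scope) < min_scope:
        # Few variables left in scope: collapse the subtree into one leaf.
        return MultivariateLeaf(node.scope)
    # Otherwise keep the product node, recurse, and drop empty-scope branches.
    node.children = [simplify_product(c, min_scope)
                     for c in node.children if c.scope]
    return node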

Keywords: unmanned aerial vehicle; remote sensing; recognition; sum-product networks; structure learning; parameter learning; weed
Received: 2018-11-14
Revised: 2019-02-11

Soybean field weed recognition based on light sum-product networks and UAV remote sensing images
Wang Shengsheng, Wang Shun, Zhang Hang, Wen Changji. Soybean field weed recognition based on light sum-product networks and UAV remote sensing images[J]. Transactions of the Chinese Society of Agricultural Engineering, 2019, 35(6): 81-89
Authors: Wang Shengsheng, Wang Shun, Zhang Hang, Wen Changji
Affiliation: 1. College of Computer Science and Technology, Jilin University, Changchun 130012, China; 2. Software Institute, Jilin University, Changchun 130012, China; 3. College of Information and Technology, Jilin Agricultural University, Changchun 130118, China
Abstract: In weed control, acquiring images with an unmanned aerial vehicle (UAV) and spraying specific pesticides according to the different weed communities present is an effective means of prevention and control. The sum-product network is suitable for small embedded devices such as UAVs, but in image classification tasks it has many parameters, a long training time, and many redundant nodes and subtrees, so its recognition accuracy is not high. In response to these problems, this paper improved the learning process of the traditional sum-product network and used a mini-batch learning method to construct a network model in one pass over the data. Its lightweight structure required fewer hardware resources and was better suited to small embedded devices such as drones, providing a reference for subsequent pesticide spraying by drones. For an input image, the light sum-product network weed recognition model first used K-means clustering as a low-level feature extractor to obtain a feature dictionary, then downsampled the extracted features and fed the sampled features, organized into mini-batches, as input to train the light sum-product network. Each category corresponded to an independent network structure, high-level features were extracted by the internal nodes of that structure, and the probability of the corresponding category was output by the root node to identify weeds. The network structure was updated by comparing the correlation coefficients between variables, and Bayesian moment matching was used to update the network parameters. To simplify the structure, when a product node had only one child, the product node was removed from the network and its child was connected to its parent. Similarly, if a sum node was the child of another sum node, the child sum node was deleted and all of its children were promoted one layer up. This effectively reduced redundant edge branches and made the model structure lighter. Using this method, the average classification accuracy for soybean seedlings, grass weeds, broadleaf weeds, and soil in UAV images was 99.5%, and the average sensitivity was 99.6%. The number of model parameters was only 33% of that of the traditional sum-product network. Although the number of parameters grew as more of the data stream was fed in, it remained much smaller than that of the traditional convolutional neural network AlexNet when larger data sets were used to construct the light sum-product network, showing that the model is also suitable for larger data sets. Memory usage was reduced by 549 MB compared with the traditional sum-product network and by 1 072 MB compared with the convolutional neural network. The maximum average training time was reduced by 688.79 s compared with the traditional sum-product network, which was much less than that of the convolutional neural network. The experimental results showed that with the light sum-product network as the weed recognition model, there were fewer model parameters, lower memory requirements, and shorter training times without loss of accuracy. The shortcoming is that the data acquired from the UAV images in the earlier stage needed to be processed in multiple steps, and the data set itself relied on manual labeling; some images contained overlapping categories, and misclassification of such images increased when features were extracted. However, by adjusting the classification threshold, the overall classification could still achieve the desired results.
The research can provide a reference for the use of light sum-product networks in weed recognition for UAV pesticide spraying.
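As a rough illustration of the front end of this pipeline (K-means clustering builds a feature dictionary from image patches, and the patch assignments are then downsampled before mini-batch training), a minimal sketch is given below. The patch size, dictionary size, pooling factor, and the use of scikit-learn's MiniBatchKMeans on square grayscale tiles are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a K-means feature-dictionary front end, under assumed
# settings (8x8 patches, 200 atoms, 4x4 sum pooling, square grayscale tiles);
# this is not the paper's implementation.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d

def build_dictionary(images, n_atoms=200, patch=8, seed=0):
    """Learn a patch dictionary by clustering randomly sampled patches."""
    patches = np.concatenate([
        extract_patches_2d(img, (patch, patch), max_patches=500, random_state=seed)
        for img in images])
    flat = patches.reshape(len(patches), -1).astype(np.float32)
    flat -= flat.mean(axis=1, keepdims=True)               # per-patch normalization
    return MiniBatchKMeans(n_clusters=n_atoms, random_state=seed).fit(flat)

def encode(img, km, patch=8, pool=4):
    """Assign every patch to its nearest dictionary atom, then downsample the
    one-hot assignment map by summing over pool x pool blocks."""
    patches = extract_patches_2d(img, (patch, patch))
    flat = patches.reshape(len(patches), -1).astype(np.float32)
    flat -= flat.mean(axis=1, keepdims=True)
    labels = km.predict(flat)
    side = img.shape[0] - patch + 1                        # assumes a square tile
    onehot = np.eye(km.n_clusters, dtype=np.float32)[labels].reshape(side, side, -1)
    h = (side // pool) * pool
    pooled = onehot[:h, :h].reshape(h // pool, pool, h // pool, pool, -1).sum((1, 3))
    return pooled.reshape(-1)   # feature vector; batches of these feed the light SPN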
Keywords: unmanned aerial vehicle; remote sensing; recognition; sum-product networks; structure learning; parameter learning; weed