Extraction of soybean planting areas combining Sentinel-2 images and optimized feature model
Citation: Zhang Dongyan, Yang Yuying, Huang Linsheng, Yang Qi, Liang Dong, She Bao, Hong Qi, Jiang Fei. Extraction of soybean planting areas combining Sentinel-2 images and optimized feature model[J]. Transactions of the Chinese Society of Agricultural Engineering, 2021, 37(9): 110-119.
Authors: Zhang Dongyan  Yang Yuying  Huang Linsheng  Yang Qi  Liang Dong  She Bao  Hong Qi  Jiang Fei
Affiliations: 1. National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China; 2. School of Geomatics, Anhui University of Science & Technology, Huainan 232001, China; 3. School of Information Engineering, Suzhou University, Suzhou 234000, China
Funding: National Key Research and Development Program of China (2019YFE0115200); Open Fund of the National Engineering Research Center for Agro-Ecological Big Data Analysis & Application (AE2018011); Outstanding Young Talent Support Program of Anhui Higher Education Institutions (gxyq2020001); Anhui Provincial Department of Education Fund (KJ2019A0120)
Abstract: Accurate knowledge of the spatial distribution of soybean is of great significance for yield estimation, disaster warning, and the adjustment of agricultural policy, yet remote-sensing identification of soybean in regions with complex planting structures has rarely been reported. Taking Longshan and Qingtuan towns, typical soybean-producing areas on the northern Anhui plain, as the study area, this study proposed a soybean identification method based on Sentinel-2 data with a hierarchical, stepwise extraction strategy. The method first built decision-tree filtering rules to remove non-cropland cover types within the study area and obtain the overall distribution of field vegetation. It then generated 19 candidate features, comprising the reflectance of the 10 bands with a spatial resolution of 20 m or finer and 9 vegetation indices. Supported by samples of typical land-cover types, the ReliefF feature-weighting algorithm was combined with random forest (RF), back-propagation neural network (BPNN), and support vector machine (SVM) classifiers to build three models (ReliefF-RF, ReliefF-BPNN, and ReliefF-SVM) that screened out the features most effective for soybean identification. Soybean distributions extracted from UAV images of six 1 km × 1 km sample plots in the study area were used to evaluate the three models' mapping performance. The results showed that the ReliefF-RF model performed best: with the 7 optimal features it selected, the overall accuracy of soybean mapping ranged from 85.92% to 91.91% and the Kappa coefficient from 0.72 to 0.81, and its extraction results in every sample plot were better than those of the other two models. Moreover, the accuracy achieved with the optimal features was clearly higher than that obtained from the raw band reflectance; although slightly lower than the result using all 19 features, the data volume was reduced by 63.16%. This study provides a valuable reference for extracting soybean planting areas in regions with fragmented farmland landscapes and complex planting structures.
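Several of the vegetation indices used as candidate features can be computed directly from Sentinel-2 band reflectance. The sketch below uses common published formulas (a red-edge NDVI from B8 and B6, the Guyot-Baret red-edge position, and the standard EVI coefficients); the exact index variants chosen by the authors may differ, and the function names are illustrative.

```python
# Hedged sketch: common formulas for three indices of the kind used in the paper.
# Inputs are Sentinel-2 surface reflectances (scalars or NumPy arrays);
# the authors' exact index definitions may differ.

def ndvi_re2(b8, b6):
    """Red-edge NDVI from NIR B8 (842 nm) and red-edge B6 (740 nm)."""
    return (b8 - b6) / (b8 + b6)

def evi(b8, b4, b2):
    """Enhanced vegetation index from NIR (B8), red (B4), and blue (B2)."""
    return 2.5 * (b8 - b4) / (b8 + 6.0 * b4 - 7.5 * b2 + 1.0)

def rep(b4, b5, b6, b7):
    """Red-edge position (nm) by Guyot-Baret linear interpolation."""
    r_edge = (b4 + b7) / 2.0
    return 705.0 + 35.0 * (r_edge - b5) / (b6 - b5)
```

Because the operations are elementwise, the same functions apply unchanged to whole reflectance rasters loaded as NumPy arrays.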

Key words: machine learning  models  soybean  Sentinel-2  planting area extraction  feature optimization
Received: 2021-02-25
Revised: 2021-04-30

Extraction of soybean planting areas combining Sentinel-2 images and optimized feature model
Zhang Dongyan,Yang Yuying,Huang Linsheng,Yang Qi,Liang Dong,She Bao,Hong Qi,Jiang Fei.Extraction of soybean planting areas combining Sentinel-2 images and optimized feature model[J].Transactions of the Chinese Society of Agricultural Engineering,2021,37(9):110-119.
Authors:Zhang Dongyan  Yang Yuying  Huang Linsheng  Yang Qi  Liang Dong  She Bao  Hong Qi  Jiang Fei
Institution: 1. National Engineering Research Center for Agro-Ecological Big Data Analysis & Application, Anhui University, Hefei 230601, China; 2. School of Geomatics, Anhui University of Science & Technology, Huainan 232001, China; 3. School of Information Engineering, Suzhou University, Suzhou 234000, China
Abstract: Accurate mapping of the soybean planting area is of great significance to yield estimation, crop-damage warning, and structural adjustment in modern agriculture, yet few studies have applied remote sensing to soybean identification, particularly in regions with frequent cloud cover, diverse summer crops, and complex planting structures. In this study, Longshan and Qingtuan towns, typical soybean-producing areas on the North Anhui plain, were taken as the study area, and a hierarchical extraction strategy was proposed to obtain the spatial distribution of the soybean planting area in the 2019 growing season. Sentinel-2 images were acquired at the early pod-setting stage of soybean (August 18, 2019). A set of decision-tree filtering rules was first established to eliminate non-agricultural cover types, such as water, sparse trees, bare soil, and artificial objects (buildings and roads), yielding the overall distribution of field vegetation. The Sentinel-2 imagery was then used to generate 19 candidate features, comprising the reflectance of the 10 spectral bands with a resolution of 20 m or finer and 9 vegetation indices. Using samples of typical ground-cover types, the ReliefF algorithm was applied to evaluate the significance of each candidate feature and was combined with three machine-learning classifiers, random forest (RF), back-propagation neural network (BPNN), and support vector machine (SVM), to establish three models: ReliefF-RF, ReliefF-BPNN, and ReliefF-SVM. These models screened out the features most effective for soybean identification, and their soybean-mapping performance was evaluated against soybean distributions extracted from UAV images covering six ground sample plots (each 1 km × 1 km in size).
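The ReliefF weighting step can be sketched in a few lines: each sample pulls a feature's weight up by how much that feature differs from the sample's nearest neighbors of the other class, and down by how much it differs from nearest neighbors of the same class. This is a minimal reimplementation for binary classes, not the authors' code; `relieff_weights` and its defaults are illustrative assumptions.

```python
import numpy as np

def relieff_weights(X, y, n_neighbors=5):
    """Minimal binary ReliefF sketch: higher weight = feature separates classes better."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Scale features to [0, 1] so per-feature differences are comparable
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)   # Manhattan distance to all samples
        dist[i] = np.inf                      # never pick the sample itself as a hit
        hits = np.where(y == y[i])[0]
        misses = np.where(y != y[i])[0]
        hits = hits[np.argsort(dist[hits])[:n_neighbors]]
        misses = misses[np.argsort(dist[misses])[:n_neighbors]]
        # Reward between-class differences, penalize within-class spread
        w += np.abs(X[misses] - X[i]).mean(axis=0) - np.abs(X[hits] - X[i]).mean(axis=0)
    return w / n
```

Features can then be ranked by weight and the top-k subset passed to any classifier (RF, BPNN, or SVM in the paper's three combinations).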
Results showed that the ReliefF-RF model performed best, with the Kappa coefficient ranging from 0.72 to 0.81 and the overall accuracy from 85.92% to 91.91%. Its Kappa coefficient was higher than those of the other two models in every sample plot (0.69 to 0.79 for ReliefF-BPNN and 0.70 to 0.78 for ReliefF-SVM). The ReliefF-RF model singled out seven optimal features: the near-infrared band B8 (842 nm), the red-edge normalized difference vegetation index (NDVIre2) derived from B8 and B6, the short-wave infrared band B12 (2190 nm), the red-edge position (REP), the red-edge band B6 (740 nm), the green band B3 (560 nm), and the enhanced vegetation index (EVI). These seven features proved more advantageous for soybean identification than the other commonly used spectral bands and vegetation indices, with the red-edge variables particularly prominent. In addition, mapping with the optimal features significantly outperformed mapping with the 10 raw spectral bands. Although the optimal subset performed slightly worse than the full set of 19 features, it reduced the data volume by 63.16% and thus offered clear advantages in time and computational cost. Moreover, because the hierarchical strategy focused only on field vegetation, the optimal features were more targeted and free from interference by non-agricultural land-cover types, suggesting better applicability and generalization. The findings can provide a valuable reference for the extraction of soybean planting areas under complex planting conditions.
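Both accuracy figures reported above, overall accuracy and the Kappa coefficient, derive from a confusion matrix built against the UAV-derived reference. A minimal sketch, assuming rows index the reference classes and columns the mapped classes:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1.0 - pe)            # Kappa corrects for chance
```

For a binary soybean/non-soybean map, e.g. `overall_accuracy_and_kappa([[40, 10], [5, 45]])` yields an overall accuracy of 0.85 and a Kappa of 0.70.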
Keywords:machine learning  models  soybean  Sentinel-2  planting area extraction  feature optimization
This article is indexed in CNKI and other databases.