Field rice panicle segmentation based on deep full convolutional neural network
Citation: Duan Lingfeng, Xiong Xiong, Liu Qian, Yang Wanneng, Huang Chenglong. Field rice panicle segmentation based on deep full convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering, 2018, 34(12): 202-209
Authors: Duan Lingfeng  Xiong Xiong  Liu Qian  Yang Wanneng  Huang Chenglong
Affiliations: College of Engineering, Huazhong Agricultural University; Wuhan National Research Center for Optoelectronics, Huazhong University of Science and Technology; National Key Laboratory of Crop Genetic Improvement, Huazhong Agricultural University
Funding: National Natural Science Foundation of China (31701317, 31600287); Natural Science Foundation of Hubei Province (2017CFB208); sub-project of the National Key Research and Development Program of China (2016YFD0100101-18)
Abstract: Accurate segmentation of rice panicles is the key to obtaining panicle traits and automating rice phenotype measurement. Using a field rice image dataset and data augmentation, this study trained offline three fully convolutional neural networks for panicle segmentation, based on SegNet, DeepLab and PSPNet respectively. Weighing segmentation performance against computing speed, the SegNet-based network, named PanicleNet, was selected. In the online segmentation stage, the original image is first divided into sub-images, each sub-image is segmented by PanicleNet, and the sub-image results are stitched to produce the final segmentation. Compared with the existing panicle segmentation algorithms Panicle-SEG, HSeg, i2 hysteresis thresholding and jointSeg, the proposed algorithm achieved a Qseg of 0.76 and an F-measure of 0.86 on samples photographed in the same year as the training samples, and a Qseg of 0.67 and an F-measure of 0.80 on samples from a different year, far surpassing the runner-up Panicle-SEG while running about 35 times faster. The algorithm withstands severely irregular panicle edges, large differences in panicle appearance across varieties and growth stages, color overlap between panicles and leaves, and the illumination and occlusion interference of the complex field environment, improving the accuracy and efficiency of panicle segmentation in service of rice breeding and cultivation.

Keywords: crops; image segmentation; field rice; panicle segmentation; deep learning; full convolutional neural network
Received: 2018-01-03
Revised: 2018-05-18

Field rice panicle segmentation based on deep full convolutional neural network
Duan Lingfeng, Xiong Xiong, Liu Qian, Yang Wanneng and Huang Chenglong. Field rice panicle segmentation based on deep full convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering, 2018, 34(12): 202-209
Authors: Duan Lingfeng  Xiong Xiong  Liu Qian  Yang Wanneng  Huang Chenglong
Abstract: Panicle traits are significant in rice yield measurement, disease detection, nutrition diagnosis and growth period assessment. Precise segmentation of panicles from rice images is a prerequisite for panicle trait measurement and rice phenotyping. However, panicle segmentation across varied accessions and growth periods in the complex field environment is a big challenge. In this paper, we propose a robust and fast image segmentation method for rice panicles in the field based on a deep fully convolutional neural network. The network, named PanicleNet, is based on SegNet: its architecture is the same as SegNet's, except that the output number of PanicleNet's last convolutional layer is set to 2, corresponding to the 2 classes (panicle pixel / non-panicle pixel). The overall method consists of 2 steps: an offline training step and an online segmentation step. At the offline training step, 50 original field rice images with a resolution of 1971×1815 pixels, covering different accessions and growth stages, were used to train PanicleNet. Each original image was first divided into 24 sub-images with a resolution of 360×480 pixels. The data were augmented by adjusting the illumination component of the sub-images to simulate the changing illumination in the field. In total, 2880 images were used as the training set and 720 images as the validation set. Training was carried out in Caffe using stochastic gradient descent (SGD) with momentum; the momentum was set to 0.9 and the learning rate to 0.001. The network was initialized from VGGNet and then fine-tuned on our data. The batch sizes for the training and validation sets were 4 and 2, respectively. The network was validated every 720 iterations (once per pass over the 2880 training images at batch size 4), and training was stopped when the error converged, after 72000 iterations in total.
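The illumination augmentation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: it assumes the "illumination component" is approximated by a simple per-pixel brightness gain, and that the paper's "360×480" sub-image resolution is width×height (so the array is 480 rows by 360 columns). The gain values are hypothetical.

```python
import numpy as np

def augment_illumination(img, gains=(0.7, 1.0, 1.3)):
    """Simulate changing field illumination by scaling a sub-image's
    brightness. `img` is an RGB uint8 array of shape (H, W, 3); one
    augmented copy is produced per gain factor, clipped to [0, 255]."""
    out = []
    for g in gains:
        scaled = np.clip(img.astype(np.float32) * g, 0, 255)
        out.append(scaled.astype(np.uint8))
    return out

# A 360x480 (width x height) training sub-image, as used in the paper.
sub = np.random.randint(0, 256, (480, 360, 3), dtype=np.uint8)
copies = augment_illumination(sub)
print(len(copies), copies[0].shape)
```

In practice a multiplicative gain on each sub-image is one of several ways to vary the illumination component; a decomposition in a luminance-chrominance color space would serve the same purpose.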
The final trained model (PanicleNet) was saved to disk and can be called through the Caffe C++ interface during online segmentation. At the online segmentation step, an original image is first divided into several sub-images with a resolution of 360×480 pixels, the number depending on the size of the original image. Each sub-image is then passed through PanicleNet for pixel-wise classification, and the sub-image segmentations are stitched to form the segmentation of the original image. Tests showed that PanicleNet was capable of dealing with complex and irregular panicle borders, cluttered backgrounds, color overlap between panicles and leaves, and large variation in panicle color, shape, size and texture caused by differences in rice accessions, growth periods, and illumination imbalance and variation. Two image datasets, Dataset2016 and Dataset2017, were used to evaluate the method. The 23 images in Dataset2016 were the remainder of the 73 images taken in 2016, 50 of which had been used for training; the 22 images in Dataset2017 were taken in 2017. The rice plants in different images belonged to different varieties: in total, 50 varieties were involved in the training phase and another 45 in the testing phase, with growth periods ranging from heading stage to mature stage. PanicleNet was compared with 4 other approaches: Panicle-SEG, HSeg, i2 hysteresis thresholding, and jointSeg, and it outperformed all of them. On Dataset2016, the average Qseg value and F-measure for PanicleNet were 0.76 and 0.86, respectively; on Dataset2017, they were 0.67 and 0.80. PanicleNet took approximately 2 s to segment an image with a resolution of 1971×1815 pixels.
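The tile-and-stitch pipeline above can be sketched in NumPy. The padding scheme and helper names are assumptions (the paper does not specify how edge tiles are handled); reading "360×480" and "1971×1815" as width×height reproduces exactly the 24 sub-images per training image that the paper reports.

```python
import numpy as np

TILE_H, TILE_W = 480, 360  # sub-image size used by PanicleNet (height, width)

def tile(img):
    """Split an image into a grid of full-sized tiles, zero-padding the
    bottom and right edges so every tile is exactly TILE_H x TILE_W."""
    h, w = img.shape[:2]
    ph, pw = -h % TILE_H, -w % TILE_W      # bottom / right padding
    padded = np.pad(img, ((0, ph), (0, pw), (0, 0)))
    rows, cols = padded.shape[0] // TILE_H, padded.shape[1] // TILE_W
    tiles = [padded[r*TILE_H:(r+1)*TILE_H, c*TILE_W:(c+1)*TILE_W]
             for r in range(rows) for c in range(cols)]
    return tiles, rows, cols, (h, w)

def stitch(masks, rows, cols, orig_size):
    """Rejoin per-tile binary masks and crop back to the original size."""
    grid = np.vstack([np.hstack(masks[r*cols:(r+1)*cols])
                      for r in range(rows)])
    h, w = orig_size
    return grid[:h, :w]

img = np.zeros((1815, 1971, 3), dtype=np.uint8)   # resolution from the paper
tiles, rows, cols, size = tile(img)
masks = [t[:, :, 0] > 0 for t in tiles]           # stand-in for PanicleNet output
full = stitch(masks, rows, cols, size)
print(len(tiles), full.shape)
```

Here `masks = [...]` stands in for the per-tile pixel-wise classification that PanicleNet performs; only the splitting and stitching logic is shown.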
The proposed method improves the accuracy and efficiency of panicle segmentation and provides a novel tool for rice phenotyping, breeding and cultivation.
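The two scores reported above can be computed from a predicted mask and a ground-truth mask. The abstract does not spell out the formulas, so the definitions below are assumptions based on common usage in crop segmentation work: Qseg as the Jaccard overlap of the two panicle masks, and F-measure as the harmonic mean of pixel precision and recall.

```python
import numpy as np

def qseg(pred, truth):
    """Jaccard overlap between predicted and ground-truth panicle pixels."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def f_measure(pred, truth):
    """Harmonic mean of pixel precision and recall."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    if pred.sum() == 0 or truth.sum() == 0:
        return 0.0
    tp = np.logical_and(pred, truth).sum()
    p, r = tp / pred.sum(), tp / truth.sum()
    return 2 * p * r / (p + r) if (p + r) else 0.0

truth = np.zeros((4, 4), bool); truth[:2, :] = True   # 8 panicle pixels
pred  = np.zeros((4, 4), bool); pred[:3, :] = True    # 12 predicted pixels
print(round(qseg(pred, truth), 3), round(f_measure(pred, truth), 3))  # → 0.667 0.8
```

In the toy example the prediction covers all 8 true panicle pixels plus 4 false positives, so recall is 1.0, precision 2/3, and Qseg 8/12.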
Keywords:crops   image segmentation   field rice   panicle segmentation   deep learning   full convolutional neural network