Segmentation method for touching pest images based on shape factor and separation point location
Citation: Li Wenyong, Li Ming, Qian Jianping, Sun Chuanheng, Du Shangfeng, Chen Meixiang. Segmentation method for touching pest images based on shape factor and separation points location[J]. Transactions of the Chinese Society of Agricultural Engineering, 2015, 31(5): 175-180.
Authors: Li Wenyong, Li Ming, Qian Jianping, Sun Chuanheng, Du Shangfeng, Chen Meixiang
Affiliations: 1. College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; 2. National Engineering Research Center for Information Technology in Agriculture, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
Funding: National Natural Science Foundation of China Youth Fund (31301238); Beijing Natural Science Foundation (4132027); International Cooperation Fund of the Beijing Academy of Agriculture and Forestry Sciences (GJHZ2013-4); EU FP7 project (PIRSES-GA-2013-612659).
Abstract: Segmentation of individual pests is a precondition for pest feature extraction and recognition. To address the touching-object problem that arises during pest recognition, a pest image segmentation method based on shape factor and separation point location is proposed. The method first uses a shape factor to judge whether each region in the image is a touching region. For a region judged to be touching, the contour is stripped layer by layer and a local segmentation point is determined; two separation points are then searched on the boundary contour of the original region according to the local segmentation point; finally, line segments connecting the local segmentation point with the two separation points are drawn to segment the pests. The algorithm was validated on images acquired in two scenarios: yellow peach moths (Conogethes punctiferalis (Guenée)) randomly scattered by hand in the laboratory, and Oriental fruit moths (Grapholitha molesta (Busck)) trapped on sticky boards in the field. It was compared with the watershed segmentation algorithm using three criteria: segmentation rate, segmentation error rate, and segmentation efficiency rate. For the two groups of yellow peach moth images acquired in the laboratory, the proposed method achieved an average error rate of 7%, about half that of the watershed method, and an average segmentation efficiency of 92.65%, 5.7 percentage points higher than the watershed method. For the two groups of field Oriental fruit moth images, the proposed method achieved an average error rate of 2.24% and an average segmentation efficiency of 97.8%, respectively 4.29 percentage points lower and 3.95 percentage points higher than the watershed method. These results show that the method offers better segmentation accuracy and effectiveness, and that applying it in multi-target pest segmentation and automatic recognition systems can effectively improve recognition accuracy.
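The touching-region test described above can be sketched as follows. The paper states only that the shape factor is defined from a region's area and perimeter with a threshold of 0.50; the circularity-style formula 4πA/P² below is an assumption for illustration.

```python
import math

def shape_factor(area: float, perimeter: float) -> float:
    """Circularity-style shape factor in (0, 1]; a perfect circle gives 1.0.
    Assumed form 4*pi*A / P^2: the paper only says the factor is built from
    a region's area and perimeter."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_touching(area: float, perimeter: float, threshold: float = 0.50) -> bool:
    """A region whose shape factor falls below the threshold (0.50 in the
    paper) is judged to contain two or more touching pests."""
    return shape_factor(area, perimeter) < threshold

# A circle of radius 10 (area ~314.16, perimeter ~62.83) gives factor 1.0,
# so it is a single pest; a blob with the same area but twice the perimeter
# gives factor 0.25 and is flagged as touching.
```

Elongated touching clusters have a long perimeter relative to their area, which is why a low value of this kind of factor signals adhesion.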

Keywords: image segmentation; watershed; algorithms; contour stripping; shape factor; touching pests
Received: 2014-12-10
Revised: 2015-02-10

Segmentation method for touching pest images based on shape factor and separation points location
Li Wenyong, Li Ming, Qian Jianping, Sun Chuanheng, Du Shangfeng, Chen Meixiang. Segmentation method for touching pest images based on shape factor and separation points location[J]. Transactions of the Chinese Society of Agricultural Engineering, 2015, 31(5): 175-180.
Authors: Li Wenyong, Li Ming, Qian Jianping, Sun Chuanheng, Du Shangfeng, Chen Meixiang
Institution: 1. College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China; 2. National Engineering Research Center for Information Technology in Agriculture, Beijing 100097, China
Abstract: Image segmentation is a precondition for feature extraction and recognition. To improve segmentation accuracy for touching objects in a pest identification and counting system, an image segmentation algorithm based on shape factor and separation point location is presented. A shape factor, defined from the area and perimeter of a region, serves as the criterion for deciding whether the region is a touching region; the threshold of the shape factor was set to 0.50. If a region is judged to be touching, its contour is stripped layer by layer, and each stripped contour is checked for a local segmentation point. There are two types of local segmentation points. The first type is a point that is encountered twice while traversing a single contour, with the difference between its two traversal sequence numbers satisfying a threshold condition. The second type is a point that appears in a contour together with one of its four-connected neighbors, with the difference between their traversal sequence numbers satisfying the same threshold condition. Once a local segmentation point is found, the two separation points of the touching region are located on its original contour by searching for the shortest distance between the local segmentation point and background pixels. Finally, segmentation lines are drawn between the local segmentation point and the two separation points. To verify the validity of the proposed algorithm, three types of touching images were used: serial connection, loop connection, and hybrid connection. The results showed that the proposed method located the local segmentation points and separation points more accurately than the watershed method. In addition, laboratory and field images were used to test the reliability of the proposed method.
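The separation-point search on the original contour can be sketched as below. The paper describes a shortest-distance search between the local segmentation point and background pixels; the index-gap rule and default `min_gap` here are illustrative assumptions, not the authors' exact procedure.

```python
import math

def nearest_separation_points(local_pt, contour, min_gap=None):
    """Locate two separation points on the original region contour for a
    given local segmentation point: take the closest contour pixel, then
    the closest pixel at least min_gap index positions away along the
    contour, so the two points fall on opposite sides of the touching
    "neck". Sketch only; min_gap and the index-gap rule are assumptions.
    """
    n = len(contour)
    if min_gap is None:
        min_gap = n // 4
    dist = lambda p: math.hypot(p[0] - local_pt[0], p[1] - local_pt[1])
    # First separation point: contour pixel nearest the local segmentation point.
    i1 = min(range(n), key=lambda i: dist(contour[i]))
    # Second: nearest pixel whose circular index gap from i1 is large enough.
    candidates = [i for i in range(n)
                  if min((i - i1) % n, (i1 - i) % n) >= min_gap]
    i2 = min(candidates, key=lambda i: dist(contour[i]))
    return contour[i1], contour[i2]
```

The segmentation lines are then simply the segments joining the local segmentation point to each returned contour point.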
In the laboratory experiment, 100 yellow peach moths (Conogethes punctiferalis (Guenée)) were collected and divided into two independent groups of 50 individuals each. In the field experiment, two sticky-trap images of the Oriental fruit moth (Grapholitha molesta (Busck)) were used. Three criteria were used to compare the proposed method with the watershed method: SR (segmentation rate), SERR (segmentation error rate), and SEFR (segmentation efficiency rate). In the laboratory experiment, the mean SR of the watershed method exceeded that of the proposed method, but the average SERR of the proposed method was 7%, 6 percentage points lower than that of the watershed method, and its average SEFR was 92.65%, 5.7 percentage points higher than that of the watershed method. In the field experiment, the average SERR of the proposed method was 2.24%, 4.29 percentage points lower than that of the watershed method, and its average SEFR was 97.8%, 3.95 percentage points higher. These results show that the proposed algorithm locates segmentation points accurately and has a low invalid-segmentation rate. The presented segmentation method for touching pest images improves segmentation performance and is of practical significance for feature extraction and pest identification.
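The three criteria can be computed as below. The abstract names SR, SERR, and SEFR but gives no formulas, so the definitions here are plausible assumptions: SR as segmented regions over total objects, SERR as wrongly segmented over segmented, and SEFR as correctly segmented over total.

```python
def segmentation_metrics(n_segmented: int, n_correct: int, n_total: int):
    """Assumed definitions of the paper's three evaluation criteria:
    SR   = regions the algorithm segmented / total objects present,
    SERR = wrongly segmented regions / segmented regions,
    SEFR = correctly segmented regions / total objects present."""
    sr = n_segmented / n_total
    serr = (n_segmented - n_correct) / n_segmented
    sefr = n_correct / n_total
    return sr, serr, sefr
```

For example, segmenting all 50 pests in a group with 46 correct splits would give SR = 1.0, SERR = 0.08, and SEFR = 0.92 under these assumed definitions.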
Keywords: image segmentation; watersheds; algorithms; contour stripping; shape factor; touching pests
This article is indexed in CNKI, Wanfang Data, and other databases.