Similar Documents
20 similar documents retrieved.
1.
Objective: To address the low efficiency and strong subjectivity of manual individual identification of dairy cows in traditional farming, an individual cow identification method based on an improved Mask R-CNN is proposed. Method: The feature-extraction structure of Mask R-CNN is optimized by adopting a ResNet-50 network with embedded SE blocks as the backbone, using a weighting strategy to reweight image channels and improve feature utilization. To address inaccurate localization of object edges during instance segmentation, an IoU boundary loss is introduced to build a new mask loss function and improve boundary-detection accuracy. The model was trained, validated, and tested on 3,000 cow images. Result: The improved Mask R-CNN achieved a mean average precision (AP) of 100% and an IoUMask of 91.34%; compared with the original Mask R-CNN, AP increased by 3.28% and IoUMask by 5.92%. Conclusion: The proposed method has good object-detection capability and can serve as a reference for accurate individual identification of dairy cows in complex farm environments.
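As an illustration of the channel-reweighting idea described in this abstract, the following is a minimal PyTorch sketch of a squeeze-and-excitation (SE) block; the module name, reduction ratio, and example shapes are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: reweights feature channels by learned importance."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # excitation: rescale each channel

# Example: apply to a hypothetical ResNet-50 stage output (256-channel feature maps)
feat = torch.randn(2, 256, 56, 56)
print(SEBlock(256)(feat).shape)  # torch.Size([2, 256, 56, 56])
```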

2.
[Objective] To study a maize seedling canopy segmentation algorithm based on an improved Mask R-CNN that meets the recognition requirements of targeted fertilization in precision operations, improving fertilizer use efficiency and reducing environmental pollution. [Method] Maize seedling images were collected in the field and augmented to build a field dataset; ResNeXt50/101-FPN was used as the feature-extraction network to train the segmentation algorithm, and the training accuracy was compared with that of the original ResNet50/101-FPN; the canopy-recognition algorithm was validated on maize seedling images under different illumination intensities and with accompanying weeds. [Result] Under different illumination intensities, the average recognition precision for targets without accompanying weeds exceeded 95.5% and the segmentation precision reached 98.1%; when accompanying weeds overlapped the maize seedlings, the average recognition precision exceeded 94.7% and the segmentation precision reached 97.9%. The average detection time per image was 0.11 s. [Conclusion] The Mask R-CNN maize seedling and plant-core detection algorithm achieves higher accuracy and segmentation precision and adapts better to different illumination intensities and to seedling-weed overlap with accompanying weeds.

3.
To solve the problem of simultaneously recognizing the stalk positions of multiple loosely placed tobacco leaves in intelligent tobacco grading, a method for simultaneous position recognition of multiple tobacco leaves based on an improved Mask R-CNN is proposed. A K-means clustering algorithm is introduced into the region proposal network of Mask R-CNN to cluster the annotated detection boxes and optimize the five preset anchor scales and three aspect ratios so that they better match the distribution of the tobacco-leaf image data, improving the precision of the generated proposals and shortening recognition time. The effectiveness of the improved Mask R-CNN was verified on a collected tobacco-leaf image dataset. The results show that at an IoU of 0.5, the improved Mask R-CNN took 313 ms per sample, faster than the 326 ms of the original Mask R-CNN, and its mean average precision (mAP) on the test set improved by 3.56%. It also outperformed the Faster R-CNN and SSD detectors in precision and recall.
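A minimal sketch of the anchor-clustering idea: K-means over annotated box width-height pairs with 1 − IoU as the distance, as is common for anchor tuning. The box data and k are placeholders; the abstract does not give the paper's exact clustering setup.

```python
import numpy as np

def iou_wh(boxes: np.ndarray, centers: np.ndarray) -> np.ndarray:
    """IoU between boxes (N, 2) and centers (K, 2) using widths/heights only (boxes share a corner)."""
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + centers[None, :, 0] * centers[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 5, iters: int = 100, seed: int = 0) -> np.ndarray:
    """Cluster (width, height) pairs using 1 - IoU as the distance."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)   # nearest center = highest IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
                        for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]  # sort anchors by area

# Hypothetical annotated box sizes (width, height) in pixels
boxes = np.abs(np.random.default_rng(1).normal(loc=[120, 80], scale=[40, 30], size=(500, 2)))
print(kmeans_anchors(boxes, k=5))
```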

4.
Instance segmentation of group-housed pigs based on recurrent residual attention
Objective: To achieve high-precision segmentation of individual pigs in group-housing environments under conditions such as pig adhesion and occlusion by debris. Method: A total of 45 group-housed pigs aged 20-105 days in 8 pens in a real farming scene were studied. Images captured by a moving camera served as the data source, and data-augmentation operations such as brightness changes and added Gaussian noise yielded 3,834 annotated images. Multiple models combining two backbone networks (ResNet50, ResNet101) with two task networks (Mask R-CNN, Cascade Mask R-CNN) were explored, and the recurrent residual attention (RRA) idea was introduced into both task networks to improve feature-extraction capability and segmentation accuracy without significantly increasing computation. Result: Mask R-CNN-ResNet50 outperformed Cascade Mask R-CNN-ResNet50 by 4.3%, 3.5%, 2.2%, and 2.2% on the AP0.5, AP0.75, AP0.5-0.95, and AP0.5-0.95-large metrics, respectively. Different numbers of RRA modules were added to examine their effect on each task model; experiments showed that adding two RRA modules gave the clearest improvement. Conclusion: The Mask R-CNN-ResNet50 model with two RRA modules can segment group-housed pigs in different scenes more accurately and effectively, providing model support for subsequent pig identification and behavior analysis.

5.
To address the difficulty of canopy segmentation, detection, and tree-shape recognition caused by complex canopy backgrounds in orchards with various tree shapes, this study proposes B-Mask R-CNN, a detection model improved from Mask R-CNN, for fruit-tree canopy detection and tree-shape recognition in natural, complex environments. The model introduces IoU (Intersection over Union)-balanced sampling to improve training; a balanced L1 loss is introduced into the bounding-box loss so that the classification and bounding-box losses converge faster; and the anchor ratios in the region proposal network are adjusted to fit the targets in the dataset, improving model accuracy. A dataset covering five common pruned tree shapes (dwarf close-planting, small-canopy sparse-layer, natural open-center, natural round-head, and Y-shaped) was collected, and five detection models were compared. The results show that B-Mask R-CNN achieved an average detection precision of 98.7%, higher than Mask R-CNN, Faster R-CNN, U-Net, and K-means, with better robustness for tree-shape recognition against complex backgrounds, laying a foundation for the subsequent analysis and application of spraying modes and control parameters in precision spraying.
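The balanced L1 loss named above (from the Libra R-CNN line of work) down-weights large-error outliers relative to smooth L1; a hedged PyTorch sketch with the commonly cited defaults (alpha = 0.5, gamma = 1.5) follows. The abstract does not state the paper's exact hyperparameters.

```python
import torch

def balanced_l1_loss(pred: torch.Tensor, target: torch.Tensor,
                     alpha: float = 0.5, gamma: float = 1.5) -> torch.Tensor:
    """Balanced L1 loss (Libra R-CNN style); promotes gradients from inlier samples.

    Inlier branch (|x| < 1):   alpha/b * (b|x| + 1) * ln(b|x| + 1) - alpha|x|
    Outlier branch (|x| >= 1): gamma|x| + gamma/b - alpha   (continuous at |x| = 1)
    """
    x = (pred - target).abs()
    b = torch.exp(torch.tensor(gamma / alpha)) - 1.0
    inlier = alpha / b * (b * x + 1) * torch.log(b * x + 1) - alpha * x
    outlier = gamma * x + gamma / b - alpha
    return torch.where(x < 1, inlier, outlier).mean()

# Example on hypothetical box-regression offsets
pred = torch.tensor([0.1, 0.8, 2.5])
target = torch.zeros(3)
print(balanced_l1_loss(pred, target))
```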

6.
Online automated identification of farmland pests is an important auxiliary means of pest control. In practical applications, online insect identification systems often fail to locate and identify the target pest accurately owing to factors such as small target size, high similarity between species, and complex backgrounds. To facilitate the identification of insect larvae, a two-stage segmentation method, MRUNet, is proposed in this study. Structurally, MRUNet borrows the practice of object detection before semantic segmentation from Mask R-CNN and then uses an improved lightweight UNet to perform the semantic segmentation. To evaluate the segmentation results reliably, statistical methods were introduced to measure the stability of model performance across samples, in addition to the evaluation metrics commonly used for semantic segmentation. The experimental results show that this two-stage image segmentation strategy is effective for small targets in complex backgrounds. Compared with existing state-of-the-art semantic segmentation methods, MRUNet shows better stability and detail-processing ability under the same conditions. This study provides a reliable reference for the automated identification of insect larvae.

7.
First, an adaptive G-B color-difference method is applied to the initial image to obtain a color-difference grayscale map, and iterative threshold segmentation is used to extract fruit regions of interest. Second, Blob analysis is performed on the morphologically processed region-of-interest image, computing the eccentricity and pixel area of each Blob and removing Blobs that clearly deviate from fruit-shape characteristics. Finally, an improved circular Hough transform is applied to detect potential circular fruit candidates, and a discriminant model fusing histogram-of-oriented-gradients features with a grid-search-optimized support vector machine further removes false fruit targets to improve apple-detection precision. Experimental results show that the method achieved a detection accuracy of 88.51% for small green apples in a natural orchard environment, with a miss rate of 11.49% and a false-alarm rate of 4.84%, and an overall performance index of 90.29%, indicating strong detection capability and good robustness for apples in the young-fruit stage and providing a reference for automated fruit detection by harvesting robots during that stage.
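A minimal sketch of the first two steps (G−B color difference followed by iterative threshold selection); the exact "adaptive" weighting used in the paper is not given in the abstract, so a plain G−B difference is used here as a stand-in, and the example image is synthetic.

```python
import numpy as np

def gb_difference(img_rgb: np.ndarray) -> np.ndarray:
    """Grayscale map from the G - B color difference, clipped to [0, 255]."""
    g = img_rgb[..., 1].astype(np.float32)
    b = img_rgb[..., 2].astype(np.float32)
    return np.clip(g - b, 0, 255).astype(np.uint8)

def iterative_threshold(gray: np.ndarray, eps: float = 0.5) -> float:
    """Isodata-style iterative threshold: repeatedly average the means of the two classes."""
    t = gray.mean()
    while True:
        lo, hi = gray[gray <= t], gray[gray > t]
        new_t = 0.5 * (lo.mean() + hi.mean()) if lo.size and hi.size else t
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

# Hypothetical RGB image; fruit pixels would appear bright in the G - B map
img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
diff = gb_difference(img)
t = iterative_threshold(diff)
mask = diff > t   # binary region-of-interest mask
print(t, mask.mean())
```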

8.
《农业科学学报》2023,22(6):1671-1683
Maize tassel detection is essential for agronomic management in maize planting and breeding, with applications in yield estimation, growth monitoring, intelligent picking, and disease detection. Detecting maize tassels in the field is challenging, however, because they are often obscured by widespread occlusions and differ in size, morphology, and color at different growth stages. This study proposes the SEYOLOX-tiny model to detect maize tassels in the field more accurately and robustly. First, the unmanned aerial vehicle (UAV) data-acquisition method balances image quality against acquisition efficiency and captures maize tassel images from different periods to enrich the dataset. Second, the detection network extends YOLOX by embedding an attention mechanism to extract critical features and suppress the noise caused by adverse factors (e.g., occlusions and overlaps), making it more suitable and robust for operation in complex natural environments. Experimental results verify the research hypothesis and show a mean average precision (mAP@0.5) of 95.0%. The mAP@0.5, mAP@0.5–0.95, mAP@0.5–0.95 (area=small), and mAP@0.5–0.95 (area=medium) values increased by 1.5, 1.8, 5.3, and 1.7%, respectively, compared with the original model. The proposed method effectively meets the precision and robustness requirements of the vision system for maize tassel detection.

9.
Research on real-time detection of orchard cherries based on the YOLOv4 model
To solve the problems of difficult target recognition and low detection accuracy when monitoring cherries at different growth stages in natural environments, a convolutional neural network model for cherry classification and detection based on an improved CSPDarknet53 is proposed. The feature-extraction network used by the classic YOLOv4 is deep and can extract high-level abstract features, but its local perception of targets is weak. Fusing the CBAM attention mechanism into the CSPDarknet53 structure enhances local feature perception and further improves detection precision, so its feature extraction and object detection outperform the original algorithm. The feature-layer outputs of the feature-extraction network were adjusted, changing the third-layer output to the second layer to capture more semantic information about small targets, and the k-means algorithm was used to optimize the prior-box sizes to fit cherry targets; ablation experiments were also analyzed. The results show that the improved YOLOv4 cherry detection model achieved a mean average precision of 92.31% and an F1 score of 87.3%, outperforming Faster R-CNN, YOLOv3, and the original YOLOv4, with a detection speed of 40.23 frames per second. It is suitable for cherry monitoring in natural environments and provides a theoretical and technical basis for automatic monitoring of fruit growth status in orchards.
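A hedged PyTorch sketch of the CBAM module fused into the backbone: sequential channel attention (a shared MLP over average- and max-pooled descriptors) and spatial attention (a 7×7 convolution over channel-wise average and max maps). The reduction ratio, kernel size, and example feature shape are the values commonly used for CBAM, not necessarily the paper's.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: aggregate spatial information by average- and max-pooling
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: aggregate channel information by mean and max
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 512, 19, 19)   # hypothetical CSPDarknet53 stage output
print(CBAM(512)(feat).shape)
```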

10.
Objective: To introduce and improve the region-based convolutional neural network Faster R-CNN for intelligent diagnosis of maize diseases with complex backgrounds and similar lesion features in real field environments. Method: 1,150 images of 9 common diseases with complex backgrounds were obtained from maize fields and public dataset websites; after manual annotation, the original images were expanded by offline data augmentation. Faster R-CNN was adapted by adding batch-normalization layers to the convolutional layers and introducing a center loss to build a hybrid loss function, improving the recognition accuracy for similar lesions. Stochastic gradient descent was used to optimize training; four pretrained convolutional structures were used in turn as the feature-extraction network of Faster R-CNN, and the best feature-extraction network was determined by testing. The trained model was compared on test sets captured under different weather conditions, and the improved Faster R-CNN was compared against the unimproved Faster R-CNN and SSD. Result: Within the improved Faster R-CNN framework, the VGG16 convolutional structure performed best as the feature-extraction network. On the test images, the average precision was 0.9718, the average recall 0.9719, the F1 score 0.9718, and the overall average accuracy reached 97.23%; recognition was better on sunny-day images than on cloudy-day images. Compared with the unimproved Faster R-CNN, the average precision was 0.0886 higher and per-image detection time 0.139 s shorter; compared with SSD, the average precision was 0.0425 higher and per-image detection time 0.018 s shorter, showing that the improved Faster R-CNN outperforms both for intelligent detection of maize diseases with complex backgrounds in the field. Conclusion: Introducing the improved Faster R-CNN into intelligent diagnosis of maize diseases under complex field conditions is feasible, with high accuracy and fast detection, avoiding the subjectivity of traditional manual identification and providing a basis for timely and precise control of maize diseases in the field.
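A hedged sketch of the center-loss term combined with cross-entropy to form a hybrid loss; the weighting factor lambda and the feature dimension are placeholders, since the abstract does not give the paper's values.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Penalizes the distance between features and their learned class centers."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        return 0.5 * ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

# Hybrid loss = cross-entropy + lambda * center loss (lambda = 0.01 is a placeholder)
num_classes, feat_dim, lam = 9, 256, 0.01
center_loss = CenterLoss(num_classes, feat_dim)
ce = nn.CrossEntropyLoss()

feats = torch.randn(8, feat_dim)            # hypothetical per-ROI feature vectors
logits = torch.randn(8, num_classes)        # hypothetical classifier outputs
labels = torch.randint(0, num_classes, (8,))
loss = ce(logits, labels) + lam * center_loss(feats, labels)
print(loss.item())
```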

11.
Objective: Image monitoring of wild animals with infrared-triggered cameras is an effective means of wildlife conservation and management. To address the low accuracy of automatic species recognition in wildlife monitoring images caused by complex wild backgrounds, an automatic species recognition method based on regions of interest (ROI) and convolutional neural networks (CNN) is proposed. Method: Images of five nationally protected terrestrial species (red deer, goral, lynx, roe deer, and wild boar) captured by infrared-triggered cameras in the Saihanwula National Nature Reserve, Inner Mongolia, were used as experimental samples. A regression-based object-detection method was used to detect and segment the animal regions in the monitoring images and generate ROI images, reducing the interference of complex background information with species recognition; the sample data were augmented by cropping, affine transformation, and other operations. A global-local dual-channel VGG16 network model was built and trained on the sample images, with a classifier attached to output the species recognition results. A dual-channel network based on VGG19 was also trained on the same samples for comparison with our training results, and the samples were additionally fed into VGG16, R-CNN, and Fast R-CNN to compare recognition under different algorithms. Result: The proposed model achieved a mean average precision (MAP) of 0.912 on the test set, higher than the VGG19-based model and than VGG16, R-CNN, and Fast R-CNN. Conclusion: Compared with the other algorithms, the proposed species-recognition model is better suited to wildlife monitoring images with complex backgrounds and achieves a higher MAP and better recognition results.

12.
A vision-based weed control robot for agricultural field application requires robust vegetation segmentation. The output of vegetation segmentation is the fundamental element in the subsequent process of weed and crop discrimination as well as weed control. There are two challenging issues for robust vegetation segmentation under agricultural field conditions: (1) overcoming strongly varying natural illumination; (2) avoiding the influence of shadows under direct sunlight. One way to resolve the issue of varying natural illumination is to use high dynamic range (HDR) camera technology. HDR cameras, however, do not resolve the shadow issue: in many cases, shadows tend to be classified during segmentation as part of the foreground, i.e., vegetation regions. This study proposes an algorithm for ground shadow detection and removal based on color space conversion and a multilevel threshold, and assesses the advantage of using this algorithm in vegetation segmentation under natural illumination conditions in an agricultural field. Applying shadow removal improved vegetation segmentation, with average improvements of 20, 4.4, and 13.5% in precision, specificity, and modified accuracy, respectively. The average processing time for vegetation segmentation with shadow removal was 0.46 s, which is acceptable for real-time application (<1 s required). The proposed ground shadow detection and removal method enhances the performance of vegetation segmentation under natural illumination conditions in the field and is feasible for real-time field applications.
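The abstract does not spell out which color space or threshold levels the authors use, so the following is only an illustrative OpenCV/NumPy sketch of the general idea: convert to HSV and flag low-brightness pixels as ground shadow before vegetation segmentation. The color space, thresholds, and the ExG-based segmentation stand-in are all assumptions, not the paper's method.

```python
import numpy as np
import cv2

def remove_ground_shadow(img_bgr: np.ndarray, v_levels=(60, 110)) -> np.ndarray:
    """Return a mask of non-shadow pixels using a simple multilevel brightness threshold.

    v_levels are illustrative thresholds on the HSV value channel: below v_levels[0]
    -> strong shadow; between the two -> weak shadow (dropped only when unsaturated).
    """
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    strong_shadow = v < v_levels[0]
    weak_shadow = (v >= v_levels[0]) & (v < v_levels[1]) & (s < 60)
    return ~(strong_shadow | weak_shadow)

def excess_green_mask(img_bgr: np.ndarray) -> np.ndarray:
    """Simple ExG-based vegetation mask as a stand-in for the segmentation stage."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(img_bgr)]
    exg = 2 * g - r - b
    return exg > 20   # illustrative threshold

img = cv2.imread("field.jpg")               # hypothetical field image path
if img is not None:
    veg = excess_green_mask(img) & remove_ground_shadow(img)
    print("vegetation pixel fraction:", veg.mean())
```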

13.
To address the robustness problems of current rapeseed pest recognition with respect to background, viewing angle, pose, and illumination, a rapeseed pest detection method based on deep convolutional neural networks is proposed. A pest-detection model is first built on a convolutional neural network and a region proposal network, the model is then implemented in the TensorFlow deep-learning framework, and finally the results are compared and analyzed. The model uses a VGG16 network to extract features from rapeseed pest images, the region proposal network generates preliminary candidate boxes for the pests, and Fast R-CNN classifies and localizes the candidate boxes. The results show that the method can quickly and accurately detect five rapeseed pests (aphids, cabbage caterpillars (larvae), cabbage bugs, flea beetles, and leaf beetles) with an average accuracy of 94.12%, which is 28%, 23%, 12%, and 2% higher than R-CNN, Fast R-CNN, a multi-feature-fusion method, and a color-feature-extraction method, respectively.

14.
To solve the problem of inaccurate detection of small rice lesions, Rice-YOLOv3, a rice leaf disease detection method based on an improved YOLOv3, is proposed. First, the K-means++ clustering algorithm is used to compute new anchor sizes so that the anchors match the dataset. Second, the Mish activation function replaces the Leaky ReLU activation in the YOLOv3 backbone, and its smoothness improves detection accuracy; meanwhile, CSPNet is combined with the residual modules of DarkNet53 to increase the learning capacity of the network while avoiding duplicated gradient information, improving detection precision and speed. Finally, the ECA and CBAM attention modules are introduced into the FPN layers to address feature extraction where feature layers are stacked and to improve the detection of small lesions. During training, the network was pretrained on the COCO dataset to obtain pretrained weights and improve training. The results show that, on the test set, Rice-YOLOv3 achieved a mean average precision (mAP) of 92.94% for three rice leaf diseases, with mAP values of 93.34%, 89.68%, and 95.80% for rice blast, brown spot, and bacterial leaf blight, respectively. Compared with YOLOv3, the mAP of Rice-YOLOv3 increased by 6.05 percentage points and the speed by 2.8 frames/s; its ability to detect small lesions of rice blast and brown spot is markedly enhanced, and it can detect the …
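Two of the ingredients named above are easy to show in isolation; below is a hedged PyTorch sketch of the Mish activation and an ECA-style channel attention block. The 1-D convolution kernel size and example feature shape are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    """Mish activation: smooth, non-monotonic alternative to Leaky ReLU."""
    return x * torch.tanh(F.softplus(x))

class ECA(nn.Module):
    """Efficient Channel Attention: 1-D conv over the channel descriptor, no dimensionality reduction."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.mean(dim=(2, 3))                       # (B, C) global average descriptor
        y = self.conv(y.unsqueeze(1)).squeeze(1)     # local cross-channel interaction
        return x * torch.sigmoid(y)[..., None, None]

feat = torch.randn(2, 256, 52, 52)                   # hypothetical FPN feature map
print(mish(feat).shape, ECA()(feat).shape)
```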

15.
Wheat is one of the most important food crops. To address the low efficiency of manual in-field wheat ear counting and yield prediction, a high-resolution real-time detection method for small, dense wheat ears based on deep learning is proposed. The wheat ear image dataset was preprocessed by image splitting, annotation, and augmentation; a YOLOv4 network model was built on TensorFlow, adjusted and improved, and then trained by transfer learning. The model was compared with YOLOv3, YOLOv4-tiny, and Faster R-CNN; the practicality and limitations of the improved model were analyzed; and the key factors affecting the performance of wheat ear detection models were examined. Image splitting demonstrated that adjusting image resolution to find the optimal pixel ratio occupied by wheat ears increases the contrast between foreground and background, with a notable effect on small, dense ears. Tests of the improved model show high detection accuracy and strong robustness: the mean average precision (mAP) across images of different resolutions, varieties, and growth stages was 93.7%, and the detection speed was 52 frames per second, meeting the requirement for high-precision, real-time detection of wheat ears. The results provide technical support for in-field wheat ear counting and yield prediction.
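A minimal sketch of the image-splitting step: tiling a high-resolution field image into overlapping crops so that individual ears occupy a larger fraction of each detector input. Tile size, overlap, and the example image dimensions are placeholders.

```python
import numpy as np

def split_image(img: np.ndarray, tile: int = 1024, overlap: int = 128):
    """Yield (x, y, crop) tiles covering the image with the given overlap."""
    h, w = img.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield x, y, img[y:y + tile, x:x + tile]

# Hypothetical 4000x3000 field image split into 1024-pixel tiles before detection
img = np.zeros((3000, 4000, 3), dtype=np.uint8)
tiles = list(split_image(img))
print(len(tiles), tiles[0][2].shape)
```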

16.
Automated harvesting requires accurate detection and recognition of fruit within a tree canopy in real time in uncontrolled environments. However, occlusion, variable illumination, variable appearance, and texture make this task a complex challenge. Our research discusses the development of a machine vision system capable of recognizing occluded green apples within a tree canopy. This involves detecting "green" apples within scenes of "green leaves", shadow patterns, branches, and other objects found in natural tree canopies. The system uses both thermal infra-red and color image modalities to achieve improved performance. Maximization of mutual information is used to find the optimal registration parameters between images from the two modalities. We use two approaches for apple detection based on low- and high-level visual features. High-level features are global attributes captured by image processing operations, while low-level features are strong responses to primitive parts-based filters (such as Haar wavelets). These features are applied separately to color and thermal infra-red images to detect apples against the background. The two approaches are compared, and the low-level feature-based approach is shown to be superior (74% recognition accuracy) to the high-level visual feature approach (53.16% recognition accuracy). Finally, a voting scheme is used to improve the detection results, which reduces false alarms with little effect on the recognition rate. The resulting classifiers, acting independently, can partially recognize the on-tree apples; when combined, recognition accuracy increases.
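As a sketch of the registration criterion described above, the following NumPy function computes the mutual information between two equally sized grayscale images from their joint histogram; an optimizer (not shown) would search translation/rotation/scale parameters that maximize this value. The bin count and synthetic example are assumptions.

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 64) -> float:
    """Mutual information between two equally sized grayscale images (higher = better aligned)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                 # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of image B
    outer = p_a * p_b                          # product of marginals, shape (bins, bins)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / outer[nz])))

# Hypothetical thermal and grayscale-color images of the same scene
rng = np.random.default_rng(0)
color_gray = rng.integers(0, 256, size=(240, 320)).astype(np.float64)
thermal = color_gray * 0.5 + rng.normal(0, 10, size=color_gray.shape)   # correlated modality
print(mutual_information(color_gray, thermal))
```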

17.
Objective: To achieve accurate and rapid crop disease detection, reduce the cost of manual diagnosis, and mitigate the impact of diseases on crop yield and quality. Method: Based on an analysis of crop diseases and lesion features, an intelligent detection and recognition model based on YOLOX-Nano improved with a convolutional attention mechanism is proposed. The model uses CSPDarkNet as the backbone; the convolutional block attention module (CBAM) is introduced into the feature pyramid network (FPN) of the YOLOX-Nano architecture; Mixup data augmentation is introduced during training; the classification loss is changed from binary cross-entropy loss (BCE Loss) to the focal loss (Focal Loss), and the regression loss from GIoU Loss to the CenterIOU Loss designed in this paper; and a transfer-learning strategy is used to train the improved YOLOX-Nano model, thereby improving crop disease detection accuracy. Result: The improved YOLOX-Nano model has only 0.98×10^6 parameters, takes about 0.187 s to detect a single image on mobile devices, and reaches an average recognition precision of 99.56%. Practical results show that it can quickly and effectively detect and recognize common diseases of crops such as apple, maize, grape, strawberry, potato, and tomato, achieving a balance between accuracy and speed. Conclusion: The improved model not only recognizes crop leaf diseases with high accuracy and fast detection while requiring few parameters and little computation, but is also easy to deploy on mobile devices such as phones. It enables precise localization and recognition of multiple crop diseases in complex field environments and is of practical significance for guiding the early control of crop diseases.
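A hedged PyTorch sketch of the focal-loss replacement for BCE described above; alpha and gamma use commonly cited defaults rather than the paper's values, and the CenterIOU regression loss is the paper's own design and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss: down-weights easy examples so that hard ones dominate the gradient."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)           # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Hypothetical per-anchor class logits and 0/1 targets for 6 disease classes
logits = torch.randn(16, 6)
targets = (torch.rand(16, 6) > 0.8).float()
print(focal_loss(logits, targets))
```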

18.
Robotic harvesting of citrus fruit requires the manipulator to perceive and avoid obstacles during its motion. Branches are segmented and labeled according to their characteristics, a Mask R-CNN deep neural network is trained to recognize them, and the recognition results are combined with the three-dimensional information of key obstacle points obtained from a Kinect v2 camera for reconstruction. An improved rapidly-exploring random tree (RRT) algorithm is applied for obstacle-avoidance motion planning of the manipulator. A simulation and control platform was built, and the method was verified in a laboratory environment on a citrus harvesting robot developed by our group. The results show an obstacle-avoidance success rate of 90.7% and an average planning time of 1.5 s, laying a foundation for further harvesting in real environments.
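A minimal 2-D sketch of the rapidly-exploring random tree (RRT) planner named above; the workspace bounds, circular obstacle model, and step size are placeholders, whereas the actual planner operates in the manipulator's configuration space with the reconstructed branch obstacles.

```python
import random, math

def rrt(start, goal, obstacles, step=0.2, max_iter=5000, goal_tol=0.3):
    """Plain RRT in a 2-D plane; obstacles are (cx, cy, radius) circles."""
    def collides(p):
        return any(math.dist(p, (cx, cy)) < r for cx, cy, r in obstacles)

    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        sample = goal if random.random() < 0.1 else (random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))  # nearest node
        near = nodes[i]
        d = math.dist(near, sample)
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d) if d > 0 else near
        if collides(new):
            continue                                    # reject steps that hit an obstacle
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:             # goal reached: backtrack the path
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j]); j = parent[j]
            return path[::-1]
    return None

print(rrt(start=(0.5, 0.5), goal=(9.0, 9.0), obstacles=[(5, 5, 1.5), (3, 7, 1.0)]))
```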

19.
Chen Shumian, Xiong Juntao, Jiao Jingmian, Xie Zhiming, Huo Zhaowei, Hu Wenxin. 《Precision Agriculture》2022, 23(5): 1515-1531

Citrus fruits do not ripen at the same time in natural environments and exhibit different maturity stages on the tree, so citrus picking robots need to harvest selectively. The visual attention mechanism reflects a physiological phenomenon whereby human eyes focus on regions that are salient from their surround; the degree to which a region contrasts with its surround is called visual saliency. This study proposes a citrus fruit maturity detection method combining visual saliency and convolutional neural networks to identify three maturity levels of citrus fruits. The proposed method has two stages: detection of citrus fruits on trees, and detection of fruit maturity. In stage one, the object detection network YOLOv5 is used to identify the citrus fruits in the image. In stage two, an improved visual saliency detection algorithm generates saliency maps of the fruits, and the information from the RGB images and the saliency maps is combined to determine the fruit maturity class using a 4-channel ResNet34 network. Comparison experiments were conducted between the proposed method and common RGB-based machine learning and deep learning methods. The experimental results show that the proposed method yields an accuracy of 95.07%, about 3.14% and 18.24% higher than the best RGB-based CNN model (VGG16) and the best machine learning model (KNN), respectively. The results demonstrate the validity of the proposed fruit maturity detection method, and this work can provide technical support for intelligent visual detection by selective harvesting robots.
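A hedged sketch of the 4-channel input adaptation described in stage two: widening the first convolution of a torchvision ResNet34 so that it accepts RGB plus a saliency channel. The way the RGB filter weights are copied into the extra channel and the use of random initialization here are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

def four_channel_resnet34(num_classes: int = 3) -> nn.Module:
    """ResNet34 whose first conv takes 4 channels (RGB + saliency map)."""
    model = resnet34(weights=None)                       # pretrained weights could be used instead
    old = model.conv1
    model.conv1 = nn.Conv2d(4, old.out_channels, kernel_size=7, stride=2, padding=3, bias=False)
    with torch.no_grad():
        model.conv1.weight[:, :3] = old.weight           # keep the original RGB filters
        model.conv1.weight[:, 3:] = old.weight.mean(dim=1, keepdim=True)  # init saliency channel
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # three maturity classes
    return model

rgb = torch.rand(2, 3, 224, 224)
saliency = torch.rand(2, 1, 224, 224)                    # hypothetical saliency maps
x = torch.cat([rgb, saliency], dim=1)
print(four_channel_resnet34()(x).shape)                   # torch.Size([2, 3])
```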


20.
Autophagy is an important degradation pathway that is highly conserved across eukaryotic evolution. Damaged proteins or organelles are enclosed in double-membrane autophagic vesicles and transported to lysosomes (in animals) or vacuoles (in yeast and plants) for degradation, ultimately recycling cellular contents. As research on autophagy in animals and yeast has deepened, plant autophagy has attracted increasing attention, and related research is expanding from model plants to crops. To better understand the role of autophagy in crop yield, quality, and stress resistance, this review summarizes recent progress on crop autophagy and discusses the mechanisms by which autophagy regulates the formation of important agronomic traits, providing a reference for further improving crop agronomic traits and agricultural production efficiency.
