Funding: Key Industrial Innovation Chain (Cluster) Project of Shaanxi Province, Agriculture Field (No. 2019ZDLNY02-05); National Key Research and Development Program of China (2017YFD0701603); Fundamental Research Funds for the Central Universities (No. 2452019027)
Received: 2021-04-12; Revised: 2021-06-18

Multi-target cow mouth tracking and rumination monitoring using Kalman filter and Hungarian algorithm
Citation: Mao Yanru, Niu Tong, Wang Peng, Song Huaibo, He Dongjian. Multi-target cow mouth tracking and rumination monitoring using Kalman filter and Hungarian algorithm[J]. Transactions of the Chinese Society of Agricultural Engineering, 2021, 37(19): 192-201.
Authors:Mao Yanru  Niu Tong  Wang Peng  Song Huaibo  He Dongjian
Institution: 1. College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China; 2. Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture and Rural Affairs, Yangling 712100, China; 3. Shaanxi Key Laboratory of Agricultural Information Perception and Intelligent Service, Yangling 712100, China
Abstract: Rumination enables cows to chew forage more thoroughly for better digestion, and is therefore closely related to the health, production, reproduction, and welfare of cows. Perceiving rumination behavior has become one of the most important steps in modern dairy farm management. However, traditional monitoring of rumination behavior depends mainly on human labor, which is time-consuming and laborious. In this study, an intelligent monitoring method was proposed for automatic multi-target mouth tracking and rumination monitoring of cows in the complex environment of dairy farms, using the Kalman filter and the Hungarian algorithm. The upper and lower jaw regions of cow mouths were first recognized by the YOLOv4 model. Subsequently, the upper jaw region was tracked by the Kalman filter and the Hungarian algorithm. The upper and lower jaw regions of the same cow were then associated to obtain the chewing curve of the mouth. Finally, rumination-related information was extracted from the chewing curve, realizing mouth tracking and rumination monitoring of multiple cows. In addition, unmatched tracking boxes were retained and expanded to deal with the identity changes of cows caused by rapid head swings or occlusion by shed railings. A total of 66 videos of ruminating cows were collected in an actual farm environment; 58 of them were split into frames to build the dataset for the YOLOv4 model, and the remaining 8 were used to verify the tracking and rumination-monitoring methods. The videos covered sunny, cloudy, and rainy days, with 2 to 3 cows per video, ruminating while lying or standing. Interference factors such as rapid head swings of ruminating cows, occlusion by shed railings, and the movement of other cows were also present. Two indexes were selected to evaluate the detection performance of the YOLOv4 model: average precision (AP) and mean average precision (mAP).
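The detection-to-track association step described above can be sketched as Hungarian assignment over an IoU cost matrix. This is a minimal illustration under assumed conventions (boxes as `(x1, y1, x2, y2)` tuples, an IoU gating threshold of 0.3), not the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm


def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def match_tracks(tracks, detections, iou_threshold=0.3):
    """Associate Kalman-predicted track boxes with per-frame detections.

    Returns (matches, unmatched_tracks, unmatched_detections); unmatched
    tracks are the ones a tracker would retain (and could expand) to
    survive brief occlusions, as the paper describes.
    """
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    # Cost = 1 - IoU, so the Hungarian solver maximizes total overlap.
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = []
    un_t, un_d = set(range(len(tracks))), set(range(len(detections)))
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_threshold:  # gate out weak assignments
            matches.append((r, c))
            un_t.discard(r)
            un_d.discard(c)
    return matches, sorted(un_t), sorted(un_d)
```

A full tracker would feed each matched detection back into that track's Kalman filter as the measurement update, and spawn new tracks from unmatched detections.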
The dataset contained 6 400 training images and 800 test images. The results showed that the average precisions of the YOLOv4 model were 93.92% and 92.46% for the detection of the upper and lower jaw regions, respectively. The mean average precision of YOLOv4 reached 93.19%, which was 1.04, 4.25, and 1.74 percentage points higher than that of the YOLOv5, SSD, and Faster R-CNN models, respectively. Four indexes were selected to verify the tracking and rumination-monitoring performance under different environments: the identity-switch rate, the identity-match rate, the detection rate of chewing times, and the tracking speed. The proposed method realized stable multi-target tracking of the mouth regions of cows in complex environments, and effectively alleviated the identity changes caused by rapid head swings and occlusion by shed railings. The average identity-match rate of the upper and lower jaws was 99.89%, and the average tracking speed was 31.85 frames/s. The average detection rate of chewing times was 96.93%, and the average error of rumination time was 1.48 s. The findings can provide a reference for the intelligent monitoring and analysis of the rumination behavior of multiple cows (or the moving body parts of other group-housed animals) in actual farming.
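The chewing-count step can be sketched as peak counting on the jaw-distance signal. This is a hedged illustration, not the authors' exact procedure: the center-distance signal and the prominence threshold are assumptions introduced here to suppress detection jitter:

```python
import numpy as np
from scipy.signal import find_peaks


def count_chews(jaw_distance, min_prominence=2.0):
    """Count chewing cycles as prominent peaks in the per-frame distance
    between the upper- and lower-jaw box centers.

    `jaw_distance` is a 1-D array of pixel distances over time;
    `min_prominence` is an assumed tuning parameter that rejects small
    oscillations caused by detection noise.
    """
    signal = np.asarray(jaw_distance, dtype=float)
    peaks, _ = find_peaks(signal, prominence=min_prominence)
    return len(peaks)
```

Rumination duration could then be estimated from the frame span between the first and last detected peak divided by the video frame rate.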
Keywords:machine vision  image recognition  cows  algorithms  ruminant behavior  mouth region  multi-target tracking  YOLOv4