A lightweight safflower recognition method based on improved YOLOv8n
Citation: ZHANG Xinyue, HU Guangrui, LI Puhang, CAO Xiaoming, ZHANG Hao, CHEN Jun, YANG Liangliang. A lightweight safflower recognition method based on improved YOLOv8n[J]. Transactions of the Chinese Society of Agricultural Engineering, 2024, 40(13): 163-170
Authors: ZHANG Xinyue  HU Guangrui  LI Puhang  CAO Xiaoming  ZHANG Hao  CHEN Jun  YANG Liangliang
Affiliations: College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China; Faculty of Engineering, Kitami Institute of Technology, Hokkaido 090-8507, Japan
Funding: National Natural Science Foundation of China (No. 32272001); Ningxia Hui Autonomous Region "open competition" project (No. 2022BBF01002)
Abstract: To address the problem that safflower recognition performance during intelligent harvesting is easily limited by the complex field environment and by the computing resources of the equipment, this study proposed a lightweight safflower recognition method based on an improved YOLOv8n, so that the model can be deployed on mobile devices for object detection. The VanillaNet lightweight network structure was applied to replace the backbone feature-extraction network of YOLOv8n, reducing model complexity; the large separable kernel attention (LSKA) module was introduced into the feature-fusion network to reduce storage and computational resource consumption; the loss function of YOLOv8n was changed from the complete intersection over union (CIoU) to the wise intersection over union (WIoU), a dynamic non-monotonic focusing mechanism, to improve the overall performance of the detector; and stochastic gradient descent (SGD) was selected for model training to improve robustness. Experimental results showed that the improved lightweight model reached 123.46 frames per second (FPS), 7.41% higher than the original YOLOv8n, with a model size of 3.00 MB, only 50.17% of the original; its precision (P) and mean average precision (mAP) reached 93.10% and 96.40%, respectively. Compared with the YOLOv5s and YOLOv7-tiny detection models, the FPS was 25.93% and 19.76% higher, and the model size was 21.90% and 25.86% of those models, respectively. The results provide technical support for the subsequent development of intelligent safflower-harvesting equipment.
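As a rough illustration of why the separable-kernel design behind LSKA saves storage, the sketch below compares per-channel parameter counts of a full 2-D depth-wise kernel against a separable (1 × k) + (k × 1) pair. This is only the general decomposition idea, not the paper's exact module, which also involves dilated and point-wise convolutions; the function names and kernel sizes are illustrative.

```python
# Per-channel parameter count of a dense k x k depth-wise kernel
# versus a separable pair of 1-D kernels (1 x k) and (k x 1).
# Illustrative sketch only: LSKA additionally uses dilated and
# point-wise convolutions that are not counted here.

def dense_2d_params(k: int) -> int:
    """Parameters per channel of a full k x k depth-wise kernel."""
    return k * k

def separable_params(k: int) -> int:
    """Parameters per channel after splitting into (1 x k) and (k x 1)."""
    return 2 * k

if __name__ == "__main__":
    for k in (7, 23, 35):
        dense, sep = dense_2d_params(k), separable_params(k)
        print(f"k={k}: dense={dense}, separable={sep}, "
              f"saving={1 - sep / dense:.1%}")
```

The saving grows with kernel size, which is why the decomposition matters most for the large receptive fields that attention-style modules aim for.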

Keywords: image recognition  models  object detection  YOLOv8n  VanillaNet  lightweight  safflower picking
Received: 2024-03-05
Revised: 2024-05-26

Recognizing safflower using improved lightweight YOLOv8n
ZHANG Xinyue, HU Guangrui, LI Puhang, CAO Xiaoming, ZHANG Hao, CHEN Jun, YANG Liangliang. Recognizing safflower using improved lightweight YOLOv8n[J]. Transactions of the Chinese Society of Agricultural Engineering, 2024, 40(13): 163-170
Authors:ZHANG Xinyue  HU Guangrui  LI Puhang  CAO Xiaoming  ZHANG Hao  CHEN Jun  YANG Liangliang
Affiliation: College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China; Faculty of Engineering, Kitami Institute of Technology, Hokkaido 090-8507, Japan
Abstract: Safflower is one of the most important cash crops in China, with production concentrated in Xinjiang, Gansu, and Ningxia. At present, however, safflower harvesting relies mainly on manual labour, and the operation is easily affected by weather. Intelligent harvesting can be expected to improve harvesting efficiency while saving labour costs. Previous research has focused on pneumatic, pulling, combing, and cutting harvesters, yet manual work is still required during harvesting. Autonomous operation can be realized by combining target detection and navigation in harvesting robots; however, the complex field environment has limited accurate recognition and localization during harvesting. This study aims to improve safflower recognition performance in the complex field environment during intelligent harvesting. A lightweight safflower recognition method was proposed using an improved YOLOv8n, so that the model could be deployed on mobile devices with limited computational resources for detection. A dataset of 2309 images was created and annotated into two classes: picked and not picked. Safflower blooming was categorized into four stages, namely the bud, first flowering, prime bloom, and septum stages. The prime bloom stage, the most economically beneficial period, is the best picking time; therefore, only safflowers at the prime bloom stage were picked, rather than those at the bud, first flowering, or septum stages. The improvement procedures were as follows. Firstly, the VanillaNet lightweight network structure was applied to replace the backbone of YOLOv8n, in order to reduce the complexity of the model. Secondly, the large separable kernel attention (LSKA) module was introduced into the neck, in order to reduce storage and computational resource consumption.
Thirdly, the loss function of YOLOv8n was revised from the complete intersection over union (CIoU) to the wise intersection over union (WIoU), in order to improve the overall performance of the detector. Finally, stochastic gradient descent (SGD) was chosen to train the model for robustness. The experimental results showed that the frames per second (FPS) of the improved lightweight model increased by 7.41%, while its weight file was only 50.17% of the original one. The precision (P) and mean average precision (mAP) reached 93.10% and 96.40%, respectively. Furthermore, compared with the YOLOv5s and YOLOv7-tiny models, the FPS was improved by 25.93% and 19.76%, and the weight file was only 21.90% and 25.86% of theirs, respectively. Meanwhile, better robustness was achieved by the improved model. The Jetson Orin NX platform was then selected for deployment testing. The single-image detection times of YOLOv8n and the improved YOLOv8n-VLWS were 0.38 s and 0.27 s, respectively, a 28.95% reduction relative to the original model. High-precision, lightweight real-time detection of safflower in the field was thus realized. The findings can provide technical support for developing intelligent safflower-harvesting equipment.
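For readers unfamiliar with the loss change, the following is a minimal sketch of the idea commonly attributed to WIoU v1: the plain IoU loss is scaled by a non-monotonic focusing factor computed from the distance between box centres and the size of the smallest enclosing box. The box format `(x1, y1, x2, y2)` and function names are illustrative assumptions, not taken from the paper.

```python
import math

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def wiou_v1_loss(pred, gt):
    """IoU loss scaled by a centre-distance focusing factor.

    R = exp(d^2 / (Wg^2 + Hg^2)), where d is the distance between the
    box centres and (Wg, Hg) is the size of the smallest enclosing box
    (treated as a constant, i.e. detached, during backpropagation).
    """
    l_iou = 1.0 - iou(pred, gt)
    cx_p, cy_p = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    cx_g, cy_g = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    wg = max(pred[2], gt[2]) - min(pred[0], gt[0])
    hg = max(pred[3], gt[3]) - min(pred[1], gt[1])
    r = math.exp(((cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2)
                 / (wg ** 2 + hg ** 2))
    return r * l_iou

# A perfectly aligned prediction costs nothing; an offset prediction is
# penalised more heavily than by the plain IoU loss alone.
print(wiou_v1_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # 0.0
print(wiou_v1_loss((0, 0, 2, 2), (1, 0, 3, 2)))
```

The focusing factor amplifies the gradient contribution of poorly localized boxes, which is the mechanism the abstract refers to as a "dynamic non-monotonic focusing mechanism".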
Keywords: image recognition  models  target detection  YOLOv8n  VanillaNet  lightweight  safflower picking