Indexed in: Peking University Core Journals (北大核心), CSCD
Author affiliations: School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710048, Shaanxi, China (XU Jian, ZHANG Linyao, YUAN Hao, LIU Xiuping); Shenzhen Robotel Robot Technology Co., Ltd. (深圳罗博泰尔机器人技术有限公司), Shenzhen 518109, Guangdong, China (YAN Huanying)
Funding: Supported by a Shaanxi Provincial Department of Science and Technology project (2018GY-173) and a Xi'an Science and Technology Bureau project (GXYD7.5)
Received: 2022-01-04; Revised: 2022-02-28

Research on ATOM multi-attention fusion workpiece tracking method
XU Jian, ZHANG Linyao, YUAN Hao, LIU Xiuping and YAN Huanying. Research on ATOM multi-attention fusion workpiece tracking method[J]. Journal of Optoelectronics·Laser, 2022(10): 1047-1054.
Authors: XU Jian, ZHANG Linyao, YUAN Hao, LIU Xiuping, YAN Huanying
Abstract: To address the poor robustness and low accuracy of workpiece tracking in complex industrial production environments, this paper presents a multi-attention fusion workpiece tracking algorithm based on accurate tracking by overlap maximization (ATOM). The algorithm uses ResNet50 as the backbone network. First, a multi-attention mechanism is incorporated so that the network attends more to the key information of the target workpiece. Second, an attentional feature fusion (AFF) module fuses deep and shallow features to better preserve the semantics and details of the target workpiece, adapting the tracker to the complex and changeable environment of industrial production. Finally, the features of the third and fourth backbone layers are fed into the CSR-DCF classifier; the resulting response maps are fused to obtain a rough location of the target workpiece, and an accurate target box is then obtained through the state estimation network. Experiments show that the Success and Precision of the algorithm on the OTB-2015 dataset reach 67.9% and 85.2%, respectively, and its overall score on the VOT-2018 dataset reaches 0.434, indicating high accuracy and robustness. On target workpiece sequences captured by a CCD industrial camera, the algorithm is further shown to handle the common challenges of workpiece tracking efficiently.
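The AFF idea described in the abstract (an attention gate deciding, per channel, how much shallow detail versus deep semantics survives the fusion) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the learned attention sub-network is stood in for by a hypothetical per-channel weight vector `w`, and the deep feature map is assumed to be already upsampled to the shallow map's resolution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aff_fuse(shallow, deep, w):
    """Attention-weighted fusion of a shallow and a deep feature map,
    both of shape (C, H, W). `w` (shape (C,)) is a stand-in for the
    learned attention sub-network."""
    # Per-channel descriptor of the summed features (global average pooling)
    s = (shallow + deep).mean(axis=(1, 2))      # (C,)
    a = sigmoid(w * s)[:, None, None]           # attention weights in (0, 1)
    # Convex combination: the gate decides, per channel, how much
    # detail (shallow) vs. semantics (deep) survives the fusion
    return a * shallow + (1.0 - a) * deep
```

In the actual network the gate is a small convolutional block trained end to end; the sketch only shows the convex-combination structure of attentional fusion.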
Keywords: deep learning; target tracking; attention mechanism; feature fusion
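The rough-localization step from the abstract (fusing the two classifier response maps and taking the peak) can likewise be sketched. The equal-weight fusion below is a hypothetical choice, since the abstract does not specify the fusion rule:

```python
import numpy as np

def rough_location(resp_layer3, resp_layer4, alpha=0.5):
    """Fuse two classifier response maps of shape (H, W) and return the
    peak coordinate as the rough target position. `alpha` is a
    hypothetical mixing weight standing in for the paper's fusion rule."""
    fused = alpha * resp_layer3 + (1.0 - alpha) * resp_layer4
    row, col = np.unravel_index(np.argmax(fused), fused.shape)
    return int(row), int(col)
```

In the full tracker this rough position only seeds the search; the precise bounding box then comes from ATOM's overlap-maximizing state estimation network.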
This article is indexed in databases including VIP (维普).