Nighttime environment perception of driverless vehicles based on improved YOLOv3 network
Citation: PEI Jiaxin, SUN Shaoyuan, WANG Yulan, LI Dawei, HUANG Rong. Nighttime environment perception of driverless vehicles based on improved YOLOv3 network[J]. Journal of Applied Optics, 2019, 40(3): 380-386. DOI: 10.5768/JAO201940.0301004
Authors: PEI Jiaxin  SUN Shaoyuan  WANG Yulan  LI Dawei  HUANG Rong
Affiliation: 1. College of Information Science and Technology, Donghua University, Shanghai 201620, China; 2. Engineering Research Center of Digitized Textile & Fashion Technology (Ministry of Education), Donghua University, Shanghai 201620, China
Funding: Basic Research Project of the Shanghai Science and Technology Commission (15JC1400600); Young Scientists Fund of the National Natural Science Foundation of China (61603089); Shanghai Sailing Program for Young Science and Technology Talents (16YF1400100)
Abstract: Environment perception is a key task for driverless vehicles operating at night. An improved YOLOv3 network is proposed to detect pedestrians and vehicles in the infrared images captured by a driverless vehicle at night. The problem of judging the driving direction of surrounding vehicles is recast as regressing the angle of each detected vehicle's position, and this result is fused with depth-estimation information to judge the distance and speed of the surrounding vehicles, so that the driverless vehicle can perceive their driving intention at night. The network is end-to-end: the whole image is taken as input, and the output layer directly regresses the bounding-box positions, classes, and angle predictions of the detected targets, which are then fused with the depth-estimation information to obtain the distance and speed of surrounding vehicles. Experimental results show that the improved YOLOv3 network detects targets in nighttime infrared images at 0.04 s per frame, the angle and speed predictions are accurate, and the accuracy and real-time performance meet the requirements of practical applications.

Keywords: infrared image  target detection  YOLOv3 network  angle prediction  depth estimation
Received: 2018-11-26
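The abstract describes two technical ideas: regressing an extra heading angle for each detection at the YOLOv3 output layer, and fusing detections with per-pixel depth estimates to track the distance and speed of a surrounding vehicle. The paper's record here does not include its implementation, so the sketch below is purely illustrative: the Detection container, the sigmoid angle encoding, the median-depth readout, the estimate_motion helper, and the 0.04 s frame interval are all assumptions, not the authors' code.

```python
# Minimal sketch (assumed, not the authors' implementation) of angle-augmented
# detections fused with a depth map to estimate a vehicle's distance and speed.
import math
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    box: tuple          # (x_center, y_center, width, height) in pixels
    cls: str            # predicted class, e.g. "vehicle" or "pedestrian"
    score: float        # detection confidence
    angle_rad: float    # regressed heading angle of the detected vehicle


def decode_angle(raw_logit: float) -> float:
    # One plausible encoding: squash the raw output with a sigmoid, map to [-pi, pi].
    return (2.0 / (1.0 + math.exp(-raw_logit)) - 1.0) * math.pi


def median_depth(depth_map: np.ndarray, det: Detection) -> float:
    # Robust distance readout: median depth (metres) inside the detection box.
    x, y, w, h = det.box
    x0, x1 = max(int(x - w / 2), 0), int(x + w / 2)
    y0, y1 = max(int(y - h / 2), 0), int(y + h / 2)
    return float(np.median(depth_map[y0:y1, x0:x1]))


def estimate_motion(depth_t0, depth_t1, det_t0, det_t1, dt=0.04):
    # Fuse depth with detections from two consecutive frames (dt seconds apart;
    # 0.04 s/frame matches the processing time reported in the abstract).
    d0 = median_depth(depth_t0, det_t0)
    d1 = median_depth(depth_t1, det_t1)
    closing_speed = (d0 - d1) / dt   # > 0 means the vehicle is getting closer
    return d1, closing_speed


if __name__ == "__main__":
    # Toy depth maps: the tracked vehicle moves from ~20.0 m to ~19.9 m in one frame.
    depth_a = np.full((416, 416), 20.0)
    depth_b = np.full((416, 416), 19.9)
    det_a = Detection((200, 200, 80, 60), "vehicle", 0.92, decode_angle(0.3))
    det_b = Detection((203, 201, 82, 61), "vehicle", 0.91, decode_angle(0.3))
    dist, speed = estimate_motion(depth_a, depth_b, det_a, det_b)
    print(f"distance {dist:.1f} m, closing speed {speed:.2f} m/s, "
          f"heading {math.degrees(det_a.angle_rad):.1f} deg")
```

The median over the detection box is just one robust way to collapse a depth map to a single range; the record does not specify how the paper's fusion is actually performed.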

This article is indexed in the CNKI, 维普 (VIP), and other databases.