Beijing Advanced Innovation Center for Big Data and Brain Computing, School of Transportation Science and Engineering, Beihang University, Beijing 100191, China
[ "Yunpeng WANG, E-mail: ypwang@buaa.edu.cn" ]
Kunxian ZHENG, E-mail: zhengkunxian@buaa.edu.cn
[ "Daxin TIAN, E-mail: dtian@buaa.edu.cn" ]
[ "Xuting DUAN, E-mail: duanxuting@buaa.edu.cn" ]
Print publication date: 2021-05
Received: 2019-11-20
Revised: 2021-02-03
YUNPENG WANG, KUNXIAN ZHENG, DAXIN TIAN, et al. Pre-training with asynchronous supervised learning for reinforcement learning based autonomous driving[J]. Frontiers of Information Technology & Electronic Engineering, 2021, 22(5): 673-686. DOI: 10.1631/FITEE.1900637.
Rule-based autonomous driving systems may suffer from increased complexity with large-scale inter-coupled rules, so many researchers are exploring learning-based approaches. Reinforcement learning (RL) has been applied in designing autonomous driving systems because of its outstanding performance on a wide variety of sequential control problems. However, poor initial performance is a major challenge to the practical implementation of an RL-based autonomous driving system. RL training requires extensive training data before the model achieves reasonable performance, making an RL-based model inapplicable in a real-world setting, particularly when data are expensive. We propose an asynchronous supervised learning (ASL) method for the RL-based end-to-end autonomous driving model to address the problem of poor initial performance before training this RL-based model in real-world settings. Specifically, prior knowledge is introduced in the ASL pre-training stage by asynchronously executing multiple supervised learning processes in parallel, on multiple driving demonstration data sets. After pre-training, the model is deployed on a real vehicle to be further trained by RL, so that it adapts to the real environment and continually pushes beyond its performance limit. The proposed pre-training method is evaluated on the race car simulator TORCS (The Open Racing Car Simulator) to verify that it reliably improves the initial performance and convergence speed of an end-to-end autonomous driving model in the RL training stage. In addition, a real-vehicle verification system is built to verify the feasibility of the proposed pre-training method in a real-vehicle deployment. Simulation results show that using a small number of demonstrations during the supervised pre-training stage significantly improves initial performance and convergence speed in the RL training stage.
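To make the pre-training idea concrete, below is a minimal sketch of asynchronous supervised pre-training, assuming a PyTorch, Hogwild-style setup in which several worker processes fit one shared policy network to separate demonstration sets and apply their gradient updates asynchronously. The network architecture, the 29-dimensional state (chosen to resemble common TORCS sensor inputs), the optimizer, and the random stand-in datasets are all illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of ASL pre-training: multiple supervised learning processes
# update one shared policy asynchronously (Hogwild-style). Shapes,
# hyperparameters, and datasets are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.multiprocessing as mp
from torch.utils.data import DataLoader, TensorDataset

class DrivingPolicy(nn.Module):
    """Toy end-to-end policy: sensor features -> steering command."""
    def __init__(self, state_dim=29, action_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh())

    def forward(self, x):
        return self.net(x)

def asl_worker(shared_model, demo_dataset, epochs=5, lr=1e-3):
    # Each worker regresses the *shared* model onto its own
    # demonstration set; updates from different workers interleave
    # asynchronously on the shared parameters.
    loader = DataLoader(demo_dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.SGD(shared_model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for states, actions in loader:
            optimizer.zero_grad()
            loss_fn(shared_model(states), actions).backward()
            optimizer.step()

if __name__ == "__main__":
    model = DrivingPolicy()
    model.share_memory()  # parameters live in shared memory
    # Stand-in demonstration sets; in the paper these would be
    # recorded human driving demonstrations.
    demos = [TensorDataset(torch.randn(1000, 29), torch.randn(1000, 1))
             for _ in range(4)]
    workers = [mp.Process(target=asl_worker, args=(model, d)) for d in demos]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # 'model' now holds pre-trained weights that would initialize
    # the policy for the subsequent RL training stage.
```

After the workers join, the shared weights serve as the initialization for the RL stage, which is what lets the RL-based model start from a reasonable driving policy rather than from scratch.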
Key words: Self-driving; Autonomous vehicles; Reinforcement learning; Supervised learning