1.School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China
2.Chinese Academy of Engineering, Beijing 100088, China
†E-mail: lihq20@mails.tsinghua.edu.cn; caoc15@mails.tsinghua.edu.cn; ydg@tsinghua.edu.cn
‡Corresponding authors
Received: 2022-04-03
Accepted: 2022-08-10
Published in print: 2023-01-0
Li HQ, Huang J, Cao Z, et al., 2023. Stochastic pedestrian avoidance for autonomous vehicles using hybrid reinforcement learning. Frontiers of Information Technology & Electronic Engineering, 24(1):131-140. DOI: 10.1631/FITEE.2200128
Ensuring the safety of pedestrians is essential and challenging when autonomous vehicles are involved. Classical pedestrian avoidance strategies cannot handle uncertainty, and learning-based methods lack performance guarantees. In this paper, we propose a hybrid reinforcement learning (HRL) approach that allows autonomous vehicles to interact safely with pedestrians whose behavior is uncertain. The method integrates a rule-based strategy and a reinforcement learning strategy: the confidence of each strategy is evaluated using data recorded during training, and an activation function selects the policy with higher confidence as the final policy. In this way, the final policy is guaranteed to perform no worse than the rule-based policy. To demonstrate the effectiveness of the proposed method, we validate it in simulation, using an accelerated testing technique to generate pedestrians with stochastic behavior. The results show that the method raises the success rate of pedestrian avoidance in the test scenarios to 98.8%, compared with 94.4% for the baseline method.
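The confidence-based arbitration described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function hybrid_action and the confidence estimators passed to it are hypothetical stand-ins, and the activation function is reduced to a direct comparison of two per-state confidence scores.

```python
def hybrid_action(state, rule_policy, rl_policy, rule_conf, rl_conf):
    """Return the action of whichever policy is more confident in `state`.

    All arguments except `state` are callables. The confidence functions stand
    in for estimates built from data recorded during training; the paper's
    actual activation function may differ.
    """
    if rl_conf(state) > rule_conf(state):
        return rl_policy(state)   # use the learned policy where it is judged reliable
    return rule_policy(state)     # otherwise fall back to the rule-based policy


# Toy usage with placeholder policies and confidence scores (illustration only).
action = hybrid_action(
    state={"gap_to_pedestrian_m": 12.0, "ego_speed_mps": 8.0},
    rule_policy=lambda s: "brake",
    rl_policy=lambda s: "yield_and_creep",
    rule_conf=lambda s: 0.90,
    rl_conf=lambda s: 0.95,
)
print(action)  # -> yield_and_creep
```

Because the learned policy is only activated where its recorded confidence exceeds that of the rule-based policy, the combined policy defaults to the rule-based behavior everywhere else, which is the source of the "no worse than rule-based" guarantee stated in the abstract.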
Keywords: Pedestrian; Hybrid reinforcement learning; Autonomous vehicles; Decision-making