An energy management strategy for a super-mild hybrid electric vehicle based on a known model of reinforcement learning (Q1999313)
From MaRDI portal
scientific article; zbMATH DE number 7073724
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | An energy management strategy for a super-mild hybrid electric vehicle based on a known model of reinforcement learning | scientific article; zbMATH DE number 7073724 | |
Statements
An energy management strategy for a super-mild hybrid electric vehicle based on a known model of reinforcement learning (English)
26 June 2019
Summary: A globally optimal control strategy not only requires the driving cycle to be known in advance but is also difficult to implement online because of its large computational burden. As an artificial-intelligence-based control strategy, reinforcement learning (RL) is applied to the energy management of a super-mild hybrid electric vehicle. From time-speed datasets of sample driving cycles, a stochastic model of the driver's power demand is developed. Based on Markov decision process theory, a mathematical model of an RL-based energy management strategy is established, which takes the minimization of the expected cumulative return as its optimization objective. A policy iteration algorithm is adopted to obtain the optimal control policy, which takes the vehicle speed, the driver's power demand, and the state of charge (SOC) as inputs and the engine power as output. A CYC{\_}WVUCITY simulation model is built on a MATLAB/Simulink platform. The results show that, compared with dynamic programming, this method not only adapts to random driving cycles and reduces fuel consumption by 2.4\%, but can also be implemented online because of its small computational burden.
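The summary describes a finite Markov decision process solved by policy iteration, with a discretized state (vehicle speed, driver's power demand, SOC), a discrete engine-power action, and an expected cumulative cost to minimize. The sketch below illustrates that generic algorithm only; the state grid, transition probabilities, and cost matrix are random placeholders, not the paper's calibrated vehicle and driver models.

```python
import numpy as np

# Illustrative dimensions: in the paper the state would be a discretized
# (speed, power demand, SOC) triple and the action a discrete engine power.
n_states = 6
n_actions = 3
gamma = 0.95  # discount factor (assumed; not given in the summary)

rng = np.random.default_rng(0)
# P[a, s, s']: probability of moving from state s to s' under action a
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
# C[s, a]: immediate cost to minimize (e.g. fuel use plus an SOC penalty)
C = rng.random((n_states, n_actions))

def policy_iteration(P, C, gamma):
    """Return a cost-minimizing stationary policy and its value function."""
    n_s, n_a = C.shape
    policy = np.zeros(n_s, dtype=int)
    while True:
        # Policy evaluation: solve the linear system (I - gamma * P_pi) V = C_pi
        P_pi = P[policy, np.arange(n_s)]           # transitions under the policy
        C_pi = C[np.arange(n_s), policy]           # costs under the policy
        V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, C_pi)
        # Policy improvement: greedy (cost-minimizing) action in every state
        Q = C + gamma * np.einsum('asn,n->sa', P, V)
        new_policy = Q.argmin(axis=1)
        if np.array_equal(new_policy, policy):     # stable policy => optimal
            return policy, V
        policy = new_policy

policy, V = policy_iteration(P, C, gamma)
print("policy:", policy)
```

On a finite MDP, policy iteration terminates in finitely many sweeps with a policy that is greedy with respect to its own value function, which is the optimality condition the summary's strategy relies on.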
hybrid electric vehicle
energy management
reinforcement learning