Hardware-in-the-Loop-Based Function Development for Emission-Optimized Control of Vehicle Powertrains Using Reinforcement Learning
To achieve global climate targets and reduce local air pollutants, the transport sector must make significant contributions. Modern drives are evolving into electrified, networked, software-intensive overall systems with a large number of degrees of freedom. Innovations that minimize emissions and increase efficiency are increasingly enabled by software alone. The effort required to develop, validate, and calibrate these algorithms is growing exponentially, while the available development time and resources are shrinking. The HELENE project demonstrates a novel approach to hardware-in-the-loop (HiL) based machine learning: reinforcement learning is used to train specific powertrain functions in a realistic, heterogeneous virtual-real simulation scenario. By integrating hardware and encapsulated software components with detailed physical path simulations, an environment is created for the first time that can capture detailed effects as well as interactions between the relevant powertrain components. In reinforcement learning, interaction with an only partially observable and stochastic environment plays a crucial role. Since complex interactions between software and hardware components are involved that cannot all be simulated at the required level of detail, a heterogeneous HiL-based approach is pursued, which opens up a wide range of applications in research and development. This HiL platform is suited to demonstrating the potential of methods such as reinforcement learning in a variety of scenarios in a resource-saving, cost-efficient, and realistic way. The project aims both to establish a methodology for integrating this approach into the development process and to investigate an exemplary use case in detail.
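The core interaction pattern described above — an agent repeatedly acting in a stochastic environment and updating its policy from observed rewards — can be illustrated with a minimal sketch. This is not the HELENE setup: the toy environment below merely stands in for the HiL plant, and all names and parameters are hypothetical. It shows tabular Q-learning with an epsilon-greedy policy, the simplest instance of the reinforcement-learning loop the abstract refers to.

```python
import random


class ToyEnv:
    """Hypothetical stand-in for the HiL plant: 2 states, 2 actions.

    Rewards are stochastic: choosing the action that matches the current
    state yields +1 with probability 0.9 (otherwise -1), and vice versa.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = self.rng.randrange(2)

    def step(self, action):
        p_good = 0.9 if action == self.state else 0.1
        reward = 1.0 if self.rng.random() < p_good else -1.0
        self.state = self.rng.randrange(2)  # next state drawn independently
        return self.state, reward


def q_learning(steps=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    env = ToyEnv(seed)
    rng = random.Random(seed + 1)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[state][action]
    s = env.state
    for _ in range(steps):
        # Explore with probability eps, otherwise act greedily.
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = max((0, 1), key=lambda x: q[s][x])
        s_next, r = env.step(a)
        # Temporal-difference update toward the bootstrapped target.
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
        s = s_next
    return q


q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in (0, 1)]
```

In a HiL setting, `ToyEnv.step` would be replaced by actuating the real or simulated powertrain components and reading back sensor values, while the learning update itself is unchanged.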