Heuristic Search and Deep Learning
The development of transient control functions is a major engineering effort, especially for highly complex, strongly non-linear systems such as a combustion engine. The large number of independent parameters that must be considered further complicates the optimization, so methodical approaches are a useful complement to pure domain expertise.
In the project "Heuristic Search and Deep Learning", a Deep Reinforcement Learning (RL) agent learns these transient control strategies with respect to predefined target variables such as minimal fuel consumption and good drivability. In general, this is achieved through the interaction of the RL agent with its environment. The resulting experience, in the form of actions, states, and rewards, enables the agent to learn a strategy that maximizes the cumulative reward along the trajectories. The environment used in this project is a 1D flow simulation model of a supercharged 3-cylinder gasoline engine that sufficiently captures the basic physical behavior; GT-Suite was used as the simulation software. The framework developed for this purpose allows fully automated training of the Deep RL agent on arbitrary physical models. An overview of the framework coupled with the GT model is shown in the following figure.
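The agent-environment loop described above can be sketched in a few lines. The following is a minimal, self-contained illustration, not the project's actual framework: `EngineEnvSketch` is a hypothetical toy stand-in for the GT-Suite engine model (the real framework exchanges actions and states with the 1D flow simulation), and the reward is a placeholder that penalizes state deviation and control effort, loosely mirroring the fuel-consumption and drivability targets.

```python
import random

class EngineEnvSketch:
    """Hypothetical stand-in for the coupled GT-Suite engine model.

    The real framework would send control actions to the 1D flow
    simulation and read back the resulting engine states.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.t = 0
        # illustrative 2D state, e.g. tracking errors of two target variables
        self.state = [0.0, 0.0]

    def reset(self):
        self.t = 0
        self.state = [self.rng.uniform(-1, 1), self.rng.uniform(-1, 1)]
        return list(self.state)

    def step(self, action):
        # toy dynamics: each control action drives its error toward zero
        self.state = [s - 0.5 * a for s, a in zip(self.state, action)]
        self.t += 1
        # placeholder reward: penalize deviation and control effort
        reward = (-sum(s * s for s in self.state)
                  - 0.01 * sum(a * a for a in action))
        done = self.t >= 50  # fixed episode length for the sketch
        return list(self.state), reward, done

def run_episode(env, policy):
    """Roll out one trajectory and return the cumulative reward
    that the RL agent tries to maximize."""
    state = env.reset()
    total, done = 0.0, False
    while not done:
        state, reward, done = env.step(policy(state))
        total += reward
    return total

# naive proportional controller as a placeholder for the learned DRL policy
prop_policy = lambda s: list(s)
```

In the actual project, `run_episode` would be driven by the DRL training loop, which updates the policy from the collected (state, action, reward) tuples; the sketch only shows the interaction pattern the framework automates.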
The complexity of the control task handled by the agent will be increased successively over the course of the project, making the approach transferable to other optimal control problems with multi-dimensional, partly interacting control signals.
By validating the developed methodology, Heuristic Search and Deep Learning contributes to more efficient development and design of control functions for physical systems. Thanks to the generic character of the framework, it can also be applied to other physical systems to develop optimized control strategies.