A Recurrence Relation of Dynamic Programming
The minimum cost of completing the process from any given stage can be expressed in terms of the state at that stage alone; it does not depend on how that state was reached. This observation leads to the embedding principle of dynamic programming: the original problem is embedded in a family of subproblems, one for each possible state and stage, and these subproblems are linked by a recurrence relation.
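As a sketch of the recurrence this refers to (the symbols here are the conventional ones, assumed rather than defined above: $x_k$ is the state at stage $k$, $u_k$ the control applied there, $g$ the cost of a single stage, $f$ the state transition, and $h$ a terminal cost), the minimum cost-to-go $J_k^*$ satisfies

$$
J_k^*(x_k) = \min_{u_k} \Big[\, g(x_k, u_k) + J_{k+1}^*\big(f(x_k, u_k)\big) \Big],
\qquad J_N^*(x_N) = h(x_N),
$$

with the minimization taken over the admissible controls at stage $k$. Solving these subproblems backward from the final stage yields the minimum cost of the original problem.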
The optimal control problem thus boils down to finding an admissible control that causes the system to follow an admissible trajectory that minimizes the performance measure. We discuss the use of dynamic programming for this kind of optimization.
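To make the dynamic-programming approach concrete, here is a minimal sketch in Python of backward recursion on a toy discrete-time problem; the dynamics, costs, grids, and all names below are illustrative assumptions of mine, not part of the problem above.

```python
# Minimal backward dynamic-programming sketch for a toy discrete-time problem.
# The dynamics, costs, and grids below are illustrative assumptions only.

import math

N = 10                                          # number of stages
states = [i / 10.0 for i in range(-20, 21)]     # discretized state grid
controls = [-1.0, -0.5, 0.0, 0.5, 1.0]          # admissible control values


def transition(x, u):
    """Toy dynamics: next state, clipped back onto the grid range."""
    return max(-2.0, min(2.0, 0.9 * x + 0.1 * u))


def stage_cost(x, u):
    """Quadratic running cost penalizing state deviation and control effort."""
    return x * x + 0.1 * u * u


def terminal_cost(x):
    """Cost charged on the final state."""
    return 5.0 * x * x


def nearest(x):
    """Index of the grid point closest to x (crude stand-in for interpolation)."""
    return min(range(len(states)), key=lambda i: abs(states[i] - x))


# cost_to_go[k][i] = minimum cost from stage k starting at states[i]
cost_to_go = [[0.0] * len(states) for _ in range(N + 1)]
policy = [[0.0] * len(states) for _ in range(N)]

for i, x in enumerate(states):
    cost_to_go[N][i] = terminal_cost(x)

# Backward recursion: J_k(x) = min_u [ g(x, u) + J_{k+1}(f(x, u)) ]
for k in range(N - 1, -1, -1):
    for i, x in enumerate(states):
        best_cost, best_u = math.inf, None
        for u in controls:
            c = stage_cost(x, u) + cost_to_go[k + 1][nearest(transition(x, u))]
            if c < best_cost:
                best_cost, best_u = c, u
        cost_to_go[k][i] = best_cost
        policy[k][i] = best_u

# Recover an optimal trajectory from an initial state of 1.5
x = 1.5
for k in range(N):
    u = policy[k][nearest(x)]
    print(f"stage {k}: x = {x:+.3f}, u = {u:+.1f}")
    x = transition(x, u)
```

The nearest-grid-point lookup is a crude stand-in for interpolating the cost-to-go between grid points; a more careful implementation would interpolate or refine the grids.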
Not all conceivable values of the state and control vectors are allowable. For example, in a control system for driving a car, speed limits must be obeyed, the car must not run out of gasoline, and, even more fundamentally, the car must stay on the road!
To model a system, we want “the simplest mathematical description that adequately predicts the response of the system to all anticipated inputs.”
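In the usual state-variable form (this notation is a common convention, not something fixed by the quote above), such a description is a set of first-order differential equations

$$
\dot{\mathbf{x}}(t) = \mathbf{a}\big(\mathbf{x}(t), \mathbf{u}(t), t\big),
$$

where $\mathbf{x}(t)$ is the state vector and $\mathbf{u}(t)$ is the control vector.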
The objective of optimal control theory is to determine the control signals that will cause a process to satisfy the physical constraints and at the same time minimize (or maximize) some performance criterion.
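One common form for such a performance criterion (again using conventional, assumed notation) is

$$
J = h\big(\mathbf{x}(t_f), t_f\big) + \int_{t_0}^{t_f} g\big(\mathbf{x}(t), \mathbf{u}(t), t\big)\, dt,
$$

where $h$ penalizes the final state at the terminal time $t_f$ and $g$ is a cost accrued along the trajectory; the problem is then to choose an admissible control that minimizes $J$ subject to the state equations and constraints.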
In order to encourage myself to work more regularly on my Master’s thesis, I’ve decided to keep a more-or-less public journal about the effort.