Title: Dynamic programming and optimal control
Author: Bertsekas D.
Abstract:
The first of the two volumes of the leading and most up-to-date textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an introduction to the far-reaching methodology of Neuro-Dynamic Programming. The first volume is oriented towards modeling, conceptualization, and finite-horizon problems, but also includes a substantive introduction to infinite horizon problems that is suitable for classroom use. The second volume is oriented towards mathematical analysis and computation, and treats infinite horizon problems extensively. The text contains many illustrations, worked-out examples, and exercises.
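To give a concrete flavor of the finite-horizon dynamic programming treated in the first volume, below is a minimal sketch of backward induction on a small deterministic shortest-path example. All of the particulars (the node set, arc costs, horizon N, and terminal penalty) are illustrative assumptions, not taken from the book.

```python
# Minimal sketch of finite-horizon dynamic programming (backward induction).
# Hypothetical deterministic shortest-path example: states are nodes 0..3,
# a control moves to a neighbouring node, the stage cost is the arc length.

N = 3                      # horizon (number of stages); illustrative choice
states = [0, 1, 2, 3]
arcs = {                   # arcs[u] = {v: cost of going u -> v}
    0: {1: 1.0, 2: 4.0},
    1: {2: 1.0, 3: 5.0},
    2: {3: 1.0},
    3: {3: 0.0},           # absorbing terminal node
}

def terminal_cost(x):
    # Penalise failing to reach node 3 by the end of the horizon.
    return 0.0 if x == 3 else 100.0

# Backward induction: J[k][x] is the optimal cost-to-go from state x at stage k.
J = [dict() for _ in range(N + 1)]
policy = [dict() for _ in range(N)]
for x in states:
    J[N][x] = terminal_cost(x)

for k in range(N - 1, -1, -1):
    for x in states:
        # Minimise stage cost plus optimal cost-to-go of the successor state.
        best_u, best_cost = min(
            ((u, c + J[k + 1][u]) for u, c in arcs[x].items()),
            key=lambda t: t[1],
        )
        J[k][x] = best_cost
        policy[k][x] = best_u

print("Optimal cost from node 0:", J[0][0])        # 3.0 via the path 0 -> 1 -> 2 -> 3
print("Optimal first move from node 0:", policy[0][0])
```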