
Optimal Control Theory by Sethi, Suresh P. (eBook)

  • Publisher: Springer-Verlag
eBook (PDF)
€62.95
incl. statutory VAT
Immediately available via download

Available online

Optimal Control Theory

Optimal control methods are used to determine optimal ways to control a dynamic system. The theoretical work in this field serves as the foundation for the book, in which the authors apply the theory to business management problems developed from their research and classroom instruction. Sethi and Thompson have provided the management science and economics communities with a thoroughly revised edition of their classic text on optimal control theory. The new edition has been completely refined, with careful attention to the presentation of the text and graphic material. Chapters cover a range of topics, including finance, production and inventory problems, marketing problems, machine maintenance and replacement, optimal consumption of natural resources, and applications of control theory to economics. The book contains new results that were not available when the first edition was published, as well as expanded material on stochastic optimal control theory.


    Format: PDF
    Copy protection: Adobe DRM
    Pages: 504
    Language: English
    ISBN: 9780387299037
    Publisher: Springer-Verlag
    Size: 18963 kBytes

Optimal Control Theory

Chapter 13 Stochastic Optimal Control (p.341)

In previous chapters we assumed that the state variables of the system were known with certainty. If this were not the case, the state of the system over time would be a stochastic process. In addition, it might not be possible to measure the value of the state variables at time t. In this case, one would have to measure functions of the state variables. Moreover, the measurements are usually noisy, i.e., they are subject to errors. Thus, a decision maker is faced with the problem of making good estimates of these state variables from noisy measurements on functions of them.

The process of estimating the values of the state variables is called optimal filtering. In Section 13.1, we will discuss one particular filter, called the Kalman filter, and its continuous-time analogue called the Kalman-Bucy filter. It should be noted that while optimal filtering provides optimal estimates of the value of the state variables from noisy measurements of related quantities, no control is involved.
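As an illustrative sketch of the filtering idea (not the book's derivation), a scalar discrete-time Kalman filter can be written in a few lines. The model parameters a, c, q, r below are hypothetical placeholders for a linear state equation with noisy measurements:

```python
import numpy as np

def kalman_filter_1d(z, a, c, q, r, x0, p0):
    """Scalar discrete-time Kalman filter (illustrative sketch).
    Assumed model (parameters are hypothetical, not from the book):
        x_{t+1} = a * x_t + w_t,  w_t ~ N(0, q)   (state equation)
        z_t     = c * x_t + v_t,  v_t ~ N(0, r)   (noisy measurement)
    Returns the sequence of filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for zt in z:
        # predict: propagate the estimate and its error variance
        x = a * x
        p = a * p * a + q
        # update: blend the prediction with the new measurement
        k = p * c / (c * p * c + r)   # Kalman gain
        x = x + k * (zt - c * x)
        p = (1 - k * c) * p
        estimates.append(x)
    return np.array(estimates)
```

On simulated data, the filtered estimates track the true state with a smaller mean-squared error than the raw measurements themselves, which is the point of optimal filtering.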

When a control is involved, we are faced with a stochastic optimal control problem. Here, the state of the system is represented by a controlled stochastic process. In Section 13.2, we shall formulate a stochastic optimal control problem which is governed by stochastic differential equations. We shall only consider stochastic differential equations of a type known as Ito equations. These equations arise when the state equations, such as those we have seen in the previous chapters, are perturbed by Markov diffusion processes. Our goal in Section 13.2 will be to synthesize optimal feedback controls for systems subject to Ito equations in a way that maximizes the expected value of a given objective function. In Section 13.3, we shall extend the production planning model of Chapter 6 to allow for some uncertain disturbances. We shall obtain an optimal production policy for the stochastic production planning problem thus formulated.

In Section 13.4, we solve an optimal stochastic advertising problem explicitly. The problem is a modification as well as a stochastic extension of the optimal control problem of the Vidale-Wolfe advertising model treated in Section 7.2.4.

In Section 13.5, we will introduce investment decisions in the consumption model of Example 1.3. We will consider both risk-free and risky investments. Our goal will be to find optimal consumption and investment policies in order to maximize the discounted value of the utility of consumption over time. In Section 13.6, we shall conclude the chapter by mentioning other types of stochastic optimal control problems that arise in practice. In particular, production planning problems in which production is done by machines that are unreliable or failure-prone can be formulated as stochastic optimal control problems involving jump Markov processes. Such problems are treated in Sethi and Zhang (1994a, 1994c). Karatzas and Shreve (1998) address stochastic optimal control problems in finance involving more general stochastic processes including jump processes.
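Consumption-investment problems of this kind are closely related to the classical Merton model. As a hedged illustration (this is the standard Merton result for CRRA utility, not necessarily the formulation used in Section 13.5), the optimal constant fraction of wealth to hold in the risky asset has a simple closed form; the parameter values in the example below are hypothetical:

```python
def merton_fraction(mu, r, sigma, gamma):
    """Optimal fraction of wealth invested in the risky asset for the
    classical Merton consumption-investment problem with CRRA utility:
        pi* = (mu - r) / (gamma * sigma**2)
    mu:    expected return of the risky asset
    r:     risk-free rate
    sigma: volatility of the risky asset
    gamma: coefficient of relative risk aversion (> 0)"""
    return (mu - r) / (gamma * sigma ** 2)
```

For instance, with mu = 0.08, r = 0.02, sigma = 0.2, and gamma = 3, the rule allocates half of wealth to the risky asset; the fraction is constant over time because the problem's structure (constant coefficients, CRRA utility) makes the optimal feedback policy proportional to wealth.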
