Controlled Diffusion Processes - Krylov N.V. | Springer Book 11/1980 - HOEPLI.it





Controlled Diffusion Processes




Availability: Normally available within 15 days


PRICE
155.98 €
SPECIAL PRICE
132.58 €
DISCOUNT
15%



This product qualifies for FREE SHIPPING
when the Express Courier option is selected at checkout.


Also payable with App18, Bonus Cultura, and Carta Docenti vouchers





Details

Genre: Book
Language: English
Publisher: Springer
Publication date: 11/1980
Edition: 1980





Table of Contents

1 Introduction to the Theory of Controlled Diffusion Processes: 1. The Statement of Problems—Bellman’s Principle—Bellman’s Equation. 2. Examples of the Bellman Equations—The Normed Bellman Equation. 3. Application of Optimal Control Theory—Techniques for Obtaining Some Estimates. 4. One-Dimensional Controlled Processes. 5. Optimal Stopping of a One-Dimensional Controlled Process. Notes.

2 Auxiliary Propositions: 1. Notation and Definitions. 2. Estimates of the Distribution of a Stochastic Integral in a Bounded Region. 3. Estimates of the Distribution of a Stochastic Integral in the Whole Space. 4. Limit Behavior of Some Functions. 5. Solutions of Stochastic Integral Equations and Estimates of the Moments. 6. Existence of a Solution of a Stochastic Equation with Measurable Coefficients. 7. Some Properties of a Random Process Depending on a Parameter. 8. The Dependence of Solutions of a Stochastic Equation on a Parameter. 9. The Markov Property of Solutions of Stochastic Equations. 10. Ito’s Formula with Generalized Derivatives. Notes.

3 General Properties of a Payoff Function: 1. Basic Results. 2. Some Preliminary Considerations. 3. The Proof of Theorems 1.5–1.7. 4. The Proof of Theorems 1.8–1.11 for the Optimal Stopping Problem. Notes.

4 The Bellman Equation: 1. Estimation of First Derivatives of Payoff Functions. 2. Estimation from Below of Second Derivatives of a Payoff Function. 3. Estimation from Above of Second Derivatives of a Payoff Function. 4. Estimation of a Derivative of a Payoff Function with Respect to t. 5. Passage to the Limit in the Bellman Equation. 6. The Approximation of Degenerate Controlled Processes by Nondegenerate Ones. 7. The Bellman Equation. Notes.

5 The Construction of ε-Optimal Strategies: 1. ε-Optimal Markov Strategies and the Bellman Equation. 2. ε-Optimal Markov Strategies: The Bellman Equation in the Presence of Degeneracy. 3. The Payoff Function and Solution of the Bellman Equation: The Uniqueness of the Solution of the Bellman Equation. Notes.

6 Controlled Processes with Unbounded Coefficients: The Normed Bellman Equation: 1. Generalization of the Results Obtained in Section 3.1. 2. General Methods for Estimating Derivatives of Payoff Functions. 3. The Normed Bellman Equation. 4. The Optimal Stopping of a Controlled Process on an Infinite Interval of Time. 5. Control on an Infinite Interval of Time. Notes.

Appendices: 1. Some Properties of Stochastic Integrals. 2. Some Properties of Submartingales.




Description

Stochastic control theory is a relatively young branch of mathematics. The beginning of its intensive development falls in the late 1950s and early 1960s. During that period an extensive literature appeared on optimal stochastic control using the quadratic performance criterion (see references in Wonham [76]). At the same time, Girsanov [25] and Howard [26] made the first steps in constructing a general theory, based on Bellman's technique of dynamic programming, developed by him somewhat earlier [4]. Two types of engineering problems engendered two different parts of stochastic control theory. Problems of the first type are associated with multistep decision making in discrete time, and are treated in the theory of discrete stochastic dynamic programming. For more on this theory, we note, in addition to the work of Howard and Bellman mentioned above, the books by Derman [8], Mine and Osaki [55], and Dynkin and Yushkevich [12]. Another class of engineering problems which encouraged the development of the theory of stochastic control involves time-continuous control of a dynamic system in the presence of random noise. The case where the system is described by a differential equation and the noise is modeled as a time-continuous random process is the core of the optimal control theory of diffusion processes. This book deals with this latter theory.
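The controlled-diffusion setting described above can be illustrated with a minimal simulation sketch. Everything here is an illustrative assumption, not the book's construction: a one-dimensional process dX = u(X) dt + σ dW is discretized by the Euler–Maruyama scheme, and the total cost of two Markov strategies under a simple quadratic criterion (control effort plus terminal deviation) is computed.

```python
import random

def total_cost(control, x0=1.0, T=1.0, n=1000, sigma=0.3, seed=0):
    """Euler-Maruyama simulation of the controlled diffusion
        dX_t = control(X_t) dt + sigma dW_t,  X_0 = x0,
    with the illustrative quadratic criterion
        cost = integral_0^T control(X_t)^2 dt + X_T^2.
    """
    rng = random.Random(seed)
    dt = T / n
    x, cost = x0, 0.0
    for _ in range(n):
        u = control(x)
        cost += u * u * dt                        # running cost of control effort
        x += u * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
    return cost + x * x                           # add the terminal cost

# Two Markov strategies: apply no control, or push linearly toward 0.
idle = lambda x: 0.0
linear_feedback = lambda x: -x

print(total_cost(idle))             # cost of doing nothing
print(total_cost(linear_feedback))  # cost of the feedback strategy
```

Finding the strategy that minimizes such a cost functional, rather than merely comparing a few candidates, is precisely the problem the Bellman equation addresses.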







Other Information

ISBN: 9780387904610
Condition: New
Series: Stochastic Modelling and Applied Probability
Dimensions: 235 × 155 mm, weight 670 g
Format: Hardcover
Arabic-numbered pages: 308
Roman-numbered pages: xii
Translator: Aries, A.B.





