AN INTRODUCTION TO MATHEMATICAL OPTIMAL CONTROL THEORY, VERSION 0.1

By Lawrence C. Evans
Department of Mathematics
University of California, Berkeley

Chapter 1: Introduction
Chapter 2: Controllability, bang-bang principle
Chapter 3: Linear time-optimal control
Chapter 4: The Pontryagin Maximum Principle
Chapter 5: Dynamic programming
Chapter 6: Game theory
Chapter 7: Introduction to stochastic control theory
Appendix: Proofs of the Pontryagin Maximum Principle
Exercises
References

PREFACE

These notes build upon a course I taught at the University of Maryland during the fall of 1983. My great thanks go to Martino Bardi, who took careful notes, saved them all these years and recently mailed them to me. Faye Yeager typed up his notes into a first draft of these lectures as they now appear. I have radically modified much of the notation (to be consistent with my other writings), updated the references, added several new examples, and provided a proof of the Pontryagin Maximum Principle. As this is a course for undergraduates, I have dispensed in certain proofs with various measurability and continuity issues, and as compensation have added various critiques as to the lack of total rigor. Scott Armstrong read over the notes and suggested many improvements: thanks. This current version of the notes is not yet complete, but meets, I think, the usual high standards for material posted on the internet. Please email me at evans@ with any corrections or comments.

CHAPTER 1: INTRODUCTION

  The basic problem
  Some examples
  A geometric solution
  Overview

THE BASIC PROBLEM.

DYNAMICS. We open our discussion by considering an ordinary differential equation (ODE) having the form

    $\dot{x}(t) = f(x(t)) \quad (t > 0)$,
    $x(0) = x^0$.
We are here given the initial point $x^0 \in \mathbb{R}^n$ and the function $f : \mathbb{R}^n \to \mathbb{R}^n$. The unknown is the curve $x : [0, \infty) \to \mathbb{R}^n$, which we interpret as the dynamical evolution of the state of some system.

CONTROLLED DYNAMICS. We generalize a bit and suppose now that f depends also upon some control parameter.
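As a concrete illustration of the uncontrolled dynamics above, the trajectory $x(\cdot)$ solving $\dot{x}(t) = f(x(t))$, $x(0) = x^0$ can be approximated numerically. The sketch below uses the forward Euler method; the function names and step size are illustrative choices, not part of the notes.

```python
# Forward-Euler sketch of the basic dynamics x'(t) = f(x(t)), x(0) = x0.
# All names and parameters here are illustrative assumptions.

def euler_trajectory(f, x0, dt=0.001, t_final=1.0):
    """Approximate the state trajectory x(.) on [0, t_final]."""
    x = x0
    t = 0.0
    trajectory = [(t, x)]
    while t < t_final:
        x = x + dt * f(x)   # one explicit Euler step
        t += dt
        trajectory.append((t, x))
    return trajectory

# Example: f(x) = -x, whose exact solution is x(t) = x0 * e^{-t}.
traj = euler_trajectory(lambda x: -x, x0=1.0)
print(traj[-1][1])  # close to exp(-1) ~ 0.3679
```

With `dt = 0.001` the final value agrees with the exact solution $e^{-1}$ to about three decimal places; shrinking `dt` improves the approximation at first order.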
