Stochastic Optimization and Discretization

January 06, 2021. P. Carpentier, Master Optimization, Stochastic Optimization 2020-2021.



1. Title slide: Stochastic Optimization and Discretization, January 06, 2021, P. Carpentier.
Three parts: Stochastic Programming: the Scenario Tree Method; Stochastic Optimal Control and Discretization Puzzles; A General Convergence Result.

2. A Change in the Point of View
During the first part of the course, we have studied open-loop stochastic optimization problems, that is, problems in which the decisions correspond to deterministic variables which minimize a cost function defined as an expectation:
\[
\min_{u \in U^{\mathrm{ad}}} \; \mathbb{E}\big[ j(u, W) \big].
\]
We now enter the realm of closed-loop stochastic optimization, that is, the case where on-line information is available to the decision maker. The decisions are thus functions of the information and correspond to random variables:
\[
\min_{U \in \mathcal{U}^{\mathrm{ad}}} \; \mathbb{E}\big[ j(U, W) \big].
\]
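To make the open-loop formulation concrete, here is a minimal numerical sketch, not taken from the slides: the expectation E[j(u, W)] is replaced by a sample average over Monte Carlo draws of W, and the deterministic decision u is found with a standard solver. The cost j and the distribution of W are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative open-loop problem (assumed, not from the lecture):
# j(u, w) = (u - w)^2 + 0.1*u^2, with W ~ N(1, 0.5).
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.5, size=10_000)  # Monte Carlo draws of W

def sample_average_cost(u):
    u = float(u[0])  # scipy passes a 1-d array
    # Sample average approximation of E[j(u, W)].
    return np.mean((u - samples) ** 2) + 0.1 * u ** 2

# The decision is a single deterministic number: no feedback on W is allowed.
result = minimize(sample_average_cost, x0=np.array([0.0]))
print("approximate open-loop decision u*:", result.x[0])
```

The key point of the slide is that u is one number; in the closed-loop setting, the decision becomes a function of the available information and hence a random variable.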

3. Variables and Constraints
The decision variable U is now a random variable and belongs to a functional space 𝒰. A canonical example is 𝒰 = L²(Ω, 𝒜, P; U).
The constraints U ∈ 𝒰^ad on the r.v. U may be of different kinds:
- point-wise constraints dealing with the possible values of U:
  \[ \mathcal{U}^{\mathrm{ad}} = \big\{ U \in \mathcal{U}, \; U(\omega) \in U^{\mathrm{ad}} \;\; \mathbb{P}\text{-a.s.} \big\}, \]
- risk constraints, such as expectation or probability constraints:
  \[ \mathcal{U}^{\mathrm{ad}} = \big\{ U \in \mathcal{U}, \; \mathbb{P}\big( \Theta(U) \le \theta \big) \ge \pi \big\}, \]
- measurability constraints, which express the fact that a given amount of information Y is available to the decision maker:
  \[ \mathcal{U}^{\mathrm{ad}} = \big\{ U \in \mathcal{U}, \; U \text{ measurable w.r.t. } Y \big\}. \]
We will mainly concentrate on measurability constraints.
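As an aside, the three kinds of constraints can be checked empirically on a sampled candidate decision. The sketch below is purely illustrative: the map Θ, the thresholds and the candidate U are assumptions, not taken from the lecture.

```python
import numpy as np

# Empirical check of the three constraint types on a sampled candidate decision.
rng = np.random.default_rng(0)
omega = rng.normal(size=100_000)   # samples of the underlying randomness
Y = np.sign(omega)                 # available information, a function of omega
U = 0.5 * Y                        # candidate decision, built as a function of Y

# Point-wise constraint: U(omega) in U_ad = [-1, 1], P-almost surely.
pointwise_ok = bool(np.all(np.abs(U) <= 1.0))

# Risk (probability) constraint: P(Theta(U) <= theta) >= pi, with Theta(U) = U**2.
theta, pi_level = 0.3, 0.9
risk_ok = bool(np.mean(U ** 2 <= theta) >= pi_level)

# Measurability constraint: U is constant on every level set of Y by
# construction, which is what "U measurable w.r.t. Y" requires here.
print(pointwise_ok, risk_ok)
```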

4. Compact Formulation of a Closed-Loop Problem
Given a probability space (Ω, 𝒜, P), the essential ingredients of a stochastic optimization problem are:
- the noise W: a r.v. with values in a measurable space (𝕎, 𝒲),
- the decision U: a r.v. with values in a measurable space (𝕌, 𝒰),
- the information Y: a r.v. with values in a measurable space (𝕐, 𝒴),
- the cost function: a measurable mapping j : 𝕌 × 𝕎 → ℝ.
The σ-field generated by Y is denoted by B ⊂ 𝒜. With all these elements at hand, the problem is written as follows:
\[
\min_{U \preceq Y} \; \mathbb{E}\big[ j(U, W) \big].
\]
The notation U ⪯ Y (or equivalently U ⪯ B) is used to express that the r.v. U is measurable w.r.t. the σ-field generated by Y.

5. Representation of Measurability Constraints
Consider the information structure of the stochastic optimization problem in compact form, that is, the measurability constraint U ⪯ Y. This information structure may be interpreted in different ways.
- From the functional point of view, using Doob's Theorem, the decision U is expressed as a measurable function of Y: U = φ(Y). In this setting, the decision variable becomes the function φ.
- From the algebraic point of view, the constraint is expressed in terms of σ-fields, that is, σ(U) ⊂ σ(Y).
Question: how to take this last representation into account?
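The functional point of view lends itself to a direct numerical illustration. In the following sketch (assumed data, not from the slides), the measurability constraint U ⪯ Y is enforced by writing U = φ(Y) and optimizing over φ, here restricted to the affine class φ(y) = a + b·y for tractability.

```python
import numpy as np
from scipy.optimize import minimize

# Toy closed-loop problem (assumed): W = (W1, W2) standard Gaussian,
# the information is Y = W1, and j(u, w) = (u - (w1 + w2))**2.
rng = np.random.default_rng(0)
w1 = rng.normal(size=20_000)
w2 = rng.normal(size=20_000)
y = w1                                  # static information structure: Y = h(W)

def expected_cost(params):
    a, b = params
    u = a + b * y                       # Doob: the decision is a function of Y only
    return np.mean((u - (w1 + w2)) ** 2)

# Restricting phi to the affine class is an approximation of the full space
# of measurable functions of Y.
result = minimize(expected_cost, x0=[0.0, 0.0])
print("best affine policy phi(y) = a + b*y:", result.x)  # expect a ~ 0, b ~ 1
```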

6. Dynamic Information Structure (DIS)
This is the situation when B = σ(Y) depends on U. For example, in the case where Y = h(U, W), the constraint expression is U ⪯ h(U, W), which yields a (seemingly) implicit measurability constraint. This is a source of huge complexity for stochastic optimization problems, known under the name of the dual effect of control. Indeed, the decision maker has to take care of the following double effect:
- on the one hand, the decision affects the cost E[j(U, W)],
- on the other hand, it makes the information more or less constrained, that is, a smaller or larger admissible set for U.

7. Static Information Structure (SIS)
This is the case when B = σ(Y) is fixed, defined independently of U. The terminology "static" therefore expresses that the information σ-field B constraining the decision U cannot be modified by the decision maker. It does not imply that no dynamics is present in the problem formulation. [12]
The situation where the information Y is a function of an exogenous noise W, that is, Y = h(W), always induces a static information structure. Note that it may happen that Y functionally depends on U whereas the σ-field B generated by Y remains fixed.
Footnote 12: If time is involved in the problem, at each time t, a decision U_t is taken based on the available information Y_t, inducing a measurability constraint U_t ⪯ Y_t. But the issue of dynamic information depends on the dependency of Y_t w.r.t. the controls, and not on the presence of time t in the problem.

8. Position of the Problem...
We want to solve a closed-loop stochastic optimization problem, that is, a problem such that the decision variable U is a random variable which satisfies measurability conditions imposed by the information structure defined by the random variable Y.
We assume that the problem is dual-effect free, that is, the σ-field generated by the information variable Y does not depend on the control variable U (static information structure).
We manipulate the measurability conditions from the algebraic point of view, that is, σ(U) ⊂ σ(Y) = B.
In order to numerically solve the optimization problem, we need to approximate the problem by a finite representation of it.

9. ...and Problem under Consideration
The standard form of the problem we are interested in is
\[
V(W, \mathcal{B}) = \min_{U \in \mathcal{U}} \; \mathbb{E}\big[ j(U, W) \big]
\quad \text{subject to} \quad U \text{ is } \mathcal{B}\text{-measurable},
\]
where B = σ(Y) is a fixed σ-field. In order to obtain a numerically tractable approximation of this problem, we have to approximate:
- the noise W by a "finite" noise W^n (Monte Carlo, ...),
- the σ-field B by a "finite" σ-field B^n (partition, ...).
Question: does V(W^n, B^n) converge to V(W, B)?
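The following sketch shows, on assumed toy data reusing the earlier example, what the two approximations amount to in practice: W is replaced by n Monte Carlo samples, and B is replaced by the σ-field B^n generated by a finite partition of the range of Y. A B^n-measurable decision is constant on each cell of the partition, so the discretized problem splits into one small problem per cell.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed toy data: information Y = W1, cost j(u, w) = (u - (w1 + w2))**2.
rng = np.random.default_rng(1)
n = 50_000                            # "finite" noise W^n: n Monte Carlo samples
w1 = rng.normal(size=n)
w2 = rng.normal(size=n)
y, target = w1, w1 + w2

edges = np.linspace(-3.0, 3.0, 11)    # finite partition of the range of Y
cell = np.digitize(y, edges)          # B^n: the sigma-field generated by the cells

value = 0.0
for c in np.unique(cell):
    idx = cell == c
    # A B^n-measurable decision takes one constant value per cell, so the
    # inner minimization is solved cell by cell.
    res = minimize_scalar(lambda u: np.mean((u - target[idx]) ** 2))
    value += np.mean(idx) * res.fun   # weight by the empirical cell probability

print("approximate value V(W^n, B^n):", value)
```

Whether such approximate values converge to V(W, B) as the sample grows and the partition is refined is precisely the question addressed by the convergence result in the third part of the lecture.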

10. A Specific Instance of the Problem
A specific instance of the problem is the one which incorporates dynamical systems, that is, the stochastic optimal control problem:
\[
\min_{(U_0, \dots, U_{T-1},\, X_0, \dots, X_T)} \;
\mathbb{E}\Big[ \sum_{t=0}^{T-1} L_t(X_t, U_t, W_{t+1}) + K(X_T) \Big]
\]
subject to
\[
X_0 = f_{-1}(W_0), \qquad
X_{t+1} = f_t(X_t, U_t, W_{t+1}), \quad t = 0, \dots, T-1, \qquad
U_t \preceq Y_t, \quad t = 0, \dots, T-1.
\]
Assuming that the σ(Y_t) are fixed σ-fields, a widely used approach to discretize this optimization problem is the so-called scenario tree method. We present it before considering the general case.
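Before the method itself is presented, here is a minimal, assumed sketch of the object it manipulates: a tree whose nodes carry a noise realization, a branch probability and one decision per node. Attaching a single decision U_t to each node enforces the measurability constraint U_t ⪯ Y_t by construction (non-anticipativity). The names and the recursion below are illustrative, not the lecture's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class TreeNode:
    # One node of a scenario tree (illustrative sketch).
    noise: float                   # realization of W_{t+1} on the incoming branch
    prob: float                    # conditional probability of that branch
    decision: float = 0.0          # U_t: a single value attached to the node
    children: list["TreeNode"] = field(default_factory=list)

def expected_cost(node, x, t, T, f, L, K):
    """Evaluate E[ sum_t L_t(X_t, U_t, W_{t+1}) + K(X_T) ] over the tree,
    for the decisions currently stored at the nodes."""
    if t == T:
        return K(x)
    total = 0.0
    for child in node.children:
        x_next = f(x, node.decision, child.noise)   # X_{t+1} = f_t(X_t, U_t, W_{t+1})
        total += child.prob * (L(x, node.decision, child.noise)
                               + expected_cost(child, x_next, t + 1, T, f, L, K))
    return total
```

Optimizing the decisions stored at the nodes with any nonlinear programming solver then yields a tree-indexed policy; how the tree itself is built from the noise process is the subject of the next slides.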

11. Lecture Outline
1. Stochastic Programming: the Scenario Tree Method
   - Scenario Tree Method Overview
   - Some Details about the Method
2. Stochastic Optimal Control and Discretization Puzzles
   - Working out an Example
   - Naive Monte Carlo-Based Discretization
   - Scenario Tree-Based Discretization
   - A Constructive Proposal
3. A General Convergence Result
   - Convergence of Random Variables
   - Convergence of σ-Fields
   - The Long-Awaited Convergence Theorem

12. Part 1: Stochastic Programming, the Scenario Tree Method
   - Scenario Tree Method Overview
   - Some Details about the Method
