  1. Reformulation of chance constrained problems using penalty functions. Martin Branda, Charles University in Prague, Faculty of Mathematics and Physics. EURO XXIV, July 11-14, 2010, Lisbon. Martin Branda (MFF UK), Reformulation of CCP 2010, 1 / 39

  2. Contents: 1. Reformulations of chance constrained problems; 2. Asymptotic equivalence; 3. Sample approximations using Monte-Carlo techniques; 4. Numerical study and comparison

  3. Reformulations of chance constrained problems

  4. Reformulations of chance constrained problems: Optimization problem with uncertainty. In general, we consider the following program with a random factor ω:

    \min \{ f(x) : x \in X,\ g_i(x,\omega) \le 0,\ i = 1,\dots,k \},   (1)

  where f and the g_i are real functions on R^n × R^{n'}, X ⊆ R^n, and ω ∈ R^{n'} is a realization of an n'-dimensional random vector defined on the probability space (Ω, F, P). If P is known, we can use chance constraints to deal with the random constraints.

  5. Reformulations of chance constrained problems: Multiple chance constrained problem.

    \psi_\epsilon = \min_{x \in X} f(x)
    \text{s.t.}\quad P\big(g_{11}(x,\omega) \le 0, \dots, g_{1k_1}(x,\omega) \le 0\big) \ge 1 - \varepsilon_1,   (2)
    \qquad\vdots
    \quad\;\; P\big(g_{m1}(x,\omega) \le 0, \dots, g_{mk_m}(x,\omega) \le 0\big) \ge 1 - \varepsilon_m,

  with optimal solution x_ǫ, where we denoted ǫ = (ε_1,...,ε_m) with levels ε_j ∈ (0,1). The formulation covers the joint (k_1 > 1 and m = 1) as well as the individual (k_j = 1 and m > 1) chance constrained problems as special cases.
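Checking whether a candidate x satisfies a joint chance constraint amounts to evaluating a multivariate probability. A minimal sketch, assuming a hypothetical 2-dimensional standard normal ω and illustrative constraints g_i(x, ω) = ω_i - x (both the distribution and the constraints are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
omegas = rng.standard_normal((10000, 2))  # hypothetical sample of a 2-d random vector

def joint_prob(x, omegas):
    """Monte Carlo estimate of P(g_1(x,w) <= 0, g_2(x,w) <= 0)
    for the illustrative constraints g_i(x, w) = w_i - x."""
    return float(np.mean(np.all(omegas - x <= 0.0, axis=1)))

# x is feasible at level 1 - eps when joint_prob(x, omegas) >= 1 - eps
```

The individual case would instead check each marginal probability P(g_i(x, ω) ≤ 0) against its own level.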

  6. Reformulations of chance constrained problems: Solving CCP. In general, the feasible region is not convex even if the functions g_ji are convex, and even checking feasibility is hard because it requires computing multivariate integrals. Hence, we will try to reformulate the chance constrained problem using penalty functions.

  7. Reformulations of chance constrained problems: Penalty functions. Consider penalty functions \vartheta_j : R^{k_j} \to R_+, j = 1,...,m, continuous, nondecreasing, equal to 0 on R^{k_j}_- and positive otherwise, e.g.

    \vartheta_{1,p}(u) = \sum_{i=1}^{k} ([u_i]_+)^p, \quad p \in \mathbb{N},
    \vartheta_2(u) = \max_{1 \le i \le k} [u_i]_+ = \min\{ t \ge 0 : u_i - t \le 0,\ i = 1,\dots,k \},

  where u ∈ R^k.
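The two penalty functions above translate directly into code; a minimal sketch (the function names are mine):

```python
import numpy as np

def theta_1p(u, p=1):
    # theta_{1,p}(u) = sum_i ([u_i]_+)^p : sum of p-th powers of positive parts
    return float(np.sum(np.maximum(u, 0.0) ** p))

def theta_2(u):
    # theta_2(u) = max_i [u_i]_+ : largest constraint violation (0 if none)
    return float(np.max(np.maximum(u, 0.0)))
```

Both vanish exactly when every component of u is nonpositive, i.e. when all constraints in the block hold, and are positive otherwise, as required.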

  8. Reformulations of chance constrained problems: Penalty function problem. Let p_j denote the penalized constraints

    p_j(x,\omega) = \vartheta_j\big(g_{j1}(x,\omega), \dots, g_{jk_j}(x,\omega)\big), \quad \forall j.

  Then the penalty function problem is formulated as follows:

    \varphi_N = \min_{x \in X} \Big\{ f(x) + N \cdot \sum_{j=1}^{m} E[p_j(x,\omega)] \Big\}   (3)

  with an optimal solution x_N. This reformulation appears in Y.M. Ermoliev et al. (2000) for ϑ_{1,1} and m = 1.
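The expectation in (3) can be estimated by a sample average over scenarios. A minimal sketch with hypothetical arguments (the minimization over x itself would be handled by an outer solver, which is out of scope here):

```python
import numpy as np

def penalized_objective(f, constraint_blocks, theta, x, omegas, N):
    """Estimate f(x) + N * sum_j E[theta(g_j1(x,w), ..., g_jk_j(x,w))]
    by averaging over the scenario sample `omegas`."""
    penalty = 0.0
    for g_block in constraint_blocks:  # one block of constraints per index j
        vals = [theta(np.array([g(x, w) for g in g_block])) for w in omegas]
        penalty += np.mean(vals)       # sample estimate of E[p_j(x, omega)]
    return f(x) + N * penalty
```

For example, with f(x) = x^2, a single constraint g(x, ω) = ω - x, scenarios {1, 2, 3}, and ϑ the sum of positive parts, the value at x = 2 with N = 3 is 4 + 3 · (1/3) = 5.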

  9. Reformulations of chance constrained problems: Deterministic vs. stochastic penalty method. The deterministic method penalizes infeasibility with respect to the decision vector, cf. M.S. Bazaraa et al. (2006). The stochastic method penalizes violations of the constraints jointly with respect to the decision vector and the random parameter.

  10. Asymptotic equivalence

  11. Asymptotic equivalence: Assumptions (brief). Under the following assumptions, the asymptotic equivalence of the problems can be shown: continuity of constraints and probabilistic functions; compactness of the fixed set of feasible solutions; existence of integrable majorants; existence of a permanently feasible solution. No assumption of linearity or convexity!

  12. Asymptotic equivalence: Assumptions. Assume that X ≠ ∅ is compact, f(x) is a continuous function, and
  (i) g_{ji}(·, ω), i = 1,...,k_j, j = 1,...,m, are almost surely continuous;
  (ii) there exists a nonnegative random variable C(ω) with E[C^{1+κ}(ω)] < ∞ for some κ > 0, such that |p_j(x,ω)| ≤ C(ω), j = 1,...,m, for all x ∈ X;
  (iii) E[p_j(x',ω)] = 0, j = 1,...,m, for some x' ∈ X;
  (iv) P(g_{ji}(x,ω) = 0) = 0, i = 1,...,k_j, j = 1,...,m, for all x ∈ X.

  13. Asymptotic equivalence. Denote η = κ/(2(1+κ)) and, for arbitrary N > 0 and ǫ ∈ (0,1)^m, put

    \varepsilon_j(x) = P\big(p_j(x,\omega) > 0\big), \quad j = 1,\dots,m,
    \alpha_N(x) = N \cdot \sum_{j=1}^{m} E[p_j(x,\omega)],
    \beta_\epsilon(x) = \varepsilon_{\max}^{-\eta} \sum_{j=1}^{m} E[p_j(x,\omega)],

  where ε_max denotes the maximum of the vector ǫ = (ε_1,...,ε_m) and [1/N^{1/η}] = (1/N^{1/η},...,1/N^{1/η}) is the vector of length m. THEN for any prescribed ǫ ∈ (0,1)^m there always exists N large enough so that minimization (3) generates optimal solutions x_N which also satisfy the chance constraints (2) with the given ǫ.

  14. Asymptotic equivalence. Moreover, bounds on the optimal value ψ_ǫ of (2) based on the optimal value ϕ_N of (3), and vice versa, can be constructed:

    \varphi_{1/\varepsilon_{\max}^{\eta}(x_N)} - \beta_{\epsilon(x_N)}\big(x_{\epsilon(x_N)}\big) \le \psi_{\epsilon(x_N)} \le \varphi_N - \alpha_N(x_N),
    \psi_{\epsilon(x_N)} + \alpha_N(x_N) \le \varphi_N \le \psi_{[1/N^{1/\eta}]} + \beta_{[1/N^{1/\eta}]}\big(x_{[1/N^{1/\eta}]}\big),

  with

    \lim_{N \to +\infty} \alpha_N(x_N) = \lim_{N \to +\infty} \varepsilon_j(x_N) = \lim_{\varepsilon_{\max} \to 0^+} \beta_\epsilon(x_\epsilon) = 0

  for any sequences of optimal solutions x_N and x_ǫ.

  15. Sample approximations using Monte-Carlo techniques

  16. Sample approximations using Monte-Carlo techniques. Let ω^1,...,ω^S be an independent Monte Carlo sample of the random vector ω. Then the sample version of the violation probability q_j(x) = P(p_j(x,ω) > 0) is defined to be

    \hat{q}_j^S(x) = S^{-1} \sum_{s=1}^{S} I_{(0,\infty)}\big(p_j(x,\omega^s)\big).   (4)
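The sample counterpart (4) is just an average of indicators: the fraction of scenarios in which the penalty is positive. A minimal sketch (the names are mine):

```python
import numpy as np

def q_hat(p_j, x, omegas):
    # \hat q_j^S(x) = S^{-1} * sum_s 1{ p_j(x, omega^s) > 0 },
    # the fraction of scenarios with a positive penalty at x
    return float(np.mean([p_j(x, w) > 0.0 for w in omegas]))
```

For instance, with p_j(x, ω) = ω - x, x = 2, and scenarios {1, 2, 3}, only ω = 3 yields a positive penalty, so the estimate is 1/3.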

  17. Sample approximations using Monte-Carlo techniques. Finally, the sample version of the multiple jointly chance constrained problem (2) is defined as

    \hat{\psi}_S^{\gamma} = \min_{x \in X} f(x),
    \text{s.t.}\quad \hat{q}_1^S(x) \le \gamma_1, \;\dots,\; \hat{q}_m^S(x) \le \gamma_m,   (5)

  where the levels γ_j are allowed to differ from the original levels ε_j. The sample approximation of the chance constrained problem can be reformulated as a large mixed-integer nonlinear program.
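When X is a small finite set, problem (5) can be solved by brute-force enumeration rather than a mixed-integer formulation; a minimal sketch with all names hypothetical (each entry of p_blocks is a penalty function p_j(x, ω) for one chance constraint):

```python
import numpy as np

def solve_sample_approximation(f, p_blocks, X, omegas, gammas):
    """Brute-force the sample-approximated problem over a finite set X:
    minimize f(x) subject to q_hat_j(x) <= gamma_j for every block j."""
    best_x, best_val = None, float("inf")
    for x in X:
        # empirical violation probability for each chance constraint
        q = [np.mean([p(x, w) > 0.0 for w in omegas]) for p in p_blocks]
        if all(qj <= gj for qj, gj in zip(q, gammas)) and f(x) < best_val:
            best_x, best_val = x, f(x)
    return best_x, best_val
```

The mixed-integer formulation mentioned above replaces this enumeration with binary scenario-violation variables, which scales far better but needs a MINLP solver.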

  18. Sample approximations using Monte-Carlo techniques: Rates of convergence, sample sizes. We restrict attention to the case when the set of feasible solutions is finite, i.e. |X| < ∞, which arises in bounded integer programs, or infinite but bounded. Using a slight modification of the approach of S. Ahmed, J. Luedtke, A. Shapiro, et al. (2008, 2009), we obtain ...

  19. Sample approximations using Monte-Carlo techniques: Lower bound for the chance constrained problem. We assume that γ_j > ε_j for all j. Then we can choose the sample size S so that a feasible solution x of the chance constrained problem is also feasible for the sample approximation with probability at least 1 − δ, i.e.

    S \ge \frac{2 \ln(m/\delta)}{\min_{j \in \{1,\dots,m\}} (\gamma_j - \varepsilon_j)^2 / \varepsilon_j},

  which corresponds to the result of S. Ahmed et al. (2008) for m = 1.
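Assuming the bound reads S ≥ 2 ln(m/δ) / min_j ((γ_j − ε_j)²/ε_j), which is my reading of the garbled slide, it is straightforward to evaluate numerically; a minimal sketch:

```python
import math

def sample_size_lower_bound(eps, gammas, delta):
    """Smallest integer S with S >= 2 ln(m/delta) / min_j (gamma_j - eps_j)^2 / eps_j,
    assuming gamma_j > eps_j for all j."""
    m = len(eps)
    denom = min((g - e) ** 2 / e for e, g in zip(eps, gammas))
    return math.ceil(2.0 * math.log(m / delta) / denom)
```

E.g. a single constraint with ε = 0.05, γ = 0.1, δ = 0.01 gives S = ceil(2 ln 100 / 0.05) = 185 scenarios.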

  20. Sample approximations using Monte-Carlo techniques: Feasibility for finite |X|. We assume that γ_j < ε_j for all j. Then it is possible to estimate the sample size S such that the feasible solutions of the sample-approximated problem are feasible for the original problem, i.e. x ∈ X_ǫ, with high probability 1 − δ:

    S \ge \frac{1}{2 \min_{j \in \{1,\dots,m\}} (\gamma_j - \varepsilon_j)^2} \ln \frac{m\,|X \setminus X_\epsilon|}{\delta}.   (6)

  If we set m = 1, we get the same inequality as J. Luedtke et al. (2008).
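Under my reading of the sample-size condition (6), namely S ≥ ln(m |X \ X_ǫ| / δ) / (2 min_j (γ_j − ε_j)²), the required S can again be computed directly; a minimal sketch (n_infeasible stands for |X \ X_ǫ|, the count of infeasible points, which must be supplied or bounded, e.g. by |X|):

```python
import math

def sample_size_feasibility(eps, gammas, delta, n_infeasible):
    """Smallest integer S with
    S >= ln(m * |X \\ X_eps| / delta) / (2 min_j (gamma_j - eps_j)^2),
    assuming gamma_j < eps_j for all j."""
    m = len(eps)
    denom = 2.0 * min((g - e) ** 2 for e, g in zip(eps, gammas))
    return math.ceil(math.log(m * n_infeasible / delta) / denom)
```

Note the sample size grows only logarithmically in |X \ X_ǫ|, which is what makes the finite-X case tractable.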
