Chance constrained problems: penalty reformulation and performance of sample approximation technique

Martin Branda, Jitka Dupačová
Charles University in Prague, Faculty of Mathematics and Physics

SP XII Conference, August 16-20, 2010, Halifax
Contents

1 Reformulations of chance constrained problems
2 Asymptotic equivalence
3 Sample approximations using Monte-Carlo techniques
4 Numerical study and comparison
1 Reformulations of chance constrained problems
Optimization problem with uncertainty

In general, we consider the following program with a random factor ω:

\[
\min \{ f(x) : x \in X,\ g_i(x, \omega) \le 0,\ i = 1, \dots, k \}, \tag{1}
\]

where g_i, i = 1, ..., k, are real functions on R^n × R^{n'}, X ⊆ R^n, and ω ∈ R^{n'} is a realization of an n'-dimensional random vector defined on the probability space (Ω, F, P).

If P is known, we can use chance constraints to deal with the random constraints...
Multiple chance constrained problem

\[
\begin{aligned}
\psi_\epsilon = \min_{x \in X}\ & f(x)\\
\text{s.t.}\quad & P\big(g_{11}(x,\omega) \le 0, \dots, g_{1k_1}(x,\omega) \le 0\big) \ge 1 - \varepsilon_1,\\
& \quad \vdots\\
& P\big(g_{m1}(x,\omega) \le 0, \dots, g_{mk_m}(x,\omega) \le 0\big) \ge 1 - \varepsilon_m,
\end{aligned}
\]

with optimal solution x_ǫ, where we denoted ǫ = (ε_1, ..., ε_m) with levels ε_j ∈ (0, 1).

The formulation covers the joint (k_1 > 1 and m = 1) as well as the individual (k_j = 1 and m > 1) chance constrained problems as special cases.
Solving chance constrained problems

In general, the feasible region is not convex even if the functions are convex. Moreover, it is not easy even to check feasibility, because this requires the computation of multivariate integrals.

Hence, we will try to reformulate the chance constrained problem using penalty functions.
Consider the penalty functions ϑ_j : R^{k_j} → R_+, j = 1, ..., m, continuous, nondecreasing, equal to 0 on R^{k_j}_− and positive otherwise, e.g. (writing k = k_j and u ∈ R^k)

\[
\vartheta_{1,p}(u) = \sum_{i=1}^{k} \big([u_i]_+\big)^p, \quad p \in \mathbb{N},
\]
\[
\vartheta_{2}(u) = \max_{1 \le i \le k} [u_i]_+ = \min\{\, t \ge 0 : u_i - t \le 0,\ i = 1, \dots, k \,\}.
\]
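For concreteness, a minimal NumPy sketch of the two example penalty functions; the names theta_1p and theta_2 are illustrative and not taken from the slides:

```python
import numpy as np

def theta_1p(u, p=1):
    # sum of p-th powers of the positive parts: sum_i ([u_i]_+)^p
    u = np.asarray(u, dtype=float)
    return float(np.sum(np.maximum(u, 0.0) ** p))

def theta_2(u):
    # largest positive part: max_i [u_i]_+ (zero when all components are nonpositive)
    u = np.asarray(u, dtype=float)
    return float(np.max(np.maximum(u, 0.0)))
```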
Penalty function problem

Let p_j denote the penalized constraints

\[
p_j(x, \omega) = \vartheta_j\big(g_{j1}(x, \omega), \dots, g_{jk_j}(x, \omega)\big), \quad \forall j.
\]

Then the penalty function problem is formulated as follows:

\[
\varphi_N = \min_{x \in X} \Big\{ f(x) + N \cdot \sum_{j=1}^{m} E[p_j(x, \omega)] \Big\}
\]

with an optimal solution x_N. This formulation was used in Y. M. Ermoliev et al. (2000) for ϑ_{1,1} and m = 1.
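A sketch of how the penalty objective could be evaluated in practice, with the expectation replaced by an average over sampled scenarios (anticipating the Monte Carlo approximation of Section 3); the function and argument names are assumptions for illustration:

```python
import numpy as np

def penalized_objective(x, f, constraint_groups, scenarios, N, theta):
    # f(x) + N * sum_j E[ theta(g_j1(x, w), ..., g_jk_j(x, w)) ],
    # with the expectation approximated by the scenario average
    penalty = 0.0
    for g_j in constraint_groups:  # g_j(x, w) returns the vector (g_j1(x, w), ..., g_jk_j(x, w))
        penalty += np.mean([theta(g_j(x, w)) for w in scenarios])
    return f(x) + N * penalty
```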
2 Asymptotic equivalence
Assumptions (brief)

Under the following assumptions, the asymptotic equivalence of the problems can be shown:

- continuity of the constraints and the probabilistic functions,
- compactness of the fixed set of feasible solutions,
- existence of integrable majorants,
- existence of a permanently feasible solution.

No assumptions of linearity or convexity!
Assumptions

Assume that X ≠ ∅ is compact, f(x) is a continuous function, and

(i) g_{ji}(·, ω), i = 1, ..., k_j, j = 1, ..., m, are almost surely continuous;
(ii) there exists a nonnegative random variable C(ω) with E[C^{1+κ}(ω)] < ∞ for some κ > 0 such that |p_j(x, ω)| ≤ C(ω), j = 1, ..., m, for all x ∈ X;
(iii) E[p_j(x', ω)] = 0, j = 1, ..., m, for some x' ∈ X;
(iv) P(g_{ji}(x, ω) = 0) = 0, i = 1, ..., k_j, j = 1, ..., m, for all x ∈ X.
Denote η = κ/(2(1 + κ)), and for arbitrary N > 0 and ǫ ∈ (0, 1)^m put

\[
\varepsilon_j(x) = P\big(p_j(x, \omega) > 0\big), \quad j = 1, \dots, m,
\]
\[
\alpha_N(x) = N \cdot \sum_{j=1}^{m} E[p_j(x, \omega)],
\]
\[
\beta_\epsilon(x) = \varepsilon_{\max}^{-\eta} \sum_{j=1}^{m} E[p_j(x, \omega)],
\]

where ε_max denotes the maximal component of the vector ǫ = (ε_1, ..., ε_m) and [1/N^{1/η}] = (1/N^{1/η}, ..., 1/N^{1/η}) is the vector of length m.

THEN for any prescribed ǫ ∈ (0, 1)^m there always exists N large enough so that minimization of the penalty objective generates optimal solutions x_N which also satisfy the chance constraints with the given ǫ.
Bounds on optimal values

Moreover, bounds on the optimal value ψ_ǫ based on the optimal value ϕ_N, and vice versa, can be constructed:

\[
\varphi_{1/\varepsilon_{\max}^{\eta}(x_N)} - \beta_{\epsilon(x_N)}\big(x_{\epsilon(x_N)}\big) \le \psi_{\epsilon(x_N)} \le \varphi_N - \alpha_N(x_N),
\]
\[
\psi_{\epsilon(x_N)} + \alpha_N(x_N) \le \varphi_N \le \psi_{[1/N^{1/\eta}]} + \beta_{[1/N^{1/\eta}]}\big(x_{[1/N^{1/\eta}]}\big),
\]

with

\[
\lim_{N \to +\infty} \alpha_N(x_N) = \lim_{N \to +\infty} \varepsilon_j(x_N) = \lim_{\varepsilon_{\max} \to 0^+} \beta_\epsilon(x_\epsilon) = 0
\]

for any sequences of optimal solutions x_N and x_ǫ.
3 Sample approximations using Monte-Carlo techniques
Let ω^1, ..., ω^S be an independent Monte Carlo sample of the random vector ω. Then, the sample version of the chance constraint is defined to be

\[
\hat q_j^S(x) := S^{-1} \sum_{s=1}^{S} \mathbb{I}_{(0,\infty)}\big(p_j(x, \omega^s)\big) \le \gamma_j, \tag{2}
\]

with γ_j ∈ (0, 1).
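A small sketch of how q̂_j^S(x) could be computed for one constraint group; g_j and the scenario list are placeholders, not objects defined on the slides:

```python
import numpy as np

def empirical_violation(x, g_j, scenarios):
    # hat q_j^S(x): fraction of scenarios in which at least one constraint of group j is violated;
    # I_(0,inf)(p_j(x, w)) = 1 exactly when some g_ji(x, w) > 0
    violated = [np.any(np.asarray(g_j(x, w)) > 0.0) for w in scenarios]
    return float(np.mean(violated))

# x satisfies the sample version of the j-th chance constraint if
# empirical_violation(x, g_j, scenarios) <= gamma_j
```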
Sample approximated chance constrained problem

Finally, the sample version of the multiple jointly chance constrained problem is defined as

\[
\begin{aligned}
\hat\psi^S_\gamma = \min_{x \in X}\ & f(x)\\
\text{s.t.}\quad & \hat q_1^S(x) \le \gamma_1,\\
& \quad \vdots\\
& \hat q_m^S(x) \le \gamma_m,
\end{aligned} \tag{3}
\]

where the levels γ_j are allowed to be different from the original levels ε_j.

The sample approximation of the chance constrained problem can be reformulated as a large mixed-integer nonlinear program.
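The slides do not spell this reformulation out; a standard big-M sketch, assuming constants M_{ji} that bound g_{ji}(·, ω^s) from above on X, introduces one binary indicator z_{js} per constraint group and scenario:

\[
\begin{aligned}
\min_{x \in X,\; z_{js} \in \{0,1\}} \quad & f(x)\\
\text{s.t.} \quad & g_{ji}(x, \omega^s) \le M_{ji}\, z_{js}, && i = 1,\dots,k_j,\ j = 1,\dots,m,\ s = 1,\dots,S,\\
& \frac{1}{S}\sum_{s=1}^{S} z_{js} \le \gamma_j, && j = 1,\dots,m.
\end{aligned}
\]

Here z_{js} = 1 marks the scenarios in which the j-th constraint group is allowed to be violated, so the last constraint reproduces q̂_j^S(x) ≤ γ_j.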
Rates of convergence, sample sizes

We turn our attention to the case when the set of feasible solutions is finite, i.e. |X| < ∞, which appears in bounded integer programs, or infinite and bounded. Using a slight modification of the approach of S. Ahmed, J. Luedtke, A. Shapiro, et al. (2008, 2009), we obtain ...
Lower bound for the chance constrained problem

We will assume that γ_j > ε_j holds for all j. Then we can choose the sample size S so that a feasible solution x of the chance constrained problem is also feasible for the sample approximation with probability at least 1 − δ, i.e.

\[
S \ge \frac{2}{\min_{j \in \{1, \dots, m\}} (\gamma_j - \varepsilon_j)^2 / \varepsilon_j}\, \ln\frac{m}{\delta},
\]

which corresponds to the result of S. Ahmed et al. (2008) for m = 1. (The estimate is based on the Chernoff and Bonferroni inequalities.)
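A quick way to evaluate this sample-size bound numerically; a sketch assuming γ_j > ε_j for all j, with an illustrative function name:

```python
import math

def sample_size_lower_bound(eps, gamma, delta):
    # S >= (2 / min_j (gamma_j - eps_j)^2 / eps_j) * ln(m / delta), assuming gamma_j > eps_j
    m = len(eps)
    worst = min((g - e) ** 2 / e for e, g in zip(eps, gamma))
    return math.ceil(2.0 * math.log(m / delta) / worst)

# e.g. sample_size_lower_bound(eps=[0.05, 0.10], gamma=[0.10, 0.15], delta=0.01)
```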
Feasibility - finite |X|

We will assume that γ_j < ε_j holds for all j. Then it is possible to estimate the sample size S such that the feasible solutions of the sample approximated problem are feasible for the original problem, i.e. x ∈ X_ǫ, with high probability 1 − δ:

\[
S \ge \frac{1}{2 \min_{j \in \{1, \dots, m\}} (\gamma_j - \varepsilon_j)^2}\, \ln\frac{m\, |X \setminus X_\epsilon|}{\delta}. \tag{4}
\]

If we set m = 1, we get the same inequality as J. Luedtke et al. (2008). (The estimate is based on the Hoeffding and Bonferroni inequalities.)
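The analogous calculation for the feasibility bound (4); here n_infeasible stands for |X \ X_ǫ| (or any upper bound on it, such as |X|), and the names are again illustrative:

```python
import math

def sample_size_feasibility(eps, gamma, delta, n_infeasible):
    # S >= ln(m * |X \ X_eps| / delta) / (2 * min_j (gamma_j - eps_j)^2), assuming gamma_j < eps_j
    m = len(eps)
    worst = min((g - e) ** 2 for e, g in zip(eps, gamma))
    return math.ceil(math.log(m * n_infeasible / delta) / (2.0 * worst))
```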