  1. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS José Mario Martínez www.ime.unicamp.br/~martinez UNICAMP, Brazil August 2, 2011

  2. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Collaborators: Roberto Andreani (Applied Math - UNICAMP), Leandro Martínez (Chemistry - UNICAMP), Flávio Yano (Itaú Bank), Mário Salvatierra (Fed. Univ. Amazonas), Giovane César (Applied Math - UNICAMP), Roberto Marcondes (Computer Science - USP), Paulo J. Silva (Computer Science - USP), Cibele Dunder (Itaú Bank), Luís Felipe Bueno (Applied Math - UNICAMP), Lucas Garcia Pedroso (Applied Math - UNICAMP), Maria Aparecida Diniz (Applied Math - UNICAMP), Ernesto Birgin (Computer Science - USP).

  3. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Outline: introduction to Order-Value Optimization problems; review of algorithms and convergence results; applications.

  4. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Order-Value Optimization (OVO) Problems. Let $f_i : \Omega \subset \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, m$, and let $J \subset \{1, \dots, m\}$. For all $x \in \Omega$, define $i_1(x), i_2(x), \dots, i_m(x)$ by $f_{i_1(x)}(x) \le f_{i_2(x)}(x) \le \dots \le f_{i_m(x)}(x)$. The OVO problem is $$\min_{x \in \Omega} \; \sum_{j \in J} f_{i_j(x)}(x).$$
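Evaluating the OVO objective only requires sorting the m function values at x and summing those whose ranks lie in J. The following minimal Python sketch does exactly that; the function names and toy quadratics are illustrative, not from the talk.

```python
import numpy as np

def ovo_objective(x, fs, J):
    """Evaluate the OVO objective: sort the values f_i(x) in increasing order
    and sum the entries whose (1-based) rank belongs to J."""
    values = np.sort([f(x) for f in fs])      # f_{i_1(x)}(x) <= ... <= f_{i_m(x)}(x)
    return sum(values[j - 1] for j in J)      # sum over the chosen ranks

# Example: three quadratics; J = {3} recovers the minimax value at x.
fs = [lambda x: (x - 1.0) ** 2, lambda x: (x + 2.0) ** 2, lambda x: x ** 2]
print(ovo_objective(0.5, fs, J={3}))          # largest of the three values at x = 0.5
```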

  5. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Examples:
  $J = \{m\}$: $\min_{x \in \Omega} \max\{f_1(x), \dots, f_m(x)\}$ (Minimax)
  $J = \{1\}$: $\min_{x \in \Omega} \min\{f_1(x), \dots, f_m(x)\}$ (Minimin)
  $J = \{p\}$: $\min_{x \in \Omega} f_{i_p(x)}(x)$ (VaR-like)
  $J = \{1, \dots, p\}$: $\min_{x \in \Omega} \sum_{j=1}^{p} f_{i_j(x)}(x)$ (LOVO)
  $J = \{p+1, \dots, m\}$: $\min_{x \in \Omega} \sum_{j=p+1}^{m} f_{i_j(x)}(x)$ (CVaR-like)
  $J = \{q+1, \dots, p\}$: $\min_{x \in \Omega} \sum_{j=q+1}^{p} f_{i_j(x)}(x)$

  6. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Non-smoothness and Many local minimizers

  7. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $J = \{m\}$ (Minimax)

  8. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $J = \{1\}$ (Minimin)

  9. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $J = \{p\}$ (VaR-like). In this example, p = 3.

  10. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $J = \{1, \dots, p\}$ (LOVO): $$\min_{x \in \Omega} \frac{1}{p} \sum_{j=1}^{p} f_{i_j(x)}(x)$$ In this example, p = 2.

  11. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $J = \{p+1, \dots, m\}$ (CVaR-like): $$\min_{x \in \Omega} \frac{1}{m-p} \sum_{j=p+1}^{m} f_{i_j(x)}(x)$$ In this example, p = 3.

  12. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $J = \{q+1, \dots, p\}$: $$\min_{x \in \Omega} \frac{1}{p-q} \sum_{j=q+1}^{p} f_{i_j(x)}(x)$$ In this example, q = 1, p = 4.

  13. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Reformulation of CVaR-like. Fact: if $b_1 \le b_2 \le \dots \le b_m$ and $p \le m - 1$, then $$b_{p+1} + \dots + b_m = \min_{\xi \in \mathbb{R}} \; (m-p)\,\xi + \sum_{i=1}^{m} \max\{0, b_i - \xi\},$$ and the set of minimizers is $\{\xi \in [b_p, b_{p+1}]\}$. Consequently, $$\min_{x \in \Omega} \; f_{i_{p+1}(x)}(x) + \dots + f_{i_m(x)}(x)$$ is equivalent to $$\min_{x \in \Omega,\, \xi \in \mathbb{R}} \; (m-p)\,\xi + \sum_{i=1}^{m} \max\{0, f_i(x) - \xi\}.$$
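This fact is easy to check numerically. The sketch below compares the tail sum with a brute-force minimization over ξ on random data; everything in it (data, grid, names) is illustrative only.

```python
import numpy as np

# Numerical check of the "fact": for sorted b_1 <= ... <= b_m, the tail sum
# b_{p+1} + ... + b_m equals the minimum over xi of (m - p)*xi + sum_i max(0, b_i - xi),
# attained for xi in [b_p, b_{p+1}].
rng = np.random.default_rng(0)
b = np.sort(rng.normal(size=10))
m, p = len(b), 6

tail_sum = b[p:].sum()                                   # b_{p+1} + ... + b_m (0-based slice)
phi = lambda xi: (m - p) * xi + np.maximum(0.0, b - xi).sum()
xi_grid = np.linspace(b[0] - 1.0, b[-1] + 1.0, 20001)
print(tail_sum, min(phi(xi) for xi in xi_grid))          # the two values agree up to grid error
```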

  14. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Reformulation of VaR-like. From the same "fact", $$\min_{x \in \Omega} f_{i_p(x)}(x)$$ is equivalent to: minimize $\xi$ with respect to $x \in \Omega$ and $\xi \in \mathbb{R}$, subject to the constraint that $\xi$ minimizes $(m-p)\,\xi + \sum_{i=1}^{m} \max\{0, f_i(x) - \xi\}$ with respect to $\xi$.

  15. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Consequences for the reformulations of CVaR-like and VaR-like. CVaR-like becomes a Nonlinear Programming problem: convex if the $f_i$ are convex, and a Linear Programming problem (with many inequality constraints) if the $f_i$ are linear. VaR-like becomes a Bilevel Programming problem with many complementarity constraints, which come from the KKT conditions of the lower-level problem.

  16. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Primal method for minimizing CVaR-like. Consider $$\min_{x \in \Omega,\, \xi \in \mathbb{R}} \; (m-p)\,\xi + \sum_{i=1}^{m} \max\{0, f_i(x) - \xi\}.$$ Use smoothing to deal with the max, and an ordinary NLP method to minimize over $\Omega \times \mathbb{R}$.
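The slides do not specify which smoothing is used; a common differentiable approximation of max{0, t} is shown below as an assumption, together with the resulting smoothed objective.

```python
import numpy as np

def smooth_max0(t, eps=1e-2):
    """Smooth approximation of max(0, t): differentiable for eps > 0 and exact as eps -> 0."""
    return 0.5 * (t + np.sqrt(t * t + eps * eps))

def smoothed_cvar_objective(x, xi, fs, p, eps=1e-2):
    """Smoothed version of (m - p)*xi + sum_i max(0, f_i(x) - xi), ready for an ordinary NLP solver."""
    m = len(fs)
    return (m - p) * xi + sum(smooth_max0(f(x) - xi, eps) for f in fs)
```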

  17. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Primal method for minimizing VaR-like. Given the current point $x_k \in \Omega$ ($\Omega$ convex), take a sufficient descent direction $d_k$ for all $j$ such that $$f_{i_p(x_k)}(x_k) - \varepsilon \le f_j(x_k) \le f_{i_p(x_k)}(x_k) + \varepsilon.$$ Line search along $d_k$: $x_{k+1} = x_k + \alpha_k d_k$. Global convergence to $\varepsilon$-stationary points; local superlinear/quadratic convergence; convex subproblems (linear or quadratic constraints).

  18. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Risk Minimization. There are $m$ scenarios; $f_i(x)$ = predicted loss caused by decision $x \in \Omega$ under scenario $i$; $f_{i_p(x)}(x)$ = VaR associated with $x$; $\frac{1}{m-p} \sum_{j=p+1}^{m} f_{i_j(x)}(x)$ = CVaR associated with $x$. Thus, minimizing $f_{i_p(x)}(x)$ over $x \in \Omega$ is minimizing VaR, and minimizing $\sum_{j=p+1}^{m} f_{i_j(x)}(x)$ over $x \in \Omega$ is minimizing CVaR.
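For scenario data, both quantities can be read off the sorted losses. The helper below is a minimal, hypothetical illustration (names and data are not from the talk); p is the 1-based rank used throughout the slides.

```python
import numpy as np

def var_cvar(losses, p):
    """Scenario-based VaR and CVaR as defined on the slide: with the m losses sorted in
    increasing order, VaR is the p-th smallest loss and CVaR is the average of the
    m - p losses above it (1 <= p < m)."""
    sorted_losses = np.sort(losses)
    var = sorted_losses[p - 1]
    cvar = sorted_losses[p:].mean()
    return var, cvar

# Toy example with m = 5 scenario losses and p = 4 (not from the slides):
print(var_cvar(np.array([1.0, -0.5, 2.0, 0.3, 5.0]), p=4))   # VaR = 2.0, CVaR = 5.0
```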

  19. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Low Order-Value Optimization (LOVO). Define, as always, $i_1(x), \dots, i_m(x)$ by $f_{i_1(x)}(x) \le \dots \le f_{i_m(x)}(x)$, and let $p \le m$. Then the LOVO problem is $$\min_{x \in \Omega} \sum_{j=1}^{p} f_{i_j(x)}(x).$$

  20. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Fact: $$\sum_{j=1}^{p} f_{i_j(x)}(y) \le \sum_{j=1}^{p} f_{i_j(x)}(x) \;\Rightarrow\; \sum_{j=1}^{p} f_{i_j(y)}(y) \le \sum_{j=1}^{p} f_{i_j(x)}(x).$$ Therefore, in order to decrease the LOVO function we may "fix" $(i_1(x), \dots, i_p(x))$ and "minimize" $\sum_{j=1}^{p} f_{i_j(x)}(y)$ with respect to $y$.
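A quick numerical illustration of this fact, with arbitrary smooth functions and points that are not from the talk: whenever the hypothesis holds at a pair (x, y), the LOVO value at y cannot exceed the LOVO value at x, because re-sorting at y can only decrease the frozen sum.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = [lambda x, a=a: (x - a) ** 2 for a in rng.normal(size=6)]   # six smooth test functions
p = 3

def lovo(x):                        # LOVO value: sum of the p smallest f_j(x)
    return np.sort([f(x) for f in fs])[:p].sum()

def frozen_sum(x, y):               # indices frozen at x, functions evaluated at y
    idx = np.argsort([f(x) for f in fs])[:p]
    return sum(fs[i](y) for i in idx)

x, y = 0.7, 0.4
if frozen_sum(x, y) <= lovo(x):     # hypothesis of the fact
    assert lovo(y) <= lovo(x)       # conclusion: lovo(y) <= frozen_sum(x, y) <= lovo(x)
```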

  21. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS $$g(y) = \sum_{j=1}^{p} f_{i_j(x_k)}(y), \qquad h(y) = \sum_{j=1}^{p} f_{i_j(x_{k+1})}(y)$$

  22. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Methods for unconstrained LOVO problems: Line Search. At iteration $k$, find a sufficient descent direction $d$ for $\sum_{j=1}^{p} f_{i_j(x_k)}(x)$ and take $\alpha_k$ such that $$\sum_{j=1}^{p} f_{i_j(x_k + \alpha_k d)}(x_k + \alpha_k d) < \sum_{j=1}^{p} f_{i_j(x_k)}(x_k)$$ with sufficient decrease. Global convergence to points $x_*$ such that $$\nabla \sum_{j=1}^{p} f_{i_j(x_*)}(x_*) = 0.$$
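As a concrete, hypothetical realization of this scheme, the sketch below freezes the indices of the p smallest f_j at the current iterate, uses minus the gradient of the frozen sum as the descent direction, and backtracks until a simple sufficient-decrease condition holds. Step sizes, tolerances, and the Armijo constant are illustrative choices, not the talk's.

```python
import numpy as np

def lovo_line_search(x0, fs, grads, p, max_iter=200, tol=1e-8):
    """Sketch of a LOVO line-search method: freeze the indices of the p smallest f_j(x_k),
    step along minus the gradient of their sum, and backtrack on the LOVO value."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        values = np.array([f(x) for f in fs])
        idx = np.argsort(values)[:p]                      # i_1(x_k), ..., i_p(x_k)
        fval = values[idx].sum()
        g = sum(grads[i](x) for i in idx)                 # gradient of the frozen sum
        if np.linalg.norm(g) < tol:
            break
        d, alpha = -g, 1.0
        while alpha > 1e-12:                              # Armijo-style backtracking
            trial = x + alpha * d
            trial_lovo = np.sort([f(trial) for f in fs])[:p].sum()
            if trial_lovo <= fval - 1e-4 * alpha * np.dot(g, g):
                x = trial
                break
            alpha *= 0.5
    return x
```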

  23. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Trust-region methods for LOVO. Typical iteration: given $x_k$, the trust region defined by $\Delta$, and a quadratic approximation of $\sum_{j=1}^{p} f_{i_j(x_k)}(x)$, minimize the quadratic approximation on the trust region. If the reduction of $\sum_{j=1}^{p} f_{i_j(x_k)}(x)$ is sufficiently large with respect to the reduction of the quadratic approximation (Ared ≥ 0.1 Pred), accept the solution of the trust-region subproblem as $x_{k+1}$. Otherwise, reduce $\Delta$.

  24. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Convergence of trust-region methods for LOVO. At every limit point $x_*$, $$\nabla \sum_{j=1}^{p} f_{i_j(x_*)}(x_*) = 0.$$ Using the true Hessian to define the quadratic approximation, $$\nabla^2 \sum_{j=1}^{p} f_{i_j(x_*)}(x_*) \succeq 0.$$ Local convergence: quadratic.

  25. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Fitting with LOVO. Observations $(t_1, y_1), \dots, (t_m, y_m)$; model $y_j \approx M(x, t_j)$; define $f_j(x) = [y_j - M(x, t_j)]^2$.
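In this setting the LOVO function sums only the p smallest squared residuals, so up to m − p outliers are ignored automatically. Below is a minimal, hypothetical fitting sketch that plugs this objective into a generic derivative-free optimizer; the linear model, the contaminated data, and the use of scipy.optimize.minimize are illustrative assumptions, not the talk's setup.

```python
import numpy as np
from scipy.optimize import minimize

def lovo_fit_objective(x, t, y, model, p):
    """Sum of the p smallest squared residuals f_j(x) = [y_j - M(x, t_j)]^2,
    so the m - p worst-fitting observations (potential outliers) are ignored."""
    residuals_sq = (y - model(x, t)) ** 2
    return np.sort(residuals_sq)[:p].sum()

# Illustrative data: a straight line with two gross outliers (not from the slides).
t = np.linspace(0.0, 1.0, 12)
y = 3.0 * t + 1.0
y[[3, 8]] += 10.0                                     # contaminate two observations
model = lambda x, t: x[0] * t + x[1]                  # M(x, t) = x_0 * t + x_1

p = len(t) - 2                                        # trust all but two observations
result = minimize(lovo_fit_objective, x0=np.zeros(2), args=(t, y, model, p),
                  method="Nelder-Mead")
print(result.x)                                       # close to the true parameters (3, 1)
```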

  26. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Constrained LOVO problems: $$\text{Minimize } \sum_{j=1}^{p} f_{i_j(x)}(x) \quad \text{subject to} \quad h(x) = 0, \; g(x) \le 0.$$ Augmented Lagrangian (PHR-like) (code Algencan at www.ime.usp.br/~egbirgin/tango): approximately minimize $$\sum_{j=1}^{p} f_{i_j(x)}(x) + \frac{\rho}{2} \left[ \left\| h(x) + \frac{\lambda}{\rho} \right\|^2 + \left\| \left( g(x) + \frac{\mu}{\rho} \right)_+ \right\|^2 \right],$$ then update $\lambda$, $\mu \ge 0$, and $\rho$. (Here $a_+ = \max\{0, a\}$.)
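A bare-bones sketch of a PHR-style outer loop for a single equality constraint is shown below; it is not Algencan, and the inner solver, multiplier update, and penalty update are standard textbook choices stated as assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian_lovo(lovo_obj, h, x0, rho=10.0, lam=0.0, outer_iters=20):
    """PHR-style outer loop for: minimize lovo_obj(x) s.t. h(x) = 0 (one equality
    constraint, no inequalities, for brevity). Subproblems are solved approximately
    with a generic derivative-free method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        phi = lambda x: lovo_obj(x) + 0.5 * rho * (h(x) + lam / rho) ** 2
        x = minimize(phi, x, method="Nelder-Mead").x      # approximate subproblem solve
        lam = lam + rho * h(x)                            # first-order multiplier update
        if abs(h(x)) > 1e-6:
            rho *= 10.0                                   # increase penalty if still infeasible
    return x, lam
```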

  27. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Convergence of Algencan-LOVO: global minimization of the subproblems implies global minimization of the original problem; limit points are either feasible or stationary points of the infeasibility measure; feasible limit points that satisfy the CPLD constraint qualification are "KKT"; boundedness of the penalty parameter.

  28. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Model fitting with Algencan-LOVO: find the parameters of a boundary value problem fitting a set of data that contains outliers.

  29-31. ORDER-VALUE OPTIMIZATION AND NEW APPLICATIONS Fitting Nash-Equilibrium Models. Nash-equilibrium model: given the parameters $x \in \Omega$, the players $1, 2, \dots, m$ simultaneously take decisions $y_1, \dots, y_m$; player $j$ takes his/her decision minimizing $f_j(x, y_1, \dots, y_{j-1}, z, y_{j+1}, \dots, y_m)$ with respect to $z$. Inverse Nash equilibrium: $\bar{y}_1, \dots, \bar{y}_m$ are known; discover the parameters $x$. LOVO inverse Nash equilibrium: $\bar{y}_1, \dots, \bar{y}_m$ are known, but only 90% of these observations are reliable.
