

  1. UNIVERSITÀ DEGLI STUDI DI TRIESTE Extending Nearly-Linear Models Chiara Corsato, Renato Pelessoni and Paolo Vicig University of Trieste, Italy ISIPTA 2019 Gent July 6, 2019

  2. Outline Motivations Nearly-Linear Models ● Definitions and basic properties ● Various types of natural extensions Results postponed to the Poster Session

  3. Motivations ● NL Models include several Neighbourhood Models. ● NL Models may elicit various beliefs, even conflicting ones.

  4–5. Nearly-Linear Models NL Models are simple functions of a given probability P0. Definition (Corsato, Pelessoni, Vicig, 2019): µ : A(P) → ℝ is a Nearly-Linear (NL) imprecise probability if ● µ(∅) = 0, µ(Ω) = 1; ● given P0 on A(P), a ∈ ℝ, b > 0, for all A ∈ A(P) ∖ {∅, Ω}, µ(A) = min{max{bP0(A) + a, 0}, 1} = max{min{bP0(A) + a, 1}, 0}. We denote µ by NL(a, b). An NL µ is a linear affine transformation of P0, with barriers.
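The definition above can be sketched numerically. A minimal illustration, assuming a small finite space with an invented P0 (the atom names and weights are not from the talk):

```python
def p0(event, atoms):
    """P0(A): sum of the probabilities of the atoms in A."""
    return sum(atoms[w] for w in event)

def nl(event, atoms, a, b):
    """mu(A) = min{max{b*P0(A) + a, 0}, 1}, with mu(empty) = 0, mu(Omega) = 1."""
    A = set(event)
    if not A:
        return 0.0
    if A == set(atoms):
        return 1.0
    return min(max(b * p0(A, atoms) + a, 0.0), 1.0)

# Invented example: three atoms, parameters NL(-0.1, 1.0)
atoms = {"w1": 0.2, "w2": 0.3, "w3": 0.5}
mu = nl({"w1", "w2"}, atoms, a=-0.1, b=1.0)   # min{max{0.5 - 0.1, 0}, 1} = 0.4
```

Note that min{max{x, 0}, 1} = max{min{x, 1}, 0} for every real x, which is why the definition can state the clipping either way.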

  6–8. Nearly-Linear Models - 2 ● The family of NL imprecise probabilities is self-conjugate: if µ is NL(a, b), then µᶜ is NL(c, b), with c = 1 − (a + b). NL(a, b): lower probability P; NL(c, b): upper probability P̄. Definition: a Nearly-Linear Model is a couple (P, P̄), where P : A(P) → ℝ is an NL lower probability and P̄ is its conjugate. ● b + 2a ≤ 1 ⟹ the NL(a, b) lower probability P is 2-coherent (minimal consistency property).
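Self-conjugacy can be checked exhaustively on a small space. A sketch with an invented P0 and parameters satisfying b + 2a ≤ 1; it verifies that 1 − µ(Aᶜ) coincides with the NL(c, b) evaluation on every event:

```python
from itertools import combinations

def nl(event, atoms, a, b):
    """NL(a, b) evaluation of an event, as in the definition above."""
    A = frozenset(event)
    if not A:
        return 0.0
    if A == frozenset(atoms):
        return 1.0
    p0 = sum(atoms[w] for w in A)
    return min(max(b * p0 + a, 0.0), 1.0)

atoms = {"w1": 0.2, "w2": 0.3, "w3": 0.5}   # invented P0
a, b = -0.1, 1.0                            # b + 2a = 0.8 <= 1
c = 1.0 - (a + b)
omega = frozenset(atoms)

# mu^c(A) = 1 - mu(A^c) should coincide with the NL(c, b) evaluation of A
for r in range(len(atoms) + 1):
    for s in combinations(omega, r):
        A = frozenset(s)
        assert abs((1.0 - nl(omega - A, atoms, a, b)) - nl(A, atoms, c, b)) < 1e-12
```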

  9–11. Nearly-Linear Models - 3 We have classified 2-coherent Nearly-Linear Models into 3 subfamilies: 1. Vertical Barrier Model (VBM), 2. Horizontal Barrier Model (HBM), 3. Restricted Range Model (RRM), and studied their consistency properties (Corsato, Pelessoni, Vicig, 2019). Aim: find manageable formulae for natural extensions of NL Models.

  12–14. Vertical Barrier Model (VBM) Parameters: a ≤ 0, 0 ≤ a + b ≤ 1, c = 1 − (a + b) (≥ 0). P(A) = max{bP0(A) + a, 0} for all A ∈ A(P) ∖ {Ω}, P̄(A) = min{bP0(A) + c, 1} for all A ∈ A(P) ∖ {∅}, with P(Ω) = 1, P̄(∅) = 0. P is coherent and 2-monotone (P̄ is coherent and 2-alternating).
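The two VBM formulas can be sketched on a finite space (P0 and the parameters below are illustrative); conjugacy means the upper of an event equals one minus the lower of its complement, which can be checked directly:

```python
def vbm(event, atoms, a, b):
    """Return (P(A), Pbar(A)) for a VBM with a <= 0 and 0 <= a + b <= 1."""
    A = frozenset(event)
    c = 1.0 - (a + b)
    if not A:
        return 0.0, 0.0
    if A == frozenset(atoms):
        return 1.0, 1.0
    p0 = sum(atoms[w] for w in A)
    return max(b * p0 + a, 0.0), min(b * p0 + c, 1.0)

atoms = {"w1": 0.2, "w2": 0.3, "w3": 0.5}            # invented P0
lower, upper = vbm({"w3"}, atoms, a=-0.1, b=1.0)     # (0.4, 0.6)
```

Here upper({w3}) = 1 − lower({w1, w2}) = 0.6, as conjugacy requires.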

  15–17. VBM and natural extensions - 1 Proposition (VBM as a natural extension): the lower probability in the VBM expression for P, Q(A) = bP0(A) + a for all A ∈ A(P), ● avoids sure loss; ● is convex iff b = 1. Its natural extension on A(P) is precisely the lower probability P of the VBM itself: a VBM is a correction of Q via natural extension.

  18–22. VBM and natural extensions - 2 The lower probability P : A(P) → ℝ of a VBM is coherent and 2-monotone. Proposition (Natural extension of a VBM): let E_P0 be the natural extension of P0 on L(P). ● If a < 0, for any X ∈ L(P) define x̃ = sup{x ∈ ℝ : P0(X > x) ≥ −a/b}. Then E(X) = (a + b)x̃ + (1 − (a + b)) inf X − b E_P0((x̃ − X)⁺). ● If a = 0, E(X) = (1 − b) inf X + b E_P0(X). Remark: ● a < 0 and a + b = 1 → PMM (Walley, 1991); ● a = 0 → ε-contamination Model (Walley, 1991).
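For a finite possibility space the proposition's formula can be computed directly. A sketch, assuming P0 is an ordinary probability mass function on the values of a gamble X (so E_P0 reduces to the usual expectation); the support and weights are invented:

```python
def natural_extension(values, probs, a, b):
    """E(X) for a VBM on a finite support; a <= 0 < b, 0 <= a + b <= 1."""
    if a == 0:
        ep0 = sum(v * p for v, p in zip(values, probs))
        return (1 - b) * min(values) + b * ep0
    t = -a / b
    # x_tilde = sup{x : P0(X > x) >= t}; on a finite support it is a value of X
    vp = sorted(zip(values, probs))
    tail, x_tilde = 1.0, vp[-1][0]
    for v, p in vp:
        tail -= p                    # tail is now P0(X > v)
        if tail < t:
            x_tilde = v
            break
    # E_P0((x_tilde - X)^+) under the ordinary expectation
    penalty = sum(max(x_tilde - v, 0.0) * p for v, p in zip(values, probs))
    return (a + b) * x_tilde + (1 - (a + b)) * min(values) - b * penalty

# Invented gamble: X takes values 0, 1, 2 with P0-probabilities 0.2, 0.3, 0.5
E = natural_extension([0.0, 1.0, 2.0], [0.2, 0.3, 0.5], a=-0.1, b=1.0)
# here x_tilde = 2, penalty = 0.7, E = 1.1 (up to floating-point rounding)
```

Setting a = 0 reproduces the ε-contamination form (1 − b) inf X + b E_P0(X) from the remark above.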

  23–24. Horizontal Barrier Model (HBM) Parameters: b + 2a ≤ 1, a + b > 1, c = 1 − (a + b) (< 0). P̄(A) = max{min{bP0(A) + c, 1}, 0}. An HBM is in general only 2-coherent; it may avoid sure loss or even be coherent.

  25. A selection of results for HBMs ● If P is finite, P̄ in an HBM avoids sure loss iff ∑_{ω∈P} P̄(ω) ≥ 1. Then its natural extension on A(P) is Ē(A) = min{∑_{ω⊆A} P̄(ω), 1}. ● Ē is also the natural extension of the probability interval [0, P̄(ω)], ω ∈ P ⟹ the conjugate lower natural extension E is 2-monotone ⟹ an HBM and a lower-vacuous probability interval avoiding sure loss are equivalent (Troffaes, de Cooman, 2014). ● If P is arbitrary, P̄ avoids sure loss iff, for any finite partition P′ coarser than P, ∑_{ω′∈P′} P̄(ω′) ≥ 1.
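The finite-partition result can be sketched as follows. The parameters (chosen so that b + 2a ≤ 1 and a + b > 1) and P0 are invented, and the event-wise sum in the natural extension runs over the atoms contained in A:

```python
def hbm_upper(p0_of_A, a, b):
    """Pbar(A) = max{min{b*P0(A) + c, 1}, 0}, with c = 1 - (a + b) < 0."""
    c = 1.0 - (a + b)
    return max(min(b * p0_of_A + c, 1.0), 0.0)

atoms = {"w1": 0.2, "w2": 0.3, "w3": 0.5}     # invented P0
a, b = -0.5, 1.6                              # b + 2a = 0.6 <= 1, a + b = 1.1 > 1
upper = {w: hbm_upper(p, a, b) for w, p in atoms.items()}

# Avoiding sure loss on a finite partition: atom upper probabilities sum to >= 1
avoids_sure_loss = sum(upper.values()) >= 1.0

def nat_ext(event):
    """Ebar(A) = min{sum of upper(w) over atoms w in A, 1} (needs ASL)."""
    return min(sum(upper[w] for w in event), 1.0)
```

With these numbers the atom upper probabilities are about 0.22, 0.38 and 0.7, summing to 1.3, so sure loss is avoided and, e.g., Ē({w1, w2}) = 0.6.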

  26. Further (Poster Session) results ● Natural extensions of coherent HBMs ● Natural extensions of RRMs avoiding sure loss (further relationships with probability intervals) ● Interpretation of a VBM natural extension as a risk measure

  27. Thank you... ...and see you at the Poster Session!
