Introduction to the theory of imprecise probability
Erik Quaeghebeur, TU Delft, the Netherlands
UTOPIAE Training School 2018, Durham, England
Why would you want your probability to be imprecise?
versus [slide: the two possible match outcomes, shown as images lost in extraction]
Uncertainty about outcome of… Agents (Gamblers): Wiske versus Yoko Tsuno
Assessment (gambles accepted), payoffs listed per outcome, writing B for "Belgium wins" (the slide's outcome images were lost) and ¬B for the other outcome:
Wiske: 1 if B, −5 if ¬B
Yoko Tsuno: −4 if B, +1 if ¬B
Natural extension (λ, µ ≥ 0): Rational agents
Wiske: λ + µ if B, −5λ + µ if ¬B
Yoko Tsuno: −4λ + µ if B, λ + µ if ¬B
COHERENCE
Assessment (gambles accepted)
Heroine pool: both gambles, (1 if B, −5 if ¬B) and (−4 if B, +1 if ¬B)
Natural extension (λ_W, λ_Y, µ ≥ 0): Irrational agents
Heroine pool: λ_W − 4λ_Y + µ if B, −5λ_W + λ_Y + µ if ¬B
SURE LOSS! (e.g. λ_W = λ_Y = 1 and µ = 0 yields −3 if B and −4 if ¬B: a loss whatever the outcome)
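The pool's sure loss can be checked numerically. A minimal sketch, assuming the payoff pairs from the slides, written as (payoff if Belgium wins, payoff otherwise):

```python
# Gambles as payoff vectors over Y = {Belgium wins, Belgium does not win}
g_wiske = (1.0, -5.0)   # accepted by Wiske
g_yoko = (-4.0, 1.0)    # accepted by Yoko Tsuno

# The addition axiom lets the pool combine accepted gambles;
# take lambda_W = lambda_Y = 1 and no nonnegative sweetener (mu = 0)
combo = tuple(w + y for w, y in zip(g_wiske, g_yoko))
print(combo)  # (-3.0, -4.0): negative in every outcome, a sure loss
```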
Assessment (gambles accepted)
Heroine pool: ∅
Natural extension (λ, µ ≥ 0): Rational agents
Heroine pool: µ if B, µ if ¬B (only the nonnegative gambles)
VACUOUS
Basic concepts
◮ Agent reasoning about an experiment with uncertain outcome
◮ Possibility space 𝒴 of outcomes
◮ Gambles are real-valued functions of the outcomes; ℒ = ℝ^𝒴 (ℒ is assumed to be a linear space)
◮ Assessment is a description of a set of acceptable gambles
◮ Natural extension of an assessment is the set of all acceptable gambles implied by the agent's rationality criteria (and other assumptions)
Coherence, the classical rationality criteria
Constructive
Positive scaling: If g is acceptable and λ > 0, then λg is acceptable.
Addition: If g and h are acceptable, then g + h is acceptable.
Background
Accepting gain: If g is nonnegative for all outcomes, then g is acceptable.
Avoiding sure loss: If g is negative for all outcomes, then g is not acceptable.
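For a finite possibility space these criteria give a crude computational test: a gamble is in the natural extension of an assessment if it pointwise dominates some nonnegative multiple of an accepted gamble. A rough sketch using Wiske's gamble from the example (1 if Belgium wins, −5 otherwise); the grid over the scaling coefficient and the tolerance are arbitrary choices, not part of the theory:

```python
# Y = {B, notB}; gambles are payoff pairs
g_wiske = (1.0, -5.0)  # Wiske's accepted gamble

def accepted(g, base=g_wiske, steps=2001, lam_max=2.0):
    """Is g >= lam * base pointwise for some lam >= 0 on a grid?

    If so, g = lam*base + (nonnegative gamble), so the scaling and
    accepting-gain axioms put g in the natural extension."""
    for i in range(steps):
        lam = lam_max * i / (steps - 1)
        if all(gy >= lam * by - 1e-9 for gy, by in zip(g, base)):
            return True
    return False

print(accepted((1.0, -5.0)))   # True: the assessment itself (lam = 1)
print(accepted((2.0, -10.0)))  # True: positive scaling
print(accepted((0.0, -1.0)))   # False: pure downside, no combination dominates it
```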
Where are the imprecise probabilities I came here for!!!
Previsions/Expectations are prices for gambles
◮ 'Prevision' and 'Expectation' are synonyms
◮ Prices are real values interpreted as constant gambles
◮ Lower prevision Q̲(g) is the supremum acceptable buying price of g:
Q̲(g) = sup{ν ∈ ℝ : g − ν is acceptable}
Upper prevision Q̄(g) is the infimum acceptable selling price of g
◮ Conjugacy of coherent lower and upper previsions: Q̄(g) = −Q̲(−g)
◮ If Q̲(g) = Q̄(g), then Q(g) = Q̲(g) = Q̄(g) is the prevision of g
Probabilities are previsions of indicator gambles
◮ Event B is a subset of 𝒴
◮ Indicator gamble 1_B(y) = 1 if y ∈ B, 0 if y ∉ B
◮ Lower probability Q̲(B) = Q̲(1_B); Upper probability Q̄(B) = Q̄(1_B)
◮ Conjugacy of coherent lower and upper probabilities (Bᶜ = 𝒴 ∖ B): Q̄(B) = 1 − Q̲(Bᶜ)
◮ If Q̲(B) = Q̄(B), then Q(B) = Q̲(B) = Q̄(B) is the probability of B
Natural extension (λ, µ ≥ 0): Agents
Wiske: λ + µ if B, −5λ + µ if ¬B
Yoko Tsuno: −4λ + µ if B, λ + µ if ¬B
Irrational pool: λ_W − 4λ_Y + µ if B, −5λ_W + λ_Y + µ if ¬B
Rational pool: µ if B, µ if ¬B
Wiske's lower probability that Belgium will win
With g_W her accepted gamble (1 if B, −5 if ¬B) and µ a nonnegative gamble,
Q̲(B) = sup{ν : 1_B − ν = λ·g_W + µ, λ ≥ 0, µ ≥ 0}
(componentwise: 1 − ν = λ + µ(B) and −ν = −5λ + µ(¬B))
= sup{5λ − µ(¬B) : 1 − 5λ + µ(¬B) = λ + µ(B), λ ≥ 0, µ ≥ 0}
= sup{5λ − µ(¬B) : λ = (1 − µ(B) + µ(¬B))/6, µ ≥ 0}
= sup{5/6 − (5/6)µ(B) − (1/6)µ(¬B) : µ ≥ 0} = 5/6
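The same supremum can be found numerically as the sup over λ of the pointwise infimum of 1_B − λ·g_W, with g_W Wiske's gamble (1 if Belgium wins, −5 otherwise). A sketch; the grid resolution is an arbitrary choice:

```python
g_w = (1.0, -5.0)    # Wiske's gamble: 1 if Belgium wins, -5 otherwise
ind_b = (1.0, 0.0)   # indicator gamble of "Belgium wins"

# lower probability of B = sup over lam >= 0 of inf_y (1_B(y) - lam * g_w(y))
lams = [k / 6000 for k in range(6001)]  # grid on [0, 1]; optimum is at lam = 1/6
lower_prob = max(min(i - lam * g for i, g in zip(ind_b, g_w)) for lam in lams)
print(lower_prob)  # close to 5/6
```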
Agents: lower and upper probabilities
                 P̲(B), P̄(B)    P̲(¬B), P̄(¬B)
Wiske:           5/6, 1         0, 1/6
Yoko Tsuno:      0, 1/5         4/5, 1
Irrational pool: +∞, −∞         +∞, −∞
Rational pool:   0, 1           0, 1
Assessments of lower previsions
◮ Assume a lower prevision Q̲ with values assessed for a set 𝒦 of gambles
◮ How can we apply the theory we have seen?
◮ Translate the lower prevision Q̲(g) for a gamble g ∈ 𝒦 into a set {g − Q̲(g) + ζ : ζ > 0} of acceptable gambles
◮ The gambles g − Q̲(g) are called marginal gambles
Expressions for assessments of lower previsions
Avoiding sure loss
sup_{y∈𝒴} ∑_{k=1}^{n} (g_k(y) − Q̲(g_k)) ≥ 0 for all n ≥ 0 and gambles g_k in the set 𝒦 of assessed gambles
Coherence
sup_{y∈𝒴} (∑_{k=1}^{n} (g_k(y) − Q̲(g_k)) − m·(g_0(y) − Q̲(g_0))) ≥ 0 for all n, m ≥ 0 and g_k ∈ 𝒦
Natural extension
E̲(g) = sup{ inf_{y∈𝒴} (g(y) − ∑_{k=1}^{n} λ_k·(g_k(y) − Q̲(g_k))) : n ≥ 0, g_k ∈ 𝒦, λ_k > 0 }
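For a finite possibility space and a single assessed gamble, the natural-extension expression reduces to a one-dimensional optimisation that can be approximated on a grid. A sketch, assuming Wiske's assessment from the earlier slides (lower probability 5/6 for "Belgium wins"); the grid bounds are arbitrary:

```python
# Y = {B, notB}; assessed gamble g1 = 1_B with lower prevision q1 = 5/6
g1, q1 = (1.0, 0.0), 5 / 6

def nat_ext(g, steps=6001, lam_max=10.0):
    """Approximate E(g) = sup_{lam >= 0} inf_y (g(y) - lam*(g1(y) - q1))."""
    lams = [lam_max * i / (steps - 1) for i in range(steps)]
    return max(min(gy - lam * (g1y - q1) for gy, g1y in zip(g, g1))
               for lam in lams)

print(nat_ext((1.0, 0.0)))  # close to 5/6: the assessment is recovered
print(nat_ext((0.0, 1.0)))  # close to 0: "not B" gets only the vacuous bound
```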
Does it really have to be so involved?
Lower previsions on linear spaces
If the lower prevision Q̲ is defined for all gambles in a linear space ℒ, the coherence criteria simplify:
Accepting sure gains: Q̲(g) ≥ inf g for all g ∈ ℒ
Super-linearity: Q̲(g + h) ≥ Q̲(g) + Q̲(h) for all g, h ∈ ℒ
Positive homogeneity: Q̲(λg) = λQ̲(g) for all g ∈ ℒ and λ > 0
Upper previsions on linear spaces
If the upper prevision Q̄ is defined for all gambles in a linear space ℒ, the coherence criteria simplify:
Accepting sure gains: Q̄(g) ≤ sup g for all g ∈ ℒ
Sub-linearity: Q̄(g + h) ≤ Q̄(g) + Q̄(h) for all g, h ∈ ℒ
Positive homogeneity: Q̄(λg) = λQ̄(g) for all g ∈ ℒ and λ > 0
Coherent lower & upper previsions
For a coherent lower prevision Q̲ and its conjugate upper prevision Q̄ many useful properties can be derived; we present a few:
Upper dominates lower: Q̄(g) ≥ Q̲(g) for all g ∈ ℒ
Constants: Q̲(µ) = µ for all µ ∈ ℝ
Constant additivity: Q̲(g + µ) = Q̲(g) + µ for all g ∈ ℒ and µ ∈ ℝ
Gamble dominance: if g ≥ h + µ, then Q̲(g) ≥ Q̲(h) + µ for all g, h ∈ ℒ and µ ∈ ℝ
Mixed sub/super-additivity: Q̲(g + h) ≤ Q̲(g) + Q̄(h) ≤ Q̄(g + h) for all g, h ∈ ℒ
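These properties are easy to spot-check when the lower and upper previsions arise as lower and upper envelopes of expectations over a finite set of probability mass functions. A sketch with an arbitrary three-element credal set over {B, ¬B}; the pmfs and gambles are made-up examples:

```python
credal = [(0.9, 0.1), (0.8, 0.2), (0.6, 0.4)]  # example pmfs (q_B, q_notB)

def lo(g):  # lower prevision as lower envelope of the expectations
    return min(q[0] * g[0] + q[1] * g[1] for q in credal)

def up(g):  # upper prevision as upper envelope (conjugate: up(g) == -lo(-g))
    return max(q[0] * g[0] + q[1] * g[1] for q in credal)

g, h = (1.0, -5.0), (2.0, 3.0)
s = (g[0] + h[0], g[1] + h[1])              # the sum gamble g + h
assert up(g) >= lo(g)                       # upper dominates lower
assert lo(s) >= lo(g) + lo(h)               # super-linearity
assert lo(s) <= lo(g) + up(h) <= up(s)      # mixed sub/super-additivity
```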
I heard that imprecise probabilities are just sets of probabilities?
The probability simplex
A probability mass function on 𝒴 = {B, ¬B} is a pair q = (q_B, q_¬B) with q_B, q_¬B ≥ 0 and q_B + q_¬B = 1: the segment from (1, 0) to (0, 1), with midpoint (1/2, 1/2).
The prevision under q of a gamble is linear in q; for the indicator of ¬B: Q_q(1_¬B) = 0·q_B + 1·q_¬B = q_¬B.
Yoko Tsuno's assessment Q̲_Y(1_¬B) = 4/5 bounds this linear function from below, cutting the simplex down to ℳ_Y, the segment from (1/5, 4/5) to (0, 1): her CREDAL SET.
From lower previsions to credal sets
◮ The prevision of a gamble is a linear function over the probability simplex
◮ The lower prevision of a gamble can be seen as bounding the prevision of that gamble, so constraining the possible probability mass functions
◮ A lower prevision corresponds to a set of constraints, defining a credal set (closed convex set) ℳ = {q : Q_q(g) ≥ Q̲(g) for all assessed gambles g ∈ 𝒦}
◮ All this generalizes to infinite 𝒴
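For a binary possibility space, the credal set defined by one lower-probability constraint can be traced by discretising the simplex. A sketch using Yoko Tsuno's assessment from the earlier slides (lower probability 4/5 for "Belgium does not win"); the grid size is an arbitrary choice:

```python
n = 10000
simplex = [(i / n, 1 - i / n) for i in range(n + 1)]  # pmfs (q_B, q_notB)

# M_Y = {q : Q_q(1_notB) >= 4/5}, the constraint from the assessment
credal = [q for q in simplex if q[1] >= 4 / 5]

low_b = min(q[0] for q in credal)
up_b = max(q[0] for q in credal)
print(low_b, up_b)  # close to 0 and 1/5: the probability interval for "B wins"
```

The envelope of the credal set recovers the bounds from the agents table: P̲(B) = 0 and P̄(B) = 1/5.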
A larger probability simplex: P(enalties)
Adding a third outcome P(enalties) turns the simplex into a triangle with vertices (1, 0, 0), (0, 1, 0) and (0, 0, 1), and barycentre (1/3, 1/3, 1/3).
[Figure overlays, partly lost in extraction: assessments equating previsions across outcomes, Q(B) = Q(P) and Q(¬B) = Q(P), i.e. Q(1_B − 1_P) = 0 and Q(1_¬B − 1_P) = 0, carve out the pool's credal set ℳ_Pool, for which the lower probability of penalties is Q̲(P) = 1/3.]