

  1. Rational decisions (Chapter 16)

  2. Outline ♦ Rational preferences ♦ Utilities ♦ Money ♦ Multiattribute utilities ♦ Decision networks ♦ Value of information

  3. Preferences An agent chooses among prizes (A, B, etc.) and lotteries, i.e., situations with uncertain prizes. Lottery L = [p, A; (1 − p), B] yields prize A with probability p and prize B with probability 1 − p. Notation: A ≻ B (A preferred to B); A ∼ B (indifference between A and B); A ≿ B (B not preferred to A).

  4. Rational preferences Idea: preferences of a rational agent must obey constraints ⇒ rational preferences give behavior describable as maximization of expected utility. Constraints: Orderability (A ≻ B) ∨ (B ≻ A) ∨ (A ∼ B); Transitivity (A ≻ B) ∧ (B ≻ C) ⇒ (A ≻ C); Continuity A ≻ B ≻ C ⇒ ∃p [p, A; 1 − p, C] ∼ B; Substitutability A ∼ B ⇒ [p, A; 1 − p, C] ∼ [p, B; 1 − p, C]; Monotonicity A ≻ B ⇒ (p ≥ q ⇔ [p, A; 1 − p, B] ≿ [q, A; 1 − q, B]).

  5. Rational preferences contd. Violating the constraints leads to self-evident irrationality. For example, an agent with intransitive preferences A ≻ B ≻ C ≻ A can be induced to give away all its money: if B ≻ C, an agent who has C would pay (say) 1 cent to get B; if A ≻ B, an agent who has B would pay 1 cent to get A; if C ≻ A, an agent who has A would pay 1 cent to get C, and the cycle repeats indefinitely.
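The money-pump argument can be sketched as a tiny simulation (the prize names and the 1-cent swap fee are illustrative, not from the slides):

```python
# Money pump: an agent with the cyclic preferences A > B > C > A
# pays 1 cent per swap and cycles forever, losing money each round.

# Cyclic preference: maps currently held prize -> prize it will pay to get.
upgrade = {"C": "B", "B": "A", "A": "C"}

def pump(holding, cents, rounds):
    """Run `rounds` swaps; return remaining cents."""
    for _ in range(rounds):
        holding = upgrade[holding]  # pay 1 cent for the "better" prize
        cents -= 1
    return cents

print(pump("C", 100, 100))  # the agent is drained: 0 cents left
```

After 100 swaps the agent holds a prize it started with (the cycle has length 3) but has paid out its entire dollar.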

  6. Maximizing expected utility Theorem (Ramsey, 1931; von Neumann and Morgenstern, 1944): Given preferences satisfying the constraints, there exists a real-valued function U such that U(A) ≥ U(B) ⇔ A ≿ B and U([p1, S1; . . . ; pn, Sn]) = Σi pi U(Si). MEU principle: Choose the action that maximizes expected utility. Note: an agent can be entirely rational (consistent with MEU) without ever representing or manipulating utilities and probabilities, e.g., a lookup table for perfect tic-tac-toe.
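A minimal sketch of the MEU principle; the two actions, their lotteries, and the risk-neutral utility are hypothetical examples, not from the slides:

```python
# Expected utility of a lottery [p1, S1; ...; pn, Sn], and choosing
# the action that maximizes it (the MEU principle).

def expected_utility(lottery, U):
    """lottery: list of (probability, outcome) pairs; U: utility function."""
    return sum(p * U(s) for p, s in lottery)

# Hypothetical example: two actions, each inducing a lottery over cash outcomes.
U = lambda cash: cash  # risk-neutral utility, for illustration only
actions = {
    "safe":   [(1.0, 40)],
    "gamble": [(0.5, 100), (0.5, 0)],
}
best = max(actions, key=lambda a: expected_utility(actions[a], U))
print(best)  # "gamble": EU = 50 > 40
```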

  7. Utilities Utilities map states to real numbers. Which numbers? Standard approach to assessment of human utilities: compare a given state A to a standard lottery Lp that has "best possible prize" u⊤ with probability p and "worst possible catastrophe" u⊥ with probability (1 − p); adjust lottery probability p until A ∼ Lp. E.g., pay $30 ∼ [0.999999, continue as before; 0.000001, instant death].
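The "adjust p until indifference" procedure can be sketched as a bisection against a subject's answers. Here the subject is simulated with a hidden utility value; with normalized utilities (U(u⊤) = 1, U(u⊥) = 0) the indifference probability equals U(A):

```python
# Assess U(A) with a standard lottery: find p such that
# A ~ [p, best; 1 - p, worst].  With normalized utilities the
# indifference probability p equals U(A).

def assess(prefers_lottery, tol=1e-6):
    """prefers_lottery(p) -> True if the subject prefers the lottery
    [p, best; 1 - p, worst] to the fixed state A.  Bisect on p."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_lottery(mid):
            hi = mid          # lottery too attractive: lower p
        else:
            lo = mid          # state A preferred: raise p
    return (lo + hi) / 2

# Simulated subject whose hidden utility for state A is 0.3:
hidden_U_A = 0.3
p = assess(lambda p: p > hidden_U_A)
print(round(p, 3))  # converges to ~0.3
```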

  8. Utility scales Normalized utilities: u⊤ = 1.0, u⊥ = 0.0. Micromorts: one-millionth chance of death; useful for Russian roulette, paying to reduce product risks, etc. QALYs: quality-adjusted life years; useful for medical decisions involving substantial risk. Note: behavior is invariant w.r.t. positive linear transformation U′(x) = k1 U(x) + k2 where k1 > 0. With deterministic prizes only (no lottery choices), only ordinal utility can be determined, i.e., a total order on prizes.
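The invariance claim can be checked directly: a positive linear transform rescales every lottery's expected utility by the same k1 and k2, so the ranking of lotteries is unchanged. The two lotteries and the square-root utility below are illustrative:

```python
# Behavior is invariant under positive linear transformation of utility:
# U'(x) = k1*U(x) + k2 with k1 > 0 ranks every lottery the same way,
# since E[U'] = k1*E[U] + k2.

def expected_utility(lottery, U):
    return sum(p * U(s) for p, s in lottery)

lotteries = {
    "L1": [(0.8, 10), (0.2, 0)],
    "L2": [(0.5, 20), (0.5, 1)],
}
U = lambda x: x ** 0.5            # some utility function (illustrative)
U2 = lambda x: 3 * U(x) + 7       # positive linear transform (k1=3, k2=7)

rank = sorted(lotteries, key=lambda L: expected_utility(lotteries[L], U))
rank2 = sorted(lotteries, key=lambda L: expected_utility(lotteries[L], U2))
print(rank == rank2)  # True: same ordering under both utilities
```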

  9. Money Money does not behave as a utility function. Given a lottery L with expected monetary value EMV(L), usually U(L) < U(EMV(L)), i.e., people are risk-averse. Utility curve: for what probability p am I indifferent between a prize x and a lottery [p, $M; (1 − p), $0] for large M? [Plot: typical empirical utility-of-money curve over roughly −$150,000 to +$800,000, extrapolated with risk-prone behavior for large losses]
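Risk aversion can be made concrete with a certainty equivalent: under a concave utility, the cash amount whose utility equals the lottery's expected utility is below the EMV. The log utility and the 50/50 lottery are illustrative assumptions:

```python
import math

# Risk aversion: for a concave utility (log, as an illustration), the
# certainty equivalent of a lottery is below its expected monetary value.

def expected_utility(lottery, U):
    return sum(p * U(x) for p, x in lottery)

U = lambda wealth: math.log(wealth)          # illustrative concave utility
L = [(0.5, 100_000), (0.5, 400_000)]         # hypothetical 50/50 lottery

emv = sum(p * x for p, x in L)               # expected monetary value: 250,000
ce = math.exp(expected_utility(L, U))        # certainty equivalent = U^-1(E[U])
print(emv, round(ce))  # CE = 200,000 < EMV = 250,000
```

With log utility the certainty equivalent of a 50/50 lottery is the geometric mean of the prizes, which is always at most the arithmetic mean (the EMV).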

  10. Student group utility For each x, adjust p until half the class votes for the lottery (M = 10,000). [Blank chart for class exercise: p axis from 0.0 to 1.0, $x axis from 0 to 10,000]

  11. Decision networks Add action nodes and utility nodes to belief networks to enable rational decision making. [Diagram: airport-siting decision network with action node Airport Site, chance nodes Air Traffic, Litigation, Construction, Deaths, Noise, Cost, and utility node U] Algorithm: For each value of the action node, compute the expected value of the utility node given the action and evidence; return the MEU action.
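The algorithm on the slide can be sketched for a toy model; the two sites, the outcome distributions, and the utilities are all hypothetical stand-ins for a real belief-network inference step:

```python
# Sketch of the decision-network algorithm: for each value of the action
# node, compute the expected utility given that action, return MEU action.
# P(outcome | action) is hand-coded here; in a real decision network it
# would come from inference in the belief network given the evidence.

P = {
    "site_A": {"low_noise": 0.7, "high_noise": 0.3},
    "site_B": {"low_noise": 0.4, "high_noise": 0.6},
}
utility = {"low_noise": 100, "high_noise": 20}  # utility node, illustrative

def meu_action(actions):
    def eu(a):
        return sum(P[a][o] * utility[o] for o in P[a])
    return max(actions, key=eu)

print(meu_action(["site_A", "site_B"]))  # site_A: EU 76 vs 52
```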

  12. Multiattribute utility How can we handle utility functions of many variables X1 . . . Xn? E.g., what is U(Deaths, Noise, Cost)? How can complex utility functions be assessed from preference behaviour? Idea 1: identify conditions under which decisions can be made without complete identification of U(x1, . . . , xn). Idea 2: identify various types of independence in preferences and derive consequent canonical forms for U(x1, . . . , xn).

  13. Strict dominance Typically define attributes such that U is monotonic in each. Strict dominance: choice B strictly dominates choice A iff ∀i Xi(B) ≥ Xi(A) (and hence U(B) ≥ U(A)). [Plots: dominance regions in attribute space (X1, X2), for deterministic and for uncertain attributes] Strict dominance seldom holds in practice.
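The dominance test is a per-attribute comparison; the choices and their attribute scores below are hypothetical (attributes oriented so that higher is better, as the slide assumes):

```python
# Strict dominance check: B dominates A iff B is at least as good on
# every attribute (U monotonically increasing in each) and strictly
# better on at least one.

def dominates(B, A):
    return (all(b >= a for b, a in zip(B, A))
            and any(b > a for b, a in zip(B, A)))

# Hypothetical choices scored on (negated cost, safety, quiet):
choices = {"A": (1, 2, 3), "B": (2, 2, 4), "C": (0, 5, 1)}
print(dominates(choices["B"], choices["A"]))  # True: B dominates A
print(dominates(choices["C"], choices["A"]))  # False: C and A incomparable
```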

  14. Stochastic dominance [Plots: probability density and cumulative distribution of negative cost for sites S1 and S2] Distribution p1 stochastically dominates distribution p2 iff ∀t ∫_{−∞}^{t} p1(x) dx ≤ ∫_{−∞}^{t} p2(x) dx. If U is monotonic in x, then A1 with outcome distribution p1 stochastically dominating A2 with outcome distribution p2 implies ∫_{−∞}^{∞} p1(x) U(x) dx ≥ ∫_{−∞}^{∞} p2(x) U(x) dx. Multiattribute case: stochastic dominance on all attributes ⇒ optimal.
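On a discrete grid the CDF condition is a pointwise comparison of cumulative sums; the two cost distributions below are hypothetical:

```python
from itertools import accumulate

# First-order stochastic dominance on a discrete grid: p1 dominates p2
# iff the CDF of p1 is everywhere <= the CDF of p2.

def stochastically_dominates(p1, p2):
    """p1, p2: probability masses on the same ascending grid of values."""
    c1, c2 = list(accumulate(p1)), list(accumulate(p2))
    return all(a <= b + 1e-12 for a, b in zip(c1, c2))

# Hypothetical negative-cost distributions for two sites on the same grid:
p1 = [0.1, 0.2, 0.3, 0.4]   # mass shifted toward higher (better) values
p2 = [0.4, 0.3, 0.2, 0.1]
print(stochastically_dominates(p1, p2))  # True
print(stochastically_dominates(p2, p1))  # False
```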

  15. Stochastic dominance contd. Stochastic dominance can often be determined without exact distributions using qualitative reasoning. E.g., construction cost increases with distance from city; S1 is closer to the city than S2 ⇒ S1 stochastically dominates S2 on cost. E.g., injury increases with collision speed. Can annotate belief networks with stochastic dominance information: X →⁺ Y (X positively influences Y) means that for every value z of Y's other parents Z, ∀x1, x2  x1 ≥ x2 ⇒ P(Y | x1, z) stochastically dominates P(Y | x2, z).

  16.–21. Label the arcs + or − [Exercise diagram, repeated with progressively more answers filled in: car-insurance belief network with nodes SocioEcon, Age, GoodStudent, ExtraCar, Mileage, RiskAversion, VehicleYear, SeniorTrain, DrivingSkill, MakeModel, DrivingHist, Antilock, DrivQuality, AntiTheft, HomeBase, CarValue, Airbag, Accident, Ruggedness, Theft, OwnDamage, Cushioning, OwnCost, OtherCost, MedicalCost, LiabilityCost, PropertyCost; slides 17–21 incrementally add + and − labels to the arcs]

  22. Preference structure: Deterministic X1 and X2 preferentially independent of X3 iff preference between ⟨x1, x2, x3⟩ and ⟨x1′, x2′, x3⟩ does not depend on x3. E.g., ⟨Noise, Cost, Safety⟩: ⟨20,000 suffer, $4.6 billion, 0.06 deaths/mpm⟩ vs. ⟨70,000 suffer, $4.2 billion, 0.06 deaths/mpm⟩. Theorem (Leontief, 1947): if every pair of attributes is P.I. of its complement, then every subset of attributes is P.I. of its complement: mutual P.I. Theorem (Debreu, 1960): mutual P.I. ⇒ ∃ additive value function V(S) = Σi Vi(Xi(S)). Hence assess n single-attribute functions; often a good approximation.
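The additive form can be sketched directly; the three single-attribute value functions and their scalings below are hypothetical, chosen only so that lower noise, cost, and deaths score higher:

```python
# Additive value function under mutual preferential independence:
# V(S) = sum_i V_i(X_i(S)).  The per-attribute value functions are
# illustrative assumptions, not assessed from real preferences.

value_fns = {
    "noise":  lambda people:   -0.001 * people,
    "cost":   lambda billions: -10.0 * billions,
    "deaths": lambda per_mpm:  -1000.0 * per_mpm,
}

def additive_value(state):
    return sum(value_fns[attr](x) for attr, x in state.items())

s1 = {"noise": 20_000, "cost": 4.6, "deaths": 0.06}
s2 = {"noise": 70_000, "cost": 4.2, "deaths": 0.06}
print(additive_value(s1) > additive_value(s2))  # True: s1 preferred
```

Only n one-dimensional functions need assessing, instead of one n-dimensional function.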

  23. Preference structure: Stochastic Need to consider preferences over lotteries: X is utility-independent of Y iff preferences over lotteries in X do not depend on y. Mutual U.I.: each subset is U.I. of its complement ⇒ ∃ multiplicative utility function: U = k1 U1 + k2 U2 + k3 U3 + k1 k2 U1 U2 + k2 k3 U2 U3 + k3 k1 U3 U1 + k1 k2 k3 U1 U2 U3. Routine procedures and software packages exist for generating preference tests to identify various canonical families of utility functions.
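The three-attribute multiplicative form on the slide, written out term by term; the coefficients and sub-utility values are illustrative placeholders:

```python
# Three-attribute multiplicative utility (mutual utility independence):
# U = k1 U1 + k2 U2 + k3 U3 + k1 k2 U1 U2 + k2 k3 U2 U3 + k3 k1 U3 U1
#     + k1 k2 k3 U1 U2 U3.  Coefficients k_i and sub-utilities U_i are
# hypothetical; in practice both come from preference tests.

def multiplicative_utility(k, u):
    k1, k2, k3 = k
    u1, u2, u3 = u
    return (k1 * u1 + k2 * u2 + k3 * u3
            + k1 * k2 * u1 * u2 + k2 * k3 * u2 * u3 + k3 * k1 * u3 * u1
            + k1 * k2 * k3 * u1 * u2 * u3)

# With all sub-utilities at their best value 1.0:
print(multiplicative_utility((0.5, 0.3, 0.2), (1.0, 1.0, 1.0)))  # ≈ 1.34
```

Only the n sub-utilities and n constants need assessing, rather than an arbitrary function of n attributes.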
