Aviation Infrastructure Economics - PowerPoint Presentation


  1. Aviation Short Course: Aviation Infrastructure Economics. October 14-15, 2004. The Aerospace Center Building, 901 D St. SW, Suite 850, Washington, DC 20024. Lecture: BWI/Andrews Conference Rooms. Instructor: Jasenka Rakas, University of California, Berkeley.

  2. Aviation Short Course. Introduction to Optimization Techniques for Infrastructure Management. Application: Markov Decision Processes for Infrastructure Management, Maintenance and Rehabilitation. October 15, 2004. Instructor: Jasenka Rakas, University of California, Berkeley.

  3. Background: Relevant NAS Measures of Performance and their Relations.

  4. Background. [Figure: block diagram relating the maintenance cost-center model, the service model, and the airport model. Key question: can airspace users gain extra benefits from our maintenance actions?]
     Cost-center inputs: staffing; sparing; probability distributions for equipment MTBF; type of failure (scheduled or unscheduled); travel time; shift policies; administrative time; technician qualifications.
     Service description: equipment making up a service; redundancy.
     Service output measures: availability; technician utilization; module availability; outage durations.
     Airport model inputs (airport characteristics): aircraft mix; aircraft class; % weather (VFR and IFR); final approach path geometry; holding pattern; number of runways; aircraft arrival demand; sequencing rule; mile-in-trail separation matrices; runway occupancy time.
     Airport model output measures: speed; capacity; aircraft delay; runway utilization; final approach path statistics; aircraft queue statistics.

  5. Models for the National Airspace System Infrastructure Performance and Investment Analysis. October 15, 2004. Jasenka Rakas, University of California at Berkeley.

  6. Constrained Optimization for a Steady-State Maintenance, Repair & Rehabilitation (MR&R) Policy. The objective is to apply a constrained optimization model to an optimal steady-state NAS infrastructure management problem, focusing on terminal airspace/runway navigational equipment. The Markov decision process is reduced to a linear programming formulation to determine the optimal policy.

  7. Literature Review. Review of special types of linear programming problems: • transportation problem • transshipment problem • assignment problem. Review of dynamic programming (a mathematical technique often useful for making a sequence of interrelated decisions): • deterministic • probabilistic.

  8. Literature Review. Review of inventory theory: • components • deterministic models • stochastic models. Review of Markov decision processes: • Markov decision models • linear programming and optimal policies • policy-improvement algorithms for finding optimal policies.

  9. Methodology: Markov Decision Processes. Decision and state cost table, where C_d is the expected cost due to traffic delays, C_m is the maintenance cost, and the total cost is C_t = C_d + C_m. States: 0 = good as new; 1 = operable, minor deterioration; 2 = operable, major deterioration; 3 = inoperable.
     Decision 1, leave the ASR as it is: state 0: C_d = $0, C_m = $0, C_t = $0. State 1: C_d = $1,000,000 (for example), C_m = $0, C_t = $1,000,000. State 2: C_d = $6,000,000, C_m = $0, C_t = $6,000,000. State 3: C_d = $20,000,000, C_m = $0, C_t = $20,000,000.
     Decision 2, maintenance: state 0: C_d = $0 if scheduled, $X2 otherwise; C_m = $A2 if scheduled, $B2 otherwise. State 1: C_d = $0 if scheduled, $Y2 otherwise; C_m = $C2 if scheduled, $D2 otherwise. State 2: C_d = $0 if scheduled, $Z2 otherwise; C_m = $E2 if scheduled, $F2 otherwise. State 3: C_d = $M2 if scheduled, $N2 otherwise; C_m = $G2 if scheduled, $H2 otherwise.
     Decision 3, replace: state 0: C_d = $0 if scheduled, $X3 otherwise; C_m = $A3 if scheduled, $B3 otherwise. State 1: C_d = $0 if scheduled, $Y3 otherwise; C_m = $C3 if scheduled, $D3 otherwise. State 2: C_d = $0 if scheduled, $Z3 otherwise; C_m = $E3 if scheduled, $F3 otherwise. State 3: C_d = $M3 if scheduled, $N3 otherwise; C_m = $G3 if scheduled, $H3 otherwise.
     Decision 4, upgrade: state 0: C_d = $0 if scheduled, $X4 otherwise; C_m = $A4 if scheduled, $B4 otherwise. State 1: C_d = $0 if scheduled, $Y4 otherwise; C_m = $C4 if scheduled, $D4 otherwise. State 2: C_d = $0 if scheduled, $Z4 otherwise; C_m = $E4 if scheduled, $F4 otherwise. State 3: C_d = $M4 if scheduled, $N4 otherwise; C_m = $G4 if scheduled, $H4 otherwise.
     In every row, C_t = C_d + C_m.
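A minimal sketch (not from the slides) of how this cost structure can be encoded, using the illustrative dollar figures from decision 1; the state numbering follows the table above:

```python
# Hedged sketch: the example C_d and C_m values for decision 1
# ("leave the ASR as it is"); states 0..3 as defined on this slide.
DELAY_COST = {0: 0, 1: 1_000_000, 2: 6_000_000, 3: 20_000_000}  # C_d
MAINT_COST = {0: 0, 1: 0, 2: 0, 3: 0}                           # C_m: no action taken

def total_cost(state: int) -> int:
    """Total expected cost C_t = C_d + C_m under decision 1."""
    return DELAY_COST[state] + MAINT_COST[state]

print(total_cost(2))  # -> 6000000
```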

  10. Methodology: Markov Decision Processes. Markov decision process theory studies the sequential optimization of discrete-time random systems. The basic object is a discrete-time random system whose transition mechanism can be controlled over time. Each control policy defines the random process and the values of the objective functions associated with this process. The goal is to select a "good" control policy.
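To make this concrete, here is a minimal sketch of a discrete-time controlled Markov chain; the transition matrices and the policy are hypothetical illustrations, not values from the course:

```python
import random

# Each action has its own transition matrix (rows sum to 1). Fixing a
# policy (a state -> action map) fixes the random process being studied.
P = {
    "leave":   [[0.7, 0.2, 0.08, 0.02],
                [0.0, 0.6, 0.30, 0.10],
                [0.0, 0.0, 0.50, 0.50],
                [0.0, 0.0, 0.00, 1.00]],
    "replace": [[1.0, 0.0, 0.0, 0.0]] * 4,   # replacement returns state 0
}
policy = {0: "leave", 1: "leave", 2: "replace", 3: "replace"}

def step(state: int) -> int:
    """One annual transition under the fixed control policy."""
    return random.choices(range(4), weights=P[policy[state]][state])[0]

state = 0
for _ in range(10):   # simulate ten annual transitions
    state = step(state)
```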

  11. Methodology: Markov Decision Processes. Interrupt condition codes, entry types, and log causes.
     Interrupt condition codes: 60 scheduled periodic maintenance; 61 scheduled commercial lines; 62 scheduled improvements; 63 scheduled flight inspection; 64 scheduled administrative; 65 scheduled corrective maintenance; 66 scheduled periodic software maintenance; 67 scheduled corrective software maintenance; 68 scheduled related outage; 69 scheduled other; 80 unscheduled periodic maintenance; 81 unscheduled commercial lines; 82 unscheduled prime power; 83 unscheduled standby power; 84 unscheduled interface condition; 85 unscheduled weather effects; 86 unscheduled software; 87 unscheduled unknown; 88 unscheduled related outage; 89 unscheduled other.
     Entry type codes: FL, full outage; RS, reduced service; RE, like reduced service, but no longer used.
     Log causes: LIR, log interrupt condition; LCM, log corrective maintenance; LPM, log preventative maintenance; LEM, log equipment upgrade; related outage logs.
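A small hedged helper (my illustration, not from the slides) showing how these codes map onto the scheduled/unscheduled distinction that drives the cost split in the table on slide 9:

```python
# Scheduled interrupt conditions are coded 60-69, unscheduled 80-89,
# per the code list above.
def is_scheduled(code: int) -> bool:
    if 60 <= code <= 69:
        return True
    if 80 <= code <= 89:
        return False
    raise ValueError(f"unknown interrupt-condition code: {code}")
```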

  12. Markov Decision Process: Linear Programming and Optimal Policies. General formulation. $C_{ik}$: expected cost incurred during the next transition if the system is in state $i$ and decision $k$ is made. $y_{ik}$: steady-state unconditional probability that the system is in state $i$ and decision $k$ is made, i.e. $y_{ik} = P\{\text{state} = i \text{ and decision} = k\}$.

  13. Markov Decision Process: Linear Programming and Optimal Policies. General formulation:
     Minimize $\sum_{i=0}^{M} \sum_{k=1}^{K} C_{ik} \, y_{ik}$
     subject to the constraints
     (1) $\sum_{i=0}^{M} \sum_{k=1}^{K} y_{ik} = 1$
     (2) $\sum_{k=1}^{K} y_{jk} - \sum_{i=0}^{M} \sum_{k=1}^{K} y_{ik} \, p_{ij}(k) = 0$, for $j = 0, 1, \ldots, M$
     (3) $y_{ik} \geq 0$, for $i = 0, 1, \ldots, M$; $k = 1, 2, \ldots, K$
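A hedged sketch of this LP with made-up data (4 states, 2 decisions; the costs and transition probabilities are illustrative, and scipy is assumed to be available). The objective and constraints (1)-(3) are exactly those above:

```python
import numpy as np
from scipy.optimize import linprog

M1, K = 4, 2                                     # M+1 states, K decisions
C = np.array([[0.0, 5.0],                        # C[i][k], illustrative $millions
              [1.0, 5.0],
              [6.0, 5.0],
              [20.0, 5.0]])
P = np.zeros((K, M1, M1))                        # P[k][i][j] = p_ij(k)
P[0] = [[0.7, 0.2, 0.08, 0.02],                  # decision 1: leave as is
        [0.0, 0.6, 0.30, 0.10],
        [0.0, 0.0, 0.50, 0.50],
        [0.0, 0.0, 0.00, 1.00]]
P[1] = np.tile([1.0, 0.0, 0.0, 0.0], (M1, 1))    # decision 2: replace -> state 0

idx = lambda i, k: i * K + k                     # flatten y_ik into a vector
c = C.flatten()

A_eq = [np.ones(M1 * K)]                         # constraint (1): sum y_ik = 1
b_eq = [1.0]
for j in range(M1):                              # constraint (2), one row per j
    row = np.zeros(M1 * K)
    for k in range(K):
        row[idx(j, k)] += 1.0
        for i in range(M1):
            row[idx(i, k)] -= P[k, i, j]
    A_eq.append(row)
    b_eq.append(0.0)

# bounds=(0, None) is constraint (3); one balance row is redundant with
# the normalization, which the default HiGHS solver tolerates.
res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
y = res.x.reshape(M1, K)
print(y)
```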

  14. Conditional probability that decision $k$ is made, given that the system is in state $i$: $D_{ik} = P\{\text{decision} = k \mid \text{state} = i\}$. Decisions $k$ index the columns and states $i$ the rows of the policy matrix
     $D = \begin{pmatrix} D_{01} & D_{02} & \ldots & D_{0K} \\ D_{11} & D_{12} & \ldots & D_{1K} \\ \vdots & \vdots & & \vdots \\ D_{M1} & D_{M2} & \ldots & D_{MK} \end{pmatrix}$
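Although the slide does not spell it out, the standard way to recover the policy from an optimal solution of the LP on slide 13 is

$D_{ik} = \dfrac{y_{ik}}{\sum_{k'=1}^{K} y_{ik'}}$, for each state $i$ with $\sum_{k'} y_{ik'} > 0$,

and at a basic optimal solution each such state has exactly one $k$ with $y_{ik} > 0$, so the resulting optimal policy is deterministic ($D_{ik} \in \{0, 1\}$).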

  15. Markov Decision Process: Linear Programming and Optimal Policies. Assumptions: • network-level problem with a non-homogeneous network (contribution). Dynamic programming (DP) is used for single-facility problems; linear programming (LP) is used for network-level problems.

  16. Markov Decision Process: Linear Programming and Optimal Policies. Assumptions: • deterioration process: constant over the planning horizon • inspections: reveal the true condition and are performed at the beginning of every year for all facilities.

  17. Specific Problem. Markov Decision Process: Linear Programming and Optimal Policies. Transition probability matrix: $P(k \mid i, a)$ is the element of the matrix giving the probability of equipment $j$ being in state $k$ in the next year, given that it is in state $i$ in the current year and action $a$ is taken.
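A hedged sketch of such a matrix for one equipment item $j$ (the numbers are hypothetical; the only structural requirement is that each row is a probability distribution over next-year states):

```python
import numpy as np

# P_j[a][i][k] = P(k | i, a) for equipment item j, with 4 conditions
# and 2 actions, all values illustrative.
P_j = np.array([
    [[0.80, 0.15, 0.04, 0.01],    # action 0: do nothing
     [0.00, 0.70, 0.20, 0.10],
     [0.00, 0.00, 0.60, 0.40],
     [0.00, 0.00, 0.00, 1.00]],
    [[1.0, 0.0, 0.0, 0.0],        # action 1: replace -> condition 0
     [1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0],
     [1.0, 0.0, 0.0, 0.0]],
])
assert np.allclose(P_j.sum(axis=2), 1.0)   # every (a, i) row sums to 1
```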

  18. Specific Problem. Data (note: $i$ is a condition, $j$ is an equipment item, $a$ is an action): the cost $C_{iaj}$ of equipment $j$ in condition $i$ when action $a$ is employed; the user cost $U$, calculated from the overall condition of the airport; and $\text{Budget}_j$, the budget for equipment $j$.

  19. Specific Problem. Decision variable: $W_{iaj}$, the fraction of equipment $j$ in condition $i$ when action $a$ is taken. Note that some equipment types have only one or two items per type; therefore, some $W_{iaj}$ are equal to 1.

  20. Specific Problem. Objective function: minimize the total (long-term) cost per year:
     Minimize $\sum_{i} \sum_{a} \sum_{j} \left[ C(i, a, j) \times W_{iaj} \right] + U(f(A, \eta, \text{pax-cost}))$

  21. Specific Problem. $W_{iaj}$: fraction of equipment $j$ in condition $i$ when action $a$ is taken. Constraint (1), mass conservation: to ensure that mass conservation holds, the sum of all fractions must equal 1:
     $\sum_{i} \sum_{a} W_{iaj} = 1, \quad \forall j$

  22. Specific Problem. $C_{iaj}$: cost of equipment $j$ in condition $i$ when action $a$ is employed. $U$: user cost; $A$: airport service availability; $\eta$: passenger load (per aircraft); together with pax-cost these enter the user-cost term of the objective:
     Minimize $\sum_{i} \sum_{a} \sum_{j} \left[ C(i, a, j) \times W_{iaj} \right] + U(f(A, \eta, \text{pax-cost}))$

  23. Specific Problem. Constraint (2): all fractions are nonnegative:
     $W_{iaj} \geq 0, \quad \forall i, \forall a$
     Constraint (3): a steady-state constraint is added to ensure that the Chapman-Kolmogorov equation holds:
     $\sum_{a} W_{kaj} = \sum_{i} \sum_{a} W_{iaj} \, P(k \mid i, a), \quad \forall j \text{ (and each state } k)$

  24. Specific Problem. Constraint (4): added to ensure that less than 0.1 of the equipment is in the worst state:
     $\sum_{a} W_{3aj} < 0.1$
     Constraint (5): added to ensure that more than 0.3 of the equipment is in the best state:
     $\sum_{a} W_{1aj} > 0.3$
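Putting slides 17-24 together, here is a hedged sketch of the facility-level LP for a single equipment type $j$, with hypothetical costs and transition probabilities. It covers only the linear parts of the model: the $C(i,a,j) \cdot W_{iaj}$ term of the objective and constraints (1)-(5); the user-cost term $U(f(A, \eta, \text{pax-cost}))$ is omitted because the slides give it no closed form. Since LP solvers accept only non-strict inequalities, (4) and (5) are encoded as $\leq 0.1$ and $\geq 0.3$; the best state is taken to be condition 0 (good as new) in this numbering:

```python
import numpy as np
from scipy.optimize import linprog

S, A = 4, 2                      # conditions 0 (best) .. 3 (worst); 2 actions
BEST, WORST = 0, 3
C = np.array([[0.0, 5.0],        # C[i][a], illustrative $millions
              [1.0, 5.0],
              [6.0, 5.0],
              [20.0, 5.0]])
P = np.zeros((A, S, S))          # P[a][i][k] = P(k | i, a)
P[0] = [[0.8, 0.15, 0.04, 0.01],
        [0.0, 0.70, 0.20, 0.10],
        [0.0, 0.00, 0.60, 0.40],
        [0.0, 0.00, 0.00, 1.00]]
P[1] = np.tile([1.0, 0.0, 0.0, 0.0], (S, 1))

idx = lambda i, a: i * A + a     # flatten W_ia into a vector
c = C.flatten()

A_eq, b_eq = [np.ones(S * A)], [1.0]            # (1) mass conservation
for k in range(S):                              # (3) steady state, each k
    row = np.zeros(S * A)
    for a in range(A):
        row[idx(k, a)] += 1.0
        for i in range(S):
            row[idx(i, a)] -= P[a, i, k]
    A_eq.append(row); b_eq.append(0.0)

A_ub, b_ub = [], []
worst = np.zeros(S * A)
worst[[idx(WORST, a) for a in range(A)]] = 1.0
A_ub.append(worst); b_ub.append(0.1)            # (4) worst state <= 0.1
best = np.zeros(S * A)
best[[idx(BEST, a) for a in range(A)]] = -1.0
A_ub.append(best); b_ub.append(-0.3)            # (5) best state >= 0.3

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=(0, None))                 # (2) W >= 0
W = res.x.reshape(S, A)
print(W)
```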
