Solving stochastic dynamic programming models without transition matrices
Paul L. Fackler
Department of Agricultural & Applied Economics and Department of Applied Ecology
North Carolina State University
Computational Sustainability Seminar, Nov. 3, 2017
Outline
- Brief review of dynamic programming
  - curses of dimensionality
  - index vectors
  - DP algorithms
- Expected Value (EV) functions
  - staged models
  - models with deterministic post-action states
- Factored models
  - factored models & conditional independence
  - evaluation of EV functions
- Results for two spatial models
  - dynamic reserve site selection
  - control of an invasive species on a spatial network
- Models with transition functions and random noise
- Wrap-up
Dynamic Programming Problems
Given state values S, action values A, a reward function R(S, A), a state transition probability matrix P(S⁺|S, A) and a discount factor δ, solve

  V(S) = max_{A(S)} Σ_{t=0}^∞ δ^t E[R(S_t, A(S_t))]

Equivalently, solve Bellman's equation:

  V(S) = max_{A(S)} R(S, A(S)) + δ Σ_{S⁺} P(S⁺|S, A(S)) V(S⁺)

Find the strategy A(S) that maximizes the current reward R plus the discount factor δ times the expected future value Σ_{S⁺} P(S⁺|S, A) V(S⁺).
Curses of dimensionality
Problem size grows exponentially with increases in the number of variables. Powell discusses 3 curses:
- growth in the state space
- growth in the action space
- growth in the outcome space
In discrete models we write the size of the state space as n_s and the size of the state/action space as n_x; the state transition probability matrix is then n_s × n_x.
The focus here is on problems for which vectors of size n_x can be stored and manipulated but matrices of size n_s × n_x are problematic. Thus the focus is on moderately sized problems. By having techniques to solve moderately sized problems we can gain insight into the quality of heuristic or approximate methods that must be used for large problems.
Index Vectors
Vectors composed of positive integers, used for extraction, expansion and shuffling. Let:

  B = [1 0    C = [1 0 0
       1 1         1 0 1
       2 0         1 1 0
       2 1         1 1 1
       3 0         2 0 0
       3 1]        2 0 1
                   2 1 0
                   2 1 1
                   3 0 0
                   3 0 1
                   3 1 0
                   3 1 1]

J = [5 6 7 8] extracts the rows of C with the first column equal to 2: C(J, 1) = 2
J = [1 1 2 2 3 3 4 4 5 5 6 6] expands B so B(J, :) = C(:, [1 2])
J = [1 2 1 2 3 4 3 4 5 6 5 6] expands B so B(J, :) = C(:, [1 3])
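A minimal sketch of these three operations, assuming Python/NumPy (the slide's notation is MATLAB-style and 1-based; NumPy indexing is 0-based):

```python
import numpy as np

# The matrices B and C from the slide.
B = np.array([[1, 0], [1, 1], [2, 0], [2, 1], [3, 0], [3, 1]])
C = np.array([[a, s1, s2] for a in (1, 2, 3) for s1 in (0, 1) for s2 in (0, 1)])

# Extraction: rows of C whose first column equals 2 (1-based J = [5 6 7 8]).
J = np.array([5, 6, 7, 8]) - 1
assert (C[J, 0] == 2).all()

# Expansion: repeat rows of B so that B(J, :) reproduces columns [1 2] of C.
J = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]) - 1
assert (B[J, :] == C[:, [0, 1]]).all()

# Shuffled expansion: B(J, :) reproduces columns [1 3] of C.
J = np.array([1, 2, 1, 2, 3, 4, 3, 4, 5, 6, 5, 6]) - 1
assert (B[J, :] == C[:, [0, 2]]).all()
```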
Dynamic Programming with Index Vectors
Consider a DP model with 2 state variables, each binary, and 3 possible actions. S lists all possible states and the matrix X lists all possible state/action combinations:

  S = [0 0    X = [1 0 0
       0 1         1 0 1
       1 0         1 1 0
       1 1]        1 1 1
                   2 0 0
                   2 0 1
                   2 1 0
                   2 1 1
                   3 0 0
                   3 0 1
                   3 1 0
                   3 1 1]

Column 1 of X is the action and columns 2 and 3 are the 2 states. The expansion index vector that gives the state in each row of X is J_x = [1 2 3 4 1 2 3 4 1 2 3 4]. This expands S so S(J_x, :) = X(:, [2 3]).
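A sketch of how S, X and J_x can be enumerated, continuing in NumPy (the construction is illustrative, chosen to match the row order on the slide):

```python
import numpy as np

S = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # the 4 states
n_a, n_s = 3, S.shape[0]

# X: one row per state/action pair, action in column 1, states in columns 2-3.
X = np.column_stack([np.repeat(np.arange(1, n_a + 1), n_s),  # 1 1 1 1 2 2 ...
                     np.tile(S, (n_a, 1))])

# J_x maps each row of X to its state (0-based form of [1 2 3 4 1 2 3 4 ...]).
J_x = np.tile(np.arange(n_s), n_a)
assert (S[J_x, :] == X[:, [1, 2]]).all()
```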
Strategies as Index Vectors
A strategy can be specified as an extraction index vector with the i-th element associated with state i. J_a = [1 6 7 12] yields:

  X(J_a, :) = [1 0 0
               2 0 1
               2 1 0
               3 1 1]

i.e., a strategy that associates action 1 with state 1, action 2 with states 2 and 3, and action 3 with state 4. Strategy vectors select a single row of X for each state, so X(J_a, K_s) = S, where K_s indexes the columns of X associated with the state variables (here K_s = [2 3]).
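Continuing the sketch, the strategy J_a = [1 6 7 12] as a 0-based extraction index:

```python
J_a = np.array([1, 6, 7, 12]) - 1  # one row of X per state
K_s = [1, 2]                       # 0-based columns of X holding the states

print(X[J_a, :])                   # [[1 0 0], [2 0 1], [2 1 0], [3 1 1]]
assert (X[np.ix_(J_a, K_s)] == S).all()
```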
Dynamic Programming Algorithms
DP problems are typically solved with function iteration or policy iteration. Both use a maximization step that, for a given value function vector v, solves

  ṽ_i = max_{j : J_x(j) = i} [R + δP⊤v]_j

with the associated strategy vector J_a:

  [J_a]_i = argmax_{j : J_x(j) = i} [R + δP⊤v]_j

This is followed by a value function update step:
- Function iteration updates v using v ← ṽ
- Policy iteration updates v by solving Mv = (I − δP[:, J_a]⊤)v = R[J_a]
When the discount factor δ < 1, the matrix M = I − δP[:, J_a]⊤ is row-wise diagonally dominant.
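A minimal sketch of both steps, assuming a dense n_s × n_x matrix P, a reward vector R of length n_x, and the expansion index J_x from above (function names are illustrative):

```python
import numpy as np

def maximization_step(R, P, v, J_x, delta, n_s):
    """Return (v_new, J_a): max and argmax of R + delta*P'v within each state."""
    q = R + delta * (P.T @ v)          # value of every state/action pair
    v_new = np.full(n_s, -np.inf)
    J_a = np.zeros(n_s, dtype=int)
    for j, i in enumerate(J_x):        # j indexes rows of X, i the state
        if q[j] > v_new[i]:
            v_new[i], J_a[i] = q[j], j
    return v_new, J_a

def policy_update(R, P, J_a, delta):
    """Policy iteration step: solve (I - delta*P[:, J_a]') v = R[J_a]."""
    n_s = len(J_a)
    M = np.eye(n_s) - delta * P[:, J_a].T
    return np.linalg.solve(M, R[J_a])
```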
Dynamic Programming with Expected Value (EV) Functions
An EV function W transforms the future state value vector into its expectation conditional on the current states and actions (X):

  W(v⁺) = E[v⁺ | X]

An indexed evaluation transforms the future state value vector into its expectation conditional on the states and actions indexed by J_a:

  W(v⁺, J_a) = E[v⁺ | X[J_a, :]]

The maximization step uses a full EV evaluation:

  max_{j : J_x(j) = i} [R + δW(v)]_j

Value function updates use an indexed evaluation:
- Function iteration: v ← R[J_a] + δW(v, J_a)
- Policy iteration (solve for v): H(v) = v − δW(v, J_a) = R[J_a]
Note that policy iteration with EV functions cannot be solved using direct methods (e.g., LU decomposition) but can be solved efficiently using iterative Krylov methods.
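One way to realize this, sketched below: wrap the indexed EV evaluation in a SciPy LinearOperator and hand it to GMRES (a Krylov method), so P is never formed. The callable W_Ja and these names are assumptions for illustration, not the author's code:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def policy_update_ev(R_Ja, W_Ja, n_s, delta):
    """Solve H(v) = v - delta*W(v, J_a) = R[J_a] without forming a matrix.

    W_Ja must be a callable returning E[v+ | X[J_a, :]] as a length-n_s vector.
    """
    H = LinearOperator((n_s, n_s), matvec=lambda v: v - delta * W_Ja(v))
    v, info = gmres(H, R_Ja)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return v
```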
Advantages to using EV functions
The EV function W can often be evaluated far faster, and with far less memory, than the transition matrix P. There are at least 3 situations in which EV functions are advantageous:
- sparse staged transition matrices
- deterministic actions
- factored models with conditional independence
When the state transition occurs in 2 stages, the transition matrix can be written as P = P_2 P_1, where P_1 and P_2 are both sparse but their product is not.
A deterministic action transforms the current state into a post-decision state; the transition matrix can then be written as P = P̃A, where A has a single 1 in each column (see the sketch below).
In factored models, individual state variables have their own transition matrices that are conditioned on a subset of the current states and actions.
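For the deterministic-action case the EV evaluation collapses to a small matrix-vector product followed by an index expansion. A hypothetical sketch (the index vector g, mapping each state/action pair to its post-decision state, is an illustrative name):

```python
import numpy as np

def ev_deterministic(v, P_tilde, g):
    """EV evaluation when P = P_tilde @ A and A has a single 1 per column.

    P_tilde: n_s x m transition matrix over the m post-decision states.
    g:       length-n_x index vector, g[j] = post-decision state of pair j,
             so that A'(P_tilde'v) is just (P_tilde'v)[g].
    """
    w_post = P_tilde.T @ v   # expectation over the (small) post-decision space
    return w_post[g]         # expand to all state/action pairs by indexing
```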
SPOMs with staged transitions
Stochastic Patch Occupancy Models (SPOMs): N sites, each either empty or occupied (0/1). The individual site transition matrices for each stage are triangular:

  E_i = [1  e_i       C_i = [1−c_i  0
         0  1−e_i]           c_i    1]

There are 2^N possible state values; P has 4^N elements and is dense. If the transition is decomposed into extinction and colonization phases, P = EC or P = CE, where E and C are sparse, each with 3^N non-zero elements.
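A sketch of the staged factors as Kronecker products of the 2×2 site matrices, assuming (for illustration only) fixed per-site extinction and colonization probabilities; in an actual SPOM the colonization probabilities would depend on the occupancy of other sites, changing the values but not the sparsity pattern:

```python
import numpy as np
import scipy.sparse as sp

def spom_factors(e, c):
    """Build sparse extinction (E) and colonization (C) matrices for N sites."""
    E = sp.identity(1, format="csr")
    C = sp.identity(1, format="csr")
    for e_i, c_i in zip(e, c):
        E = sp.kron(E, sp.csr_matrix([[1.0, e_i], [0.0, 1.0 - e_i]]))
        C = sp.kron(C, sp.csr_matrix([[1.0 - c_i, 0.0], [c_i, 1.0]]))
    return E.tocsr(), C.tocsr()

N = 10
E, C = spom_factors(np.full(N, 0.2), np.full(N, 0.3))
v = np.random.rand(2 ** N)
w = E.T @ (C.T @ v)   # factored EV evaluation; dense P = C @ E is never formed
```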
Sparsity patterns for extinction and colonization transition matrices
[Figure: spy plots of E and C for N = 10]
Typical computational times for the SPOM model
Time required to do a basic matrix-vector and matrix-matrix multiply:

  N          8      9      10     11     12      13      14
  E⊤(C⊤v)    0.026  0.065  0.086  0.136  0.292   1.672   4.870
  P⊤v        0.014  0.036  0.084  0.801  4.011   15.298  64.277
  P = CE     0.008  0.008  0.046  0.154  0.724   3.499   19.332
  density    0.100  0.075  0.056  0.042  0.032   0.024   0.018

Rows 1 & 2 display the time required for 1000 evaluations using the factored form E⊤(C⊤v) and the full form P⊤v. Row 3 shows the setup time required to form P. Row 4 shows the fraction of non-zero elements in E and C. These results are even more dramatic if each site can be classified into more than 2 categories.