  1. Resource Constrained Job Scheduling
     Dissertantenseminar
     Matthias Horn¹, Günther R. Raidl¹
     ¹Institute of Logic and Computation, TU Wien, Vienna, Austria, {horn|raidl}@ac.tuwien.ac.at
     Mar 27, 2019

  2. Decision Diagrams (DDs)
     ◮ well known in computer science for decades
       ◮ logic circuit design, formal verification, . . .
     ◮ became popular in combinatorial optimization over the last decade
     ◮ graphical representation of the solutions of a combinatorial optimization problem (COP)
     ◮ weighted directed acyclic multigraph with one root node r and one target node t
     ◮ each r-t path corresponds to a solution of the COP
     ◮ the length of a path coincides with the solution's objective value
     ◮ state-of-the-art results have been obtained on several problems

  3. Decision Diagrams (DDs)
     [Figure: exact DD with root r, layers for the variables π_0 ∈ {1, 2, 3}, π_1, . . . , π_n, and target t]
     Exact DDs
     ◮ represent precisely the set of feasible solutions of a COP
     ◮ longest path: corresponds to an optimal solution (a longest-path sketch follows below)
     ◮ tend to be exponential in size ⇒ approximate the exact DD
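Once an exact DD is available as a layered DAG, the longest r-t path (and hence an optimal objective value) can be found in a single pass over the layers. This is a minimal illustrative sketch, not taken from the slides; the node/arc representation (one arc per node pair) is a simplifying assumption.

```python
def longest_path(layers, arcs):
    """layers: list of lists of node ids, layers[0] = [root], layers[-1] = [target].
    arcs: dict mapping (u, v) -> arc length (contribution of the assigned value);
    for simplicity one arc per node pair, although DDs are in general multigraphs."""
    dist = {layers[0][0]: 0.0}
    for layer, nxt in zip(layers, layers[1:]):
        for v in nxt:
            # best predecessor in the previous layer that has an arc into v
            cands = [dist[u] + arcs[(u, v)] for u in layer if (u, v) in arcs and u in dist]
            if cands:
                dist[v] = max(cands)
    return dist[layers[-1][0]]   # length of the longest r-t path
```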

  4. Decision Diagrams (DDs)
     [Figure: exact DD and restricted DD side by side, each with root r, layers π_0 ∈ {1, 2, 3}, π_1, . . . , π_n, and target t]
     Restricted DDs
     ◮ represent a subset of the feasible solutions of a COP
       ◮ obtained by removing nodes and edges
     ◮ length of the longest path: corresponds to a primal bound

  5. Decision Diagrams (DDs)
     [Figure: exact, restricted, and relaxed DDs side by side, each with root r, layers π_0 ∈ {1, 2, 3}, π_1, . . . , π_n, and target t]
     Relaxed DDs
     ◮ represent a superset of the feasible solutions of a COP
       ◮ obtained by merging nodes
     ◮ length of the longest path: corresponds to an upper bound
     ◮ discrete relaxation of the solution space

  6. Relaxed DDs
     ◮ discrete relaxation of the solution space
     ◮ usage
       ◮ to obtain dual bounds
       ◮ as constraint store in constraint propagation
       ◮ derivation of cuts in mixed integer programming (MIP)
       ◮ branch-and-bound: branching on merged nodes
       ◮ . . .
     ◮ excellent results on e.g.
       ◮ set covering (Bergman et al., 2011)
       ◮ independent set (Bergman et al., 2014)
       ◮ time-dependent traveling salesman (Cire and van Hoeve, 2013)
       ◮ time-dependent sequential ordering (Kinable et al., 2017)

  7. DDs and Dynamic Programming
     ◮ dynamic programming (DP)
       ◮ controls x_i, current state s_i
       ◮ transitions: s_{i+1} = φ_i(s_i, x_i), i = 1, . . . , n
       ◮ objective function: f(x) = Σ_{i=1}^{n} c_i(s_i, x_i)
       ◮ can be solved recursively (sketched below):
         g_i(s_i) = min_{x_i ∈ X_i(s_i)} { c_i(s_i, x_i) + g_{i+1}(φ_i(s_i, x_i)) },  i = 1, . . . , n
     ◮ exact DDs are strongly related to DP
       ◮ J. N. Hooker, Decision Diagrams and Dynamic Programming, 2013
       ◮ each DP state is associated with a node in the DD
       ◮ root node s_0, target node s_{n+1}
       ◮ arc (s_i, φ_i(s_i, x_i)) with cost c_i(s_i, x_i) for each control x_i ∈ X_i
     ◮ create a DD based on a DP formulation without solving it
       ◮ provides recursive formulations of the COP
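A minimal sketch of the generic recursion above; the callbacks feasible_controls, transition and cost are hypothetical placeholders for X_i(s_i), φ_i and c_i of a concrete DP model, and states are assumed hashable.

```python
from functools import lru_cache

def solve_dp(n, root_state, feasible_controls, transition, cost):
    """Evaluates g_1(root_state) for a DP with n stages."""
    @lru_cache(maxsize=None)
    def g(i, state):
        if i > n:                       # target state s_{n+1} reached, nothing left to decide
            return 0.0
        # best control at stage i: decision cost plus cost-to-go of the successor state
        return min(cost(i, state, x) + g(i + 1, transition(i, state, x))
                   for x in feasible_controls(i, state))
    return g(1, root_state)
```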

  8. DDs - Construction Methods
     ◮ Top-Down Construction (TDC)
       ◮ compile the relaxed DD layer by layer
       ◮ the layer width is limited
       ◮ if the current layer gets too large ⇒ merge nodes (a generic sketch follows below)
     ◮ Incremental Refinement (IR)
       ◮ start with a relaxed DD of width one
       ◮ iteratively refine it by splitting nodes and filtering arcs
     ◮ A*-based Construction (A*C)
       ◮ construct a relaxed DD by a modified A* algorithm
       ◮ the size of the open list is limited by a parameter φ
       ◮ if φ would be exceeded, nodes are merged
       ◮ for PC-JSOCMSR: obtained smaller DDs with stronger bounds in shorter time
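A generic sketch of top-down construction with node merging, under my own assumptions: transitions, merge and rank are hypothetical problem-specific hooks, and identical states are not deduplicated. A restricted DD would simply drop the worst states instead of merging them.

```python
def top_down_construct(root, n, max_width, transitions, merge, rank):
    """Builds the layers of a relaxed DD of maximum width max_width."""
    layers = [[root]]
    for i in range(n):
        nxt = []
        for state in layers[-1]:
            nxt.extend(transitions(i, state))   # expand every node of the current layer
        nxt.sort(key=rank)                      # e.g. by longest-path value collected so far
        while len(nxt) > max_width:             # layer too wide: merge the two worst states
            a, b = nxt.pop(), nxt.pop()
            nxt.append(merge(a, b))             # merged state relaxes (covers) both a and b
        layers.append(nxt)
    return layers
```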

  9. Resource Constrained Job Scheduling (RCJS)
     ◮ jobs J = {1, . . . , n}
     ◮ machines M = {1, . . . , l}
     ◮ one renewable shared resource
     ◮ each job j ∈ J has
       ◮ an assigned machine m_j ∈ M
       ◮ a release time r_j, a due time d_j and a processing time p_j
       ◮ a weight w_j
       ◮ a cumulative resource requirement g_j
       ◮ a set of preceding jobs Γ_j
     ◮ the total amount of resource consumed by concurrently executed jobs is limited by G
     ◮ objective: minimize the total weighted tardiness
     (a data-container sketch follows below)
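Purely for illustration, a plain data container matching the notation above could look as follows; the class and field names are hypothetical and not taken from the slides.

```python
from dataclasses import dataclass

@dataclass
class Job:
    machine: int                      # m_j: machine the job is assigned to (0 .. l-1)
    release: int                      # r_j
    due: int                          # d_j
    proc: int                         # p_j
    weight: float                     # w_j
    resource: int                     # g_j: cumulative resource requirement
    preds: frozenset = frozenset()    # Gamma_j: jobs that must be completed before this one

@dataclass
class RCJSInstance:
    jobs: dict                        # job id -> Job
    n_machines: int                   # l
    capacity: int                     # G: shared resource limit
```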

  10. Solution Representation
      ◮ a solution is represented by a permutation π of the jobs
      ◮ assuming that π satisfies the precedence constraints
      ◮ a feasible schedule S(π) can be obtained from π by
        ◮ assigning a start time s_j to each job j in the order given by π
        ◮ such that all constraints are satisfied
      ◮ objective function: f(S(π)) = Σ_{j∈J} w_j max(0, s_j + p_j − d_j)
      Time horizon T = [T_min, . . . , T_max] with T_min = min_{j∈J} r_j and T_max = max_{j∈J} r_j + Σ_{j∈J} p_j
      (a decoding sketch follows below)
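To make the decoding step concrete, here is a rough sketch under my own assumptions: an integer time grid, machines numbered 0 to l-1, g_j ≤ G for every job, and the hypothetical RCJSInstance/Job containers from the previous sketch. It is not the authors' decoder, only one straightforward way to realize "earliest feasible start in the order of π".

```python
from collections import defaultdict

def decode(inst, pi):
    """pi: precedence-feasible permutation of job ids. Returns start times and f(S(pi))."""
    usage = defaultdict(int)                   # resource consumption per time unit
    machine_free = [0] * inst.n_machines       # earliest free time per machine
    start, end = {}, {}
    for j in pi:
        job = inst.jobs[j]
        # earliest start respecting release, machine availability and predecessors
        t = max([job.release, machine_free[job.machine]] + [end[k] for k in job.preds])
        # shift right until the shared resource suffices for the whole processing interval
        while any(usage[t + i] + job.resource > inst.capacity for i in range(job.proc)):
            t += 1
        for i in range(job.proc):
            usage[t + i] += job.resource
        start[j], end[j] = t, t + job.proc
        machine_free[job.machine] = t + job.proc
    tardiness = sum(inst.jobs[j].weight * max(0, end[j] - inst.jobs[j].due) for j in pi)
    return start, tardiness
```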

  11. Motivation
      ◮ mining supply chains
      ◮ transfer minerals from mining sites to ports by rail or road
      ◮ rail wagons and trucks are limited and have to be shared
      ◮ materials have to arrive at the ports by specific times, otherwise demurrage costs must be paid

  12. Earlier Work on RCJS
      Singh, G., Ernst, A.T. (2011)
      ◮ integer linear programming (ILP) formulation
      ◮ Lagrangian relaxation (LR) based heuristic
      ◮ a simulated annealing (SA) based approach
      ◮ genetic algorithm (GA)
      Singh, G., Ernst, A.T. (2012)
      ◮ Lagrangian particle swarm optimization
      Thiruvady, D., Singh, G., Ernst, A.T. (2014)
      ◮ combining column generation (CG) and LR with ant colony optimization (ACO)
      Thiruvady, D., Singh, G., Ernst, A.T. (2016)
      ◮ parallelization of ACO and SA
      Further related work exists on RCJS with hard deadlines, . . .

  13. Mixed Integer Programming Formulation
      Model (a modelling sketch follows below)
      min  Σ_{j∈J} Σ_{t=T_min+1}^{T_max} c_{jt} (z_{jt} − z_{j,t−1})
      s.t. z_{jt} = 0                                 ∀ j ∈ J, ∀ t = T_min, . . . , r_j + p_j − 1
           Σ_{j∈J_m} (z_{j,t+p_j} − z_{jt}) ≤ 1       ∀ t ∈ T, ∀ m ∈ M
           z_{jt} ≥ z_{j,t−1}                         ∀ j ∈ J, ∀ t = T_min + 1, . . . , T_max
           z_{j,T_max} = 1                            ∀ j ∈ J
           z_{kt} ≤ z_{j,t−p_k}                       ∀ j → k, ∀ t ∈ T
           Σ_{j∈J} g_j (z_{j,t+p_j} − z_{jt}) ≤ G     ∀ t ∈ T
      Variables
      ◮ decision variables z_{jt}: one if job j is completed at time t or earlier, zero otherwise
      ◮ costs: c_{jt} = w_j max(t − d_j, 0), ∀ t ∈ T, ∀ j ∈ J
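A minimal sketch of this time-indexed model using PuLP; the slides do not say which modelling tool or solver was used, so the tool choice, the data layout (lists/dicts for r, d, p, w, g, machine, prec) and the boundary handling are assumptions of mine.

```python
import pulp

def build_rcjs_mip(jobs, machines, r, d, p, w, g, machine, prec, G, T):
    """jobs: list of job ids; T: list(range(T_min, T_max + 1)); prec: list of (j, k), j before k."""
    prob = pulp.LpProblem("RCJS", pulp.LpMinimize)
    z = pulp.LpVariable.dicts("z", (jobs, T), cat="Binary")   # z[j][t]: j done at t or earlier
    # objective: total weighted tardiness via the step differences z_jt - z_j,t-1
    prob += pulp.lpSum(w[j] * max(t - d[j], 0) * (z[j][t] - z[j][t - 1])
                       for j in jobs for t in T[1:])
    for j in jobs:
        prob += z[j][T[-1]] == 1                              # every job finishes in the horizon
        for t in T:
            if t < r[j] + p[j]:
                prob += z[j][t] == 0                          # cannot finish before r_j + p_j
            if t > T[0]:
                prob += z[j][t] >= z[j][t - 1]                # z is non-decreasing over time
    for t in T:
        for m in machines:                                    # one job per machine in process at t
            prob += pulp.lpSum(z[j][min(t + p[j], T[-1])] - z[j][t]
                               for j in jobs if machine[j] == m) <= 1
        # shared resource capacity at time t
        prob += pulp.lpSum(g[j] * (z[j][min(t + p[j], T[-1])] - z[j][t]) for j in jobs) <= G
        for (j, k) in prec:                                   # precedence j -> k
            if t - p[k] >= T[0]:
                prob += z[k][t] <= z[j][t - p[k]]
            else:
                prob += z[k][t] == 0
    return prob, z
```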

  14. Lagrangian Relaxation
      Sub-problems for each machine m ∈ M:
      L_m(λ) = min Σ_{j∈J_m} Σ_{t>0} ( c_{jt} + g_j Σ_{i=1}^{min{p_j, t}} λ_{t−i} ) (z_{jt} − z_{j,t−1})   subject to . . .
      Lagrangian function:
      L(λ) = Σ_{m∈M} L_m(λ) − G Σ_{t∈T} λ_t
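The slides do not show how the multipliers λ are updated. A common choice, assumed here and not necessarily what the authors use, is a projected subgradient step on the relaxed resource constraint.

```python
def subgradient_step(lam, usage, G, step):
    """lam: dict t -> lambda_t >= 0; usage: dict t -> resource consumed at t in the
    current subproblem solutions; step: positive step size. Returns the updated multipliers."""
    # increase lambda_t where the relaxed resource constraint is violated, decrease otherwise,
    # and project back onto lambda_t >= 0
    return {t: max(0.0, lam[t] + step * (usage.get(t, 0) - G)) for t in lam}
```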

  15. States and Transitions
      DD for RCJS: directed acyclic multigraph M = (V, A)
      Each node u ∈ V corresponds to a state (P(u), P̂(u), t(u), η(u))
      ◮ set P(u) ⊆ J of jobs that can be scheduled immediately
      ◮ set P̂(u) ⊆ J of jobs that are already scheduled
      ◮ vector t(u) = (t_m(u))_{m∈M} of earliest times for each machine m
      ◮ vector η(u) = (η_t(u))_{t∈T} representing the resource consumption at time t
      Initial (root) state: r = (J, ∅, (0, . . . , 0), (0, . . . , 0))
      An arc (v, u) ∈ A represents a transition from (P(v), P̂(v), t(v), η(v)) to (P(u), P̂(u), t(u), η(u)) obtained by scheduling a job j ∈ P(v) at its earliest possible time considering t(v) (a transition sketch follows below)
      [Figure: arc from v to u labeled j | c_j]
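A simplified sketch of one such transition, under my own assumptions: the state is a plain tuple, η is a dict over time points, the arc cost c_j is taken to be the weighted tardiness incurred by job j, and the hypothetical RCJSInstance/Job containers from earlier are reused. Completion times of predecessors on other machines are not tracked here, so this is only an illustration of the mechanics, not the authors' exact state update.

```python
def apply_transition(state, j, inst, capacity):
    """state = (P(v), P^(v), t(v), eta(v)); returns the successor state and the arc cost."""
    P, scheduled, t_machine, eta = state
    job = inst.jobs[j]                                   # j must be taken from P
    t = max(job.release, t_machine[job.machine])
    eta = dict(eta)
    # push the start right until the shared resource suffices for the whole duration
    # (terminates assuming g_j <= capacity)
    while any(eta.get(t + i, 0) + job.resource > capacity for i in range(job.proc)):
        t += 1
    for i in range(job.proc):
        eta[t + i] = eta.get(t + i, 0) + job.resource
    t_machine = {**t_machine, job.machine: t + job.proc}
    scheduled = scheduled | {j}
    # jobs become schedulable once all their predecessors are scheduled
    P = {k for k in inst.jobs if k not in scheduled and inst.jobs[k].preds <= scheduled}
    c = job.weight * max(0, t + job.proc - job.due)      # arc cost: weighted tardiness of j
    return (P, scheduled, t_machine, eta), c
```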

  17. Basic lower bound on the total weighted tardiness (1)
      Preprocessing
      As long as there exist two jobs j, k ∈ J s.t. j ≪ k and r_k < r_j + p_j ⇒ set r_k := r_j + p_j
      Observation
      After preprocessing it frequently happens that there are a few jobs j with d_j < r_j.
      Naive lower bound (sketched below together with the preprocessing):
      T^{LB≪} = Σ_{j∈J} w_j max(0, r_j + p_j − d_j)
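A direct sketch of the preprocessing rule and the naive bound T^{LB≪}, reusing the hypothetical RCJSInstance/Job containers from before.

```python
def preprocess_releases(inst):
    """Tighten release times along precedences: if j << k and r_k < r_j + p_j, set r_k := r_j + p_j."""
    changed = True
    while changed:                        # repeat until no release time can be tightened further
        changed = False
        for job_k in inst.jobs.values():
            for j in job_k.preds:
                job_j = inst.jobs[j]
                if job_k.release < job_j.release + job_j.proc:
                    job_k.release = job_j.release + job_j.proc
                    changed = True

def naive_lower_bound(inst):
    """T^{LB<<}: each job finishes no earlier than its (tightened) release plus processing time."""
    return sum(j.weight * max(0, j.release + j.proc - j.due) for j in inst.jobs.values())
```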

  18. Basic lower bound on the total weighted tardiness (2)
      Idea: sort the jobs by their due times in increasing order and, for each job j ∈ J,
      ◮ sum up the total processing time on j's machine,
        T^{LB1}_j = min_{j'∈J_{m_j}} w_{j'} · max(0, Σ_{j'∈J_{m_j} | d_{j'} ≤ d_j} p_{j'} − d_j)
      ◮ and the average resource consumption,
        T^{LB2}_j = min_{j'∈J} w_{j'} · max(0, Σ_{j'∈J | d_{j'} ≤ d_j} p_{j'} g_{j'} / G − d_j)
      Lower bound on the total weighted tardiness (a sketch follows below):
      T^{LB}_{due} = Σ_{j∈J} max(T^{LB1}_j, T^{LB2}_j)
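A sketch of computing T^{LB}_due with the hypothetical containers from before; reading the first sum as ranging over the jobs that share j's machine is my interpretation of the slide's notation.

```python
def due_date_bound(inst):
    total = 0.0
    for job in inst.jobs.values():
        same_machine = [k for k in inst.jobs.values() if k.machine == job.machine]
        # T^{LB1}_j: total processing time on j's machine of jobs due no later than j
        load = sum(k.proc for k in same_machine if k.due <= job.due)
        lb1 = min(k.weight for k in same_machine) * max(0, load - job.due)
        # T^{LB2}_j: resource "area" of all jobs due no later than j, averaged over capacity G
        area = sum(k.proc * k.resource for k in inst.jobs.values() if k.due <= job.due)
        lb2 = min(k.weight for k in inst.jobs.values()) * max(0, area / inst.capacity - job.due)
        total += max(lb1, lb2)
    return total
```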

  19. Basic lower bound on the total weighted tardiness (3)
      Baptiste, P., and Le Pape, C. (2005): relax the non-preemption assumption of the jobs; two steps
      ◮ (1) compute a vector C^m = (C_[1], . . . , C_[n_m]) for each machine m ∈ M
        ◮ where C_[i] is a lower bound on the i-th smallest completion time in any schedule
        ◮ shortest remaining processing time (SRPT): each time a job becomes available or is completed, a job with the shortest remaining processing time among the available and uncompleted jobs is scheduled
      ◮ (2) solve assignment problems to get T^{LB}_C = Σ_{m∈M} T^{LB}_{C^m}
        ◮ between the jobs in J_m and the elements of the vector C^m
        ◮ the optimal solution can be computed by the Hungarian algorithm in cubic time
        ◮ alternatively, compute a fast lower bound on the assignment problem, see Baptiste, P., and Le Pape, C. (2005)
      (a sketch using SciPy's assignment solver follows below)
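A rough sketch of the per-machine bound: an SRPT pass yields lower bounds C_[i] on the sorted completion times, and an assignment problem, solved here with SciPy's Hungarian-style solver instead of a hand-written Hungarian algorithm, matches the machine's jobs to those completion-time slots. The data layout (per-machine lists r, p, d, w) is a hypothetical choice, not taken from the slides.

```python
import heapq
from scipy.optimize import linear_sum_assignment

def srpt_completion_times(r, p):
    """Preemptive SRPT on one machine: lower bounds on the sorted completion times."""
    order = sorted(range(len(p)), key=lambda j: r[j])
    ready, comp, t, k = [], [], 0, 0
    while len(comp) < len(p):
        if not ready:
            t = max(t, r[order[k]])                    # machine idle: jump to the next release
        while k < len(order) and r[order[k]] <= t:
            heapq.heappush(ready, (p[order[k]], order[k])); k += 1
        rem, j = heapq.heappop(ready)                  # job with shortest remaining time
        nxt = r[order[k]] if k < len(order) else float("inf")
        run = rem if nxt >= t + rem else nxt - t       # preempt at the next release if needed
        t += run
        if run < rem:
            heapq.heappush(ready, (rem - run, j))
        else:
            comp.append(t)
    return sorted(comp)

def machine_bound(r, p, d, w):
    """T^{LB}_{C^m}: min-cost assignment of the machine's jobs to the SRPT completion slots."""
    C = srpt_completion_times(r, p)
    cost = [[wj * max(0, Ci - dj) for Ci in C] for dj, wj in zip(d, w)]
    rows, cols = linear_sum_assignment(cost)
    return sum(cost[i][j] for i, j in zip(rows, cols))
```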
