
Flow Shop and Job Shop. Marco Chiarandini, Department of Mathematics.



  1. DM204 – Autumn 2013 Scheduling, Timetabling and Routing Flow Shop and Job Shop Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark

  2. Outline: 1. Dynamic Programming. 2. Parallel Machine Models: CPM/PERT. 3. Flow Shop: Introduction; Makespan calculation; Johnson's algorithm; Construction heuristics; Iterated Greedy; Efficient Local Search and Tabu Search. 4. Job Shop: Modelling; Exact Methods; Shifting Bottleneck Heuristic; Local Search Methods. 5. Job Shop Generalizations.

  3. Course Overview. Scheduling: ✔ Classification, ✔ Complexity issues, ✔ Single Machine, ✔ Parallel Machine, Flow Shop and Job Shop, Resource Constrained Project Scheduling Model. Timetabling: Sport Timetabling, Reservations and Education, University Timetabling, Crew Scheduling, Public Transports. Vehicle Routing: Capacitated Models, Time Windows Models, Rich Models.


  5. 1 | | Σ h_j(C_j). A lot of work has been done on 1 | | Σ w_j T_j (single-machine total weighted tardiness). 1 | | Σ T_j is NP-hard in the ordinary sense, hence it admits a pseudo-polynomial algorithm (dynamic programming in O(n^4 Σ p_j)). 1 | | Σ w_j T_j is strongly NP-hard (reduction from 3-partition).

  6. 1 | | Σ h_j(C_j) is a generalization of Σ w_j T_j, hence strongly NP-hard. A (forward) dynamic programming algorithm runs in O(2^n). Let J be the set of jobs already scheduled and V(J) = Σ_{j ∈ J} h_j(C_j).
  Step 1: Set V({j}) = h_j(p_j) for j = 1, ..., n.
  Step 2: V(J) = min_{j ∈ J} [ V(J \ {j}) + h_j( Σ_{k ∈ J} p_k ) ].
  Step 3: If J = {1, 2, ..., n} then V({1, 2, ..., n}) is the optimum, otherwise go to Step 2.
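The recursion above can be sketched with bitmask states encoding the set J (a minimal illustration, not code from the slides; the function name and the representation of h as a list of callables are my choices):

```python
def min_total_cost(p, h):
    """Forward DP for 1 || sum h_j(C_j): V(J) = minimal cost of scheduling
    exactly the jobs in J first, back to back from time 0. O(2^n) states.
    p[j]: processing time of job j; h[j]: cost function of job j."""
    n = len(p)
    V = {0: 0}  # bitmask encoding of J; V(empty set) = 0
    for mask in range(1, 1 << n):
        # the job scheduled last in J completes at the sum of times in J
        C = sum(p[j] for j in range(n) if mask >> j & 1)
        V[mask] = min(V[mask ^ (1 << j)] + h[j](C)
                      for j in range(n) if mask >> j & 1)
    return V[(1 << n) - 1]
```

For example, with p = [2, 3] and linear costs w_j C_j for w = (1, 10), the DP places the heavy job first, matching the WSPT order for this special case.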


  8. Pm | | C_max (without preemption). P∞ | prec | C_max: CPM. Pm | | C_max: LPT heuristic, approximation ratio 4/3 − 1/(3m). Pm | prec | C_max: strongly NP-hard, LNS heuristic (non-optimal). Pm | p_j = 1, M_j | C_max: LFJ-LFM (optimal if the sets M_j are nested).
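The LPT heuristic mentioned above can be sketched with a min-heap over machine loads (a small illustration; the function name is mine):

```python
import heapq

def lpt_makespan(p, m):
    """LPT for Pm || Cmax: sort jobs by decreasing processing time and
    assign each to the currently least-loaded of the m machines.
    Worst-case ratio 4/3 - 1/(3m) versus the optimal makespan."""
    loads = [0] * m
    heapq.heapify(loads)
    for t in sorted(p, reverse=True):
        # pop the least-loaded machine, add the job, push the new load
        heapq.heappush(loads, heapq.heappop(loads) + t)
    return max(loads)
```

On p = (7, 6, 5, 4, 3) with m = 2, LPT yields makespan 14 while the optimum is 13 (split {7, 6} / {5, 4, 3}), illustrating that the heuristic is good but not optimal.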

  9.–12. Project Planning (figures).


  14. Flow Shop. General Shop Scheduling: J = {1, ..., N} set of jobs; M = {1, 2, ..., m} set of machines; J_j = {O_ij | i = 1, ..., n_j} set of operations for each job; p_ij processing times of operations O_ij; µ_ij ⊆ M machine eligibilities for each operation; precedence constraints among the operations; one job processed per machine at a time, one machine processing each job at a time; C_j completion time of job j. ➨ Find a feasible schedule that minimizes some regular function of the C_j. Flow Shop Scheduling: µ_ij = {i}, i = 1, 2, ..., m; precedence constraints O_ij → O_{i+1,j}, i = 1, 2, ..., m − 1, for all jobs.

  15. Example. Gantt chart representation of a schedule with machine sequences π_1, π_2, π_3, π_4: π_1: O_11, O_12, O_13, O_14; π_2: O_21, O_22, O_23, O_24; π_3: O_31, O_32, O_33, O_34; π_4: O_41, O_42, O_43, O_44. We assume unlimited buffers. If the same job sequence is used on each machine ➨ permutation flow shop.

  16. Directed Graph Representation. Given a sequence: operation-on-node network, with jobs on columns and machines on rows.

  17. Directed Graph Representation. Recursion for C_max:
  C_{i,π(1)} = Σ_{l=1}^{i} p_{l,π(1)}
  C_{1,π(j)} = Σ_{l=1}^{j} p_{1,π(l)}
  C_{i,π(j)} = max{ C_{i−1,π(j)}, C_{i,π(j−1)} } + p_{i,π(j)}
  Computation cost? O(nm): one pass over all operations.

  18. Example: C_max = 34 corresponds to the longest path in the network.

  19. Fm | | C_max. Theorem: There always exists an optimal sequence with no change of job order on the first two and on the last two machines. Proof: by contradiction. Corollary: F2 | | C_max and F3 | | C_max are solved by permutation schedules. Note: F3 | | C_max is strongly NP-hard.

  20. F2 | | C_max. Intuition: give machine 1 something short to process so that machine 2 becomes operative early, and give machine 2 something long to process so that its buffer has time to fill. Construct a sequence T: T(1), ..., T(n), processed in the same order on both machines, by concatenating a left sequence L: L(1), ..., L(t) and a right sequence R: R(t+1), ..., R(n), that is, T = L ◦ R. [Selmer Johnson, 1954, Naval Research Logistics Quarterly]
  Let J be the set of jobs to process; let T, L, R = ∅.
  Step 1: Find (i∗, j∗) such that p_{i∗,j∗} = min{ p_ij | i ∈ {1, 2}, j ∈ J }.
  Step 2: If i∗ = 1 then L = L ◦ {j∗}, else (i∗ = 2) R = {j∗} ◦ R.
  Step 3: J := J \ {j∗}.
  Step 4: If J ≠ ∅ go to Step 1, else T = L ◦ R.
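Johnson's rule admits a well-known equivalent sorted formulation that produces the same T = L ◦ R as the iterative min-picking loop above: jobs with p_1j ≤ p_2j go first in increasing p_1j, the rest go last in decreasing p_2j. A sketch (function name mine):

```python
def johnson(p1, p2):
    """Johnson's rule for F2 || Cmax, sorted formulation.
    p1[j], p2[j]: processing times of job j on machines 1 and 2.
    Returns an optimal processing order for both machines."""
    jobs = range(len(p1))
    # L: jobs shorter on machine 1, scheduled early in increasing p1
    left = sorted((j for j in jobs if p1[j] <= p2[j]), key=lambda j: p1[j])
    # R: jobs shorter on machine 2, scheduled late in decreasing p2
    right = sorted((j for j in jobs if p1[j] > p2[j]), key=lambda j: -p2[j])
    return left + right
```

On p1 = (3, 5, 1, 6, 7), p2 = (6, 2, 2, 6, 5) this gives the sequence (2, 0, 3, 4, 1): L = (2, 0, 3) sorted by p1, R = (4, 1) sorted by decreasing p2.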

  21. Theorem: The sequence T: T(1), ..., T(n) is optimal. Proof: Assume at one iteration of the algorithm that job k has the minimum processing time on machine 1. Show that job k must then go on machine 1 before any other job selected later. By contradiction, show that if in a schedule S a job j with larger processing time on machine 1 precedes k, then S is worse than the schedule S′ obtained by exchanging them. There are three cases to consider. Iterate the argument for all jobs in L, and prove symmetrically for all jobs in R.

  22. Construction Heuristics (1). Fm | prmu | C_max. Slope heuristic: schedule jobs in decreasing order of A_j = − Σ_{i=1}^{m} (m − (2i − 1)) p_ij. Campbell, Dudek and Smith's heuristic (1970): an extension of Johnson's rule to the case where the permutation schedule is not dominant. Aggregate the m machines into two artificial machines,
  p′_ij = Σ_{k=1}^{i} p_kj,  p″_ij = Σ_{k=m−i+1}^{m} p_kj,
  and apply Johnson's rule; repeat for all m − 1 possible pairings and return the best sequence for the overall m-machine problem.
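The slope heuristic is a one-line sorting rule once the index A_j is written out (a sketch; the function name and the p[i][j] layout are mine):

```python
def slope_order(p):
    """Slope heuristic for Fm | prmu | Cmax: sequence jobs in decreasing
    order of the slope index A_j = -sum_{i=1..m} (m - (2i - 1)) p_ij,
    which favors jobs whose processing times grow along the machine route.
    p[i][j]: processing time of job j on machine i (0-based rows)."""
    m, n = len(p), len(p[0])

    def A(j):
        # i runs 1..m as in the formula; p[i - 1][j] is the 0-based access
        return -sum((m - (2 * i - 1)) * p[i - 1][j] for i in range(1, m + 1))

    return sorted(range(n), key=A, reverse=True)
```

For m = 2 the index reduces to A_j = p_2j − p_1j, so the rule puts jobs that are short on machine 1 and long on machine 2 first, echoing the intuition behind Johnson's rule.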

  23. Construction Heuristics (2). Fm | prmu | C_max. Nawaz, Enscore, Ham's (NEH) heuristic (1983):
  Step 1: sort jobs in decreasing order of Σ_{i=1}^{m} p_ij;
  Step 2: schedule the first 2 jobs at best;
  Step 3: insert all remaining jobs, each in its best position.
  Implementable in O(n^2 m). [Framinan, Gupta, Leisten (2004)] examined 177 different job arrangements for Step 1 and concluded that the NEH arrangement is the best one for C_max.
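The three steps above can be sketched as follows. Note that this naive version re-evaluates every insertion from scratch and is O(n^3 m); Taillard's acceleration is what brings NEH down to the O(n^2 m) stated on the slide. Function names are mine:

```python
def neh(p):
    """NEH heuristic for Fm | prmu | Cmax.
    p[i][j]: processing time of job j on machine i."""
    m, n = len(p), len(p[0])

    def cmax(perm):
        # rolling-row makespan recursion, O(len(perm) * m)
        C = [0] * (len(perm) + 1)
        for i in range(m):
            for k, j in enumerate(perm, 1):
                C[k] = max(C[k], C[k - 1]) + p[i][j]
        return C[len(perm)]

    # Step 1: decreasing total processing time
    order = sorted(range(n), key=lambda j: sum(p[i][j] for i in range(m)),
                   reverse=True)
    # Steps 2-3: insert each job into the best position of the partial sequence
    seq = [order[0]]
    for j in order[1:]:
        seq = min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                  key=cmax)
    return seq
```

On the toy instance p = [[3, 2, 4], [2, 4, 1]] the heuristic returns (1, 0, 2) with makespan 10, which happens to be optimal here.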

  24. Iterated Greedy. Fm | prmu | C_max. Iterated Greedy [Ruiz, Stützle, 2007]:
  Destruction: remove d jobs at random.
  Construction: reinsert them with the NEH heuristic, in the order of removal.
  Local Search: insertion neighborhood (first improvement; full evaluation in O(n^2 m)).
  Acceptance criterion: random walk, best, or SA-like.
  Performance on instances up to n = 500, m = 20: NEH average gap 3.35% in less than 1 sec.; IG average gap 0.44% in about 360 sec.
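The destruction/construction loop can be sketched compactly. This is a simplification, not the authors' code: it starts from NEH, uses a plain better-or-equal acceptance in place of the paper's SA-like criterion, and omits the inner local search; all names and parameters (d, iters, seed) are mine:

```python
import random

def iterated_greedy(p, d=2, iters=200, seed=0):
    """Iterated Greedy sketch for Fm | prmu | Cmax.
    p[i][j]: processing time of job j on machine i.
    Returns (best sequence found, its makespan)."""
    rng = random.Random(seed)
    m, n = len(p), len(p[0])

    def cmax(perm):
        C = [0] * (len(perm) + 1)
        for i in range(m):
            for k, j in enumerate(perm, 1):
                C[k] = max(C[k], C[k - 1]) + p[i][j]
        return C[len(perm)]

    def best_insert(seq, j):
        # NEH-style reinsertion: try every position, keep the best
        return min((seq[:k] + [j] + seq[k:] for k in range(len(seq) + 1)),
                   key=cmax)

    # initial solution: NEH
    order = sorted(range(n), key=lambda j: sum(p[i][j] for i in range(m)),
                   reverse=True)
    cur = [order[0]]
    for j in order[1:]:
        cur = best_insert(cur, j)
    best = cur[:]

    for _ in range(iters):
        removed = rng.sample(cur, d)              # destruction
        partial = [j for j in cur if j not in removed]
        for j in removed:                          # construction
            partial = best_insert(partial, j)
        if cmax(partial) <= cmax(cur):             # simple acceptance
            cur = partial
            if cmax(cur) < cmax(best):
                best = cur[:]
    return best, cmax(best)
```

Since the acceptance never worsens the incumbent, the returned makespan is at least as good as NEH's starting point.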

  25. Efficient local search for Fm | prmu | C_max. Tabu search (TS) with the insert neighborhood; TS uses the best-improvement strategy ➨ need to search efficiently! Neighborhood pruning [Nowicki, Smutnicki, 1994; Grabowski, Wodecki, 2004]. A sequence t = (t_1, t_2, ..., t_{m−1}) defines a path in π; C_max is expressed through the critical path:

  26. Critical path: u = (u_1, u_2, ..., u_m) such that C_max(π) = C(π, u). Block B_k and internal block B_k^Int. Theorem (Werner, 1992): Let π, π′ ∈ Π. If π′ has been obtained from π by a job insertion such that C_max(π′) < C_max(π), then in π′: a) at least one job j ∈ B_k precedes job π(u_{k−1}), k = 1, ..., m, or b) at least one job j ∈ B_k succeeds job π(u_k), k = 1, ..., m.

  27. Corollary (Elimination Criterion): If π′ is obtained from π by an "internal block insertion", then C_max(π′) ≥ C_max(π). Hence we can restrict the search to where the good moves can be:

  28. Further speedup: use lower bounds in delta evaluations. Let δ^r_{x,u_k} denote the insertion of x after u_k (a move of type ZR_k(π)):
  Δ(δ^r_{x,u_k}) = p_{π(x),k+1} − p_{π(u_k),k+1}   if x ≠ u_{k−1}
  Δ(δ^r_{x,u_k}) = p_{π(x),k+1} − p_{π(u_k),k+1} + p_{π(u_{k−1}+1),k−1} − p_{π(x),k−1}   if x = u_{k−1}
  That is, add to and remove from the adjacent blocks. It can be shown that C_max(δ^r_{x,u_k}(π)) ≥ C_max(π) + Δ(δ^r_{x,u_k}). Theorem (Nowicki and Smutnicki, 1996, EJOR): The neighborhood thus defined is connected.

  29. Metaheuristic details. Prohibition criterion: an insertion δ_{x,u_k} is tabu if it restores the relative order of π(x) and π(x + 1). Tabu length: TL = 6 + ⌊n / (10m)⌋. Perturbation: perform all inserts among all the blocks that have Δ < 0; activated after MaxIdleIter idle iterations.
