On the Computational Complexity of Periodic Scheduling


  1. On the Computational Complexity of Periodic Scheduling PhD defense Thomas Rothvoß

  2. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i))

  3. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time

  4. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time and d(τ_i) the (relative) deadline

  5. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period

  6. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period. W.l.o.g.: task τ_i releases a job of length c(τ_i) at z · p(τ_i) with absolute deadline z · p(τ_i) + d(τ_i) (z ∈ N_0)

  7. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period. W.l.o.g.: task τ_i releases a job of length c(τ_i) at z · p(τ_i) with absolute deadline z · p(τ_i) + d(τ_i) (z ∈ N_0). Utilization: u(τ_i) = c(τ_i) / p(τ_i)

  8. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period. W.l.o.g.: task τ_i releases a job of length c(τ_i) at z · p(τ_i) with absolute deadline z · p(τ_i) + d(τ_i) (z ∈ N_0). Utilization: u(τ_i) = c(τ_i) / p(τ_i). Settings: ◮ static priorities ↔ dynamic priorities

  9. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period. W.l.o.g.: task τ_i releases a job of length c(τ_i) at z · p(τ_i) with absolute deadline z · p(τ_i) + d(τ_i) (z ∈ N_0). Utilization: u(τ_i) = c(τ_i) / p(τ_i). Settings: ◮ static priorities ↔ dynamic priorities ◮ single-processor ↔ multi-processor

  10. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period. W.l.o.g.: task τ_i releases a job of length c(τ_i) at z · p(τ_i) with absolute deadline z · p(τ_i) + d(τ_i) (z ∈ N_0). Utilization: u(τ_i) = c(τ_i) / p(τ_i). Settings: ◮ static priorities ↔ dynamic priorities ◮ single-processor ↔ multi-processor ◮ preemptive scheduling ↔ non-preemptive

  11. Real-time Scheduling. Given: (synchronous) tasks τ_1, ..., τ_n with τ_i = (c(τ_i), d(τ_i), p(τ_i)), where c(τ_i) is the running time, d(τ_i) the (relative) deadline, and p(τ_i) the period. W.l.o.g.: task τ_i releases a job of length c(τ_i) at z · p(τ_i) with absolute deadline z · p(τ_i) + d(τ_i) (z ∈ N_0). Utilization: u(τ_i) = c(τ_i) / p(τ_i). Settings: ◮ static priorities ↔ dynamic priorities ◮ single-processor ↔ multi-processor ◮ preemptive scheduling ↔ non-preemptive. Implicit deadlines: d(τ_i) = p(τ_i). Constrained deadlines: d(τ_i) ≤ p(τ_i)
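To keep the notation concrete, here is a minimal sketch (my own illustration, not from the slides; the Task class and helper names are assumptions) that encodes a task τ_i = (c(τ_i), d(τ_i), p(τ_i)), its utilization, and the job releases z · p(τ_i) with absolute deadlines z · p(τ_i) + d(τ_i):

```python
# Minimal sketch of the task model; names are illustrative, not from the thesis.
from dataclasses import dataclass
from fractions import Fraction


@dataclass(frozen=True)
class Task:
    c: Fraction  # running time
    d: Fraction  # relative deadline
    p: Fraction  # period


def utilization(tau: Task) -> Fraction:
    """u(tau) = c(tau) / p(tau)."""
    return tau.c / tau.p


def jobs(tau: Task, horizon: Fraction):
    """Releases z*p(tau) with absolute deadlines z*p(tau) + d(tau), z = 0, 1, 2, ..."""
    z = 0
    while z * tau.p < horizon:
        yield (z * tau.p, z * tau.p + tau.d)
        z += 1


tasks = [Task(Fraction(1), Fraction(2), Fraction(2)),
         Task(Fraction(2), Fraction(5), Fraction(5))]
print(sum(utilization(t) for t in tasks))   # 9/10
print(list(jobs(tasks[1], Fraction(10))))   # (release, deadline) pairs of tau_2 up to time 10
```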

  12. Example: Implicit deadlines & static priorities
      Tasks: c(τ_1) = 1, d(τ_1) = p(τ_1) = 2 and c(τ_2) = 2, d(τ_2) = p(τ_2) = 5
      [Timeline figure: the schedule of both tasks over time 0 to 10]

  13. Example: Implicit deadlines & static priorities
      Theorem (Liu & Layland ’73). Optimal priorities: 1/p(τ_i) for τ_i (Rate-monotonic schedule)
      Tasks: c(τ_1) = 1, d(τ_1) = p(τ_1) = 2 and c(τ_2) = 2, d(τ_2) = p(τ_2) = 5
      [Timeline figure: the rate-monotonic schedule of both tasks over time 0 to 10]
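A quick way to check the example by hand: the sketch below (illustrative only; the function name and structure are my own, not from the thesis) simulates the preemptive rate-monotonic schedule of the two tasks in unit time steps over one hyperperiod and verifies that every job meets its deadline.

```python
# Illustrative sketch: simulate the preemptive rate-monotonic schedule of the
# example tasks in unit time steps over one hyperperiod and check all deadlines.
from math import lcm

# (c, d, p) with d = p (implicit deadlines), already sorted by period,
# so the list order is the rate-monotonic priority order.
tasks = [(1, 2, 2),   # tau_1
         (2, 5, 5)]   # tau_2


def rm_feasible(tasks):
    horizon = lcm(*(p for _, _, p in tasks))        # here lcm(2, 5) = 10
    remaining = [0] * len(tasks)                    # unfinished work of current job
    deadline = [0] * len(tasks)                     # absolute deadline of current job
    for t in range(horizon):
        for i, (c, d, p) in enumerate(tasks):
            if t % p == 0:                          # job released at z * p
                remaining[i], deadline[i] = c, t + d
        for i in range(len(tasks)):                 # run highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
        for i in range(len(tasks)):                 # deadline check at time t + 1
            if remaining[i] > 0 and t + 1 >= deadline[i]:
                return False
    return all(r == 0 for r in remaining)


print(rm_feasible(tasks))   # True: both tasks meet all deadlines
```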


  16. Feasibility test for implicit-deadline tasks
      Theorem (Lehoczky et al. ’89). If p(τ_1) ≤ ... ≤ p(τ_n), then the response time r(τ_i) in a Rate-monotonic schedule is the smallest non-negative value s.t.
      c(τ_i) + Σ_{j<i} ⌈r(τ_i) / p(τ_j)⌉ · c(τ_j) ≤ r(τ_i)
      1 machine suffices ⇔ ∀ i: r(τ_i) ≤ p(τ_i).
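A sketch of the standard fixed-point iteration behind this test (my own code, names are illustrative): r starts at c(τ_i) and is repeatedly replaced by c(τ_i) + Σ_{j<i} ⌈r / p(τ_j)⌉ · c(τ_j) until it stabilizes or exceeds p(τ_i).

```python
# Fixed-point iteration for rate-monotonic response times (illustrative sketch).
from math import ceil


def response_time(tasks, i):
    """tasks = [(c, d, p), ...] sorted by period; returns r(tau_i),
    or None if the iteration exceeds p(tau_i) (deadline miss)."""
    c_i, _, p_i = tasks[i]
    r = c_i
    while True:
        rhs = c_i + sum(ceil(r / p_j) * c_j for (c_j, _, p_j) in tasks[:i])
        if rhs == r:
            return r
        if rhs > p_i:
            return None
        r = rhs


tasks = [(1, 2, 2), (2, 5, 5)]
print([response_time(tasks, i) for i in range(len(tasks))])   # [1, 4]
# One machine suffices iff no entry is None, i.e. r(tau_i) <= p(tau_i) for all i.
```

Each step of the iteration is cheap, but the number of steps can be pseudo-polynomial in the periods; the RTSS’08 result on slide 25 indicates that this cannot be improved to polynomial time in general unless P = NP.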

  17. Simultaneous Diophantine Approximation (SDA)
      Given: ◮ α_1, ..., α_n ∈ Q ◮ bound N ∈ N ◮ error bound ε > 0
      Decide: ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |α_i − Z/Q| ≤ ε/Q

  18. Simultaneous Diophantine Approximation (SDA)
      Given: ◮ α_1, ..., α_n ∈ Q ◮ bound N ∈ N ◮ error bound ε > 0
      Decide: ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |α_i − Z/Q| ≤ ε/Q
      ⇔ ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ ε

  19. Simultaneous Diophantine Approximation (SDA)
      Given: ◮ α_1, ..., α_n ∈ Q ◮ bound N ∈ N ◮ error bound ε > 0
      Decide: ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |α_i − Z/Q| ≤ ε/Q
      ⇔ ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ ε
      ◮ NP-hard [Lagarias ’85]

  20. Simultaneous Diophantine Approximation (SDA)
      Given: ◮ α_1, ..., α_n ∈ Q ◮ bound N ∈ N ◮ error bound ε > 0
      Decide: ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |α_i − Z/Q| ≤ ε/Q
      ⇔ ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ ε
      ◮ NP-hard [Lagarias ’85]
      ◮ Gap version NP-hard [Rössner & Seifert ’96, Chen & Meng ’07]
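A brute-force check of the SDA decision problem, written directly from the equivalent form max_i |⌈Qα_i⌋ − Qα_i| ≤ ε (illustrative code of my own; it is exponential in the bit length of N, which is consistent with the NP-hardness cited above):

```python
# Brute-force SDA decision: try every Q in {1, ..., N}.
from fractions import Fraction


def sda(alphas, N, eps):
    """Return some Q in {1, ..., N} with max_i dist(Q * alpha_i, Z) <= eps, or None."""
    for Q in range(1, N + 1):
        if all(abs(Q * a - round(Q * a)) <= eps for a in alphas):
            return Q
    return None


alphas = [Fraction(1, 3), Fraction(2, 7)]
print(sda(alphas, N=25, eps=Fraction(1, 20)))   # 21 (21/3 and 42/7 are integers)
```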

  21. Simultaneous Diophantine Approximation (2)
      Theorem (Rössner & Seifert ’96, Chen & Meng ’07). Given α_1, ..., α_n, N, ε > 0 it is NP-hard to distinguish
      ◮ ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ ε
      ◮ ∄ Q ∈ {1, ..., n^{O(1)/log log n} · N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ n^{O(1)/log log n} · ε

  22. Simultaneous Diophantine Approximation (2)
      Theorem (Rössner & Seifert ’96, Chen & Meng ’07). Given α_1, ..., α_n, N, ε > 0 it is NP-hard to distinguish
      ◮ ∃ Q ∈ {1, ..., N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ ε
      ◮ ∄ Q ∈ {1, ..., n^{O(1)/log log n} · N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ n^{O(1)/log log n} · ε
      Theorem (Eisenbrand & R. - APPROX’09). Given α_1, ..., α_n, N, ε > 0 it is NP-hard to distinguish
      ◮ ∃ Q ∈ {N/2, ..., N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ ε
      ◮ ∄ Q ∈ {1, ..., 2^{n^{O(1)}} · N}: max_{i=1,...,n} |⌈Qα_i⌋ − Qα_i| ≤ n^{O(1)/log log n} · ε
      even if ε ≤ (1/2)^{n^{O(1)}}.

  23. Directed Diophantine Approximation (DDA)
      Theorem (Eisenbrand & R. - APPROX’09). Given α_1, ..., α_n, N, ε > 0 it is NP-hard to distinguish
      ◮ ∃ Q ∈ {N/2, ..., N}: max_{i=1,...,n} |⌈Qα_i⌉ − Qα_i| ≤ ε
      ◮ ∄ Q ∈ {1, ..., n^{O(1)/log log n} · N}: max_{i=1,...,n} |⌈Qα_i⌉ − Qα_i| ≤ 2^{n^{O(1)}} · ε
      even if ε ≤ (1/2)^{n^{O(1)}}.

  24. Directed Diophantine Approximation (DDA)
      Theorem (Eisenbrand & R. - APPROX’09). Given α_1, ..., α_n, N, ε > 0 it is NP-hard to distinguish
      ◮ ∃ Q ∈ {N/2, ..., N}: max_{i=1,...,n} |⌈Qα_i⌉ − Qα_i| ≤ ε
      ◮ ∄ Q ∈ {1, ..., n^{O(1)/log log n} · N}: max_{i=1,...,n} |⌈Qα_i⌉ − Qα_i| ≤ 2^{n^{O(1)}} · ε
      even if ε ≤ (1/2)^{n^{O(1)}}.
      Theorem (Eisenbrand & R. - SODA’10). Given α_1, ..., α_n, w_1, ..., w_n ≥ 0, N, ε > 0 it is NP-hard to distinguish
      ◮ ∃ Q ∈ [N/2, N]: Σ_{i=1}^{n} w_i (Qα_i − ⌊Qα_i⌋) ≤ ε
      ◮ ∄ Q ∈ [1, n^{O(1)/log log n} · N]: Σ_{i=1}^{n} w_i (Qα_i − ⌊Qα_i⌋) ≤ 2^{n^{O(1)}} · ε
      even if ε ≤ (1/2)^{n^{O(1)}}.
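To make the weighted quantity in the SODA’10 theorem concrete, here is a naive brute force of my own (illustrative only): it evaluates f(Q) = Σ_i w_i · (Qα_i − ⌊Qα_i⌋) for every Q up to N and returns the best one. The theorem says that even approximating this optimum is NP-hard, so exhaustive search is essentially the baseline.

```python
# Exhaustive search for the weighted one-sided rounding error.
from fractions import Fraction
from math import floor


def weighted_error(alphas, weights, Q):
    return sum(w * (Q * a - floor(Q * a)) for a, w in zip(alphas, weights))


def best_Q(alphas, weights, N):
    return min(range(1, N + 1), key=lambda Q: weighted_error(alphas, weights, Q))


alphas, weights = [Fraction(3, 10), Fraction(1, 4)], [Fraction(1), Fraction(2)]
Q = best_Q(alphas, weights, 25)
print(Q, weighted_error(alphas, weights, Q))   # 20 0
```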

  25. Hardness of Response Time Computation
      Theorem (Eisenbrand & R. - RTSS’08). Computing response times for implicit-deadline tasks w.r.t. a Rate-monotonic schedule, i.e. solving
      min { r ≥ 0 | c(τ_n) + Σ_{i=1}^{n−1} ⌈r / p(τ_i)⌉ · c(τ_i) ≤ r }
      (with p(τ_1) ≤ ... ≤ p(τ_n)), is NP-hard, even to approximate within a factor of n^{O(1)/log log n}.
      ◮ Reduction from Directed Diophantine Approximation

  26. Mixing Set
      min c_s · s + c^T y
      s.t. s + a_i · y_i ≥ b_i   ∀ i = 1, ..., n
      s ∈ R_{≥0}, y ∈ Z^n

  27. Mixing Set
      min c_s · s + c^T y
      s.t. s + a_i · y_i ≥ b_i   ∀ i = 1, ..., n
      s ∈ R_{≥0}, y ∈ Z^n
      ◮ Complexity status? [Conforti, Di Summa & Wolsey ’08]

  28. Mixing Set
      min c_s · s + c^T y
      s.t. s + a_i · y_i ≥ b_i   ∀ i = 1, ..., n
      s ∈ R_{≥0}, y ∈ Z^n
      ◮ Complexity status? [Conforti, Di Summa & Wolsey ’08]
      Theorem (Eisenbrand & R. - APPROX’09). Solving Mixing Set is NP-hard.
      1. Model Directed Diophantine Approximation (almost) as Mixing Set
      2. Simulate missing constraint with Lagrangian relaxation
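To see informally why Mixing Set is tied to Diophantine approximation: once the continuous variable s is fixed, each constraint s + a_i · y_i ≥ b_i just forces the integer y_i up to a rounded threshold. The sketch below (my own illustration with made-up data, not the reduction from the thesis) makes this rounding structure explicit.

```python
# Illustration only: for fixed s, the cheapest feasible integers are obtained by
# rounding up (assuming a_i > 0 and nonnegative objective coefficients on y).
from fractions import Fraction
from math import ceil


def cheapest_y(a, b, s):
    """Componentwise smallest integers y_i with s + a_i * y_i >= b_i."""
    return [ceil((b_i - s) / a_i) for a_i, b_i in zip(a, b)]


a = [Fraction(2), Fraction(3)]
b = [Fraction(7), Fraction(5)]
for s in [Fraction(0), Fraction(1), Fraction(2)]:
    print(s, cheapest_y(a, b, s))
# 0 [4, 2]
# 1 [3, 2]
# 2 [3, 1]
```

Choosing s so that these rounded values become cheap is exactly the flavor of the Directed Diophantine Approximation problem used in the reduction above.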
