Scheduling Shared Continuous Resources On Many-Cores



  1. Scheduling Shared Continuous Resources On Many-Cores. Presenter: Lior Belinsky. Authors: André Brinkmann, Peter Kling, Tim Süß, Lars Nagel, Sören Riechers, Friedhelm Meyer auf der Heide.

  2. Presentation Contents. First part: review of the scheduling problem [~30 minutes]. Second part: algorithms and approximations [~20 minutes].

  3. First Part. Introduction to the CRSharing problem; motivation & usage; hypergraph representation; complexity of the problem (NP-completeness).

  4. Continuous Resource Sharing (CRSharing). We consider the problem of scheduling a number of jobs on m identical processors sharing a continuously divisible resource. Time is discrete and divided into time steps. At every time step t ∈ ℕ the scheduler distributes the resource among the m processors; each processor i is assigned a share R_i(t) ∈ [0, 1] of the resource that it can use during time step t. For each processor i there is a sequence of n_i ∈ ℕ jobs to process in the given order; the j-th job on processor i is denoted (i, j).
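To make the setup concrete, here is a minimal sketch (in Python; the encoding and the variable names are my own illustration, not from the talk) of how an instance and a schedule could be represented:

# jobs[i] is the ordered job sequence of processor i, given as (r_ij, p_ij) pairs.
jobs = [
    [(0.6, 0.6), (0.3, 0.2)],   # processor 0: two jobs, processed in this order
    [(0.8, 0.5)],               # processor 1: a single job
]

# A schedule lists, for every time step t, the shares R_i(t) granted to the processors.
schedule = [
    [0.6, 0.4],                 # t = 0: shares sum to at most 1
    [0.3, 0.5],                 # t = 1
]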

  5. Continuous Resource Sharing (CRSharing). Consider a job (i, j) whose processing starts at time step t_0. The job arrives with a resource requirement r_ij ∈ (0, 1] and a processing volume (size) p_ij ∈ (0, r_ij]. If the job is granted a share R_i(t_0) of the resource, then min(R_i(t_0), r_ij) units of p_ij are processed during time step t_0. Therefore, after time step t_0 finishes, the remaining processing volume of (i, j) is p_ij(t_0 + 1) = p_ij(t_0) − min(R_i(t_0), r_ij). The job (i, j) is finished at the minimal time step t_1 ≥ t_0 with Σ_{t = t_0}^{t_1} min(R_i(t), r_ij) ≥ p_ij.
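The per-step processing rule above can be written as a small helper. This is a hedged sketch; the names Job and process_step are mine, not the authors':

from dataclasses import dataclass

@dataclass
class Job:
    r: float              # resource requirement r_ij in (0, 1]
    p: float              # remaining processing volume p_ij

def process_step(job: Job, share: float) -> bool:
    """Apply one time step with share R_i(t); return True if the job finishes."""
    job.p -= min(share, job.r)      # min(R_i(t), r_ij) units are processed
    return job.p <= 1e-9            # finished once the processed amount reaches p_ij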

  6. What Does a Solution Look Like? Goal – find a resource assignment to the processors that minimizes the makespan, i.e., the latest completion time of any job. Feasible solution – a schedule consists of m resource assignment functions R_i : ℕ → [0, 1] that specify the distribution of the resource among the processors for all time steps, without overusing the resource. In other words, our goal is to find a feasible schedule (solution) having a minimal makespan.
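As a sketch of what "feasible schedule" and "makespan" mean operationally, the following function simulates a schedule on an instance in the same illustrative encoding as the earlier sketch and returns the makespan if every job finishes. The function name and the tolerance handling are my own:

def simulate(jobs, schedule, eps=1e-9):
    """Return the makespan of `schedule` on `jobs`, or None if jobs remain unfinished."""
    remaining = [[p for (_, p) in seq] for seq in jobs]
    current = [0] * len(jobs)                          # index of the active job per processor
    for t, shares in enumerate(schedule):
        assert sum(shares) <= 1 + eps, "system resource limit violated"
        for i, share in enumerate(shares):
            if current[i] >= len(jobs[i]):
                continue                               # processor i has no unfinished jobs
            r_ij = jobs[i][current[i]][0]
            remaining[i][current[i]] -= min(share, r_ij)   # per-job cap r_ij
            if remaining[i][current[i]] <= eps:
                current[i] += 1                        # the active job of processor i finishes
        if all(c == len(seq) for c, seq in zip(current, jobs)):
            return t + 1                               # number of time steps used
    return None

With the toy jobs and schedule from the earlier sketch, simulate(jobs, schedule) returns a makespan of 2.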

  7. Scheduler Limitations. 1) System resource limit – for every time step t ∈ ℕ: Σ_i R_i(t) ≤ 1. 2) Per-job resource limit – in every time step t ∈ ℕ, the resource share used by a job (i, j) is capped by r_ij. Observation 1 – any feasible schedule for our problem needs at least ⌈Σ_{i=1}^{m} Σ_{j=1}^{n_i} r_ij⌉ time steps to finish a given set of jobs.
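A small sketch of Observation 1's lower bound, assuming unit size jobs so that each job consumes its full resource requirement r_ij (the function name and input format are illustrative):

import math

def resource_lower_bound(jobs):
    """Ceiling of the total resource requirement: a lower bound on the makespan."""
    return math.ceil(sum(r for seq in jobs for (r, _) in seq))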

  8. CRSharing Simplified Model. Consider a job (i, j) whose processing starts at time step t_0, with resource requirement r_ij ∈ (0, 1] and processing volume p_ij ∈ (0, r_ij]. If the job is granted a share R_i(t_0) of the resource, then min(R_i(t_0), r_ij) units are processed during time step t_0, and the remaining processing volume becomes p_ij(t_0 + 1) = p_ij(t_0) − min(R_i(t_0), r_ij). The job (i, j) is finished at the minimal time step t_1 ≥ t_0 with Σ_{t = t_0}^{t_1} min(R_i(t), r_ij) ≥ p_ij.

  9. Motivation & Usage. Motivation: exceeding computational performance. Device speed and energy consumption are not the only bottlenecks of a computation; the distribution of the bandwidth (resource) shared by the processors can also speed up the computation. Usage examples: Many-core systems – a chip's cores share a single data bus to the outside world. Virtual systems – different virtual machines share a single divisible resource of a given host system.

  10. Example for a Better Understanding. The problem is somewhat similar to sharing the running machines at the gym.

  11. Additional Terms & Notation
      (i, j) – the j-th job on processor i
      R_i(t) – the share of the resource granted to processor i at time step t
      n_i ∈ ℕ – the number of jobs to be processed by processor i
      n_i(t) – the number of unfinished jobs on processor i at the start of time step t
      Active job – job (i, j) is active in time step t if n_i − n_i(t) = j − 1
      Active processor – processor i is active in time step t if n_i(t) > 0
      S_j := {i | n_i ≥ j} – the set of all processors having at least j jobs to process
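Two of the notions above translate directly into code. A small illustrative sketch (the function names and the job-list encoding are mine):

def S(jobs, j):
    """S_j: the processors that have at least j jobs (j is 1-indexed)."""
    return {i for i, seq in enumerate(jobs) if len(seq) >= j}

def active_job_index(n_i, n_i_t):
    """The active job j on a processor with n_i jobs and n_i(t) unfinished ones."""
    return n_i - n_i_t + 1          # rearranged from n_i - n_i(t) = j - 1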

  12. Graphical Representation. A hypergraph H = (V, E) consists of a finite set V of vertices and a set E of edges, each of which is a non-empty subset of V. For example: a vertex set V = {v_1, v_2, …} with hyperedges e_1, …, e_4 ⊆ V (the concrete example is shown in the slide's figure).
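In code, a hypergraph is simply a vertex set plus a list of vertex subsets. A minimal illustration (the concrete values are made up, not the slide's example):

V = {"v1", "v2", "v3", "v4", "v5"}
E = [{"v1", "v2"}, {"v2", "v3", "v5"}, {"v4"}]   # each hyperedge is a non-empty subset of V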

  13. Model's Hypergraph Representation. Given a problem instance of CRSharing with unit size jobs and a corresponding schedule S, we define a hypergraph G_S = (V, E), called the scheduling graph of S: the vertices are the jobs, V = {(i, j) | i ∈ [m], j ∈ [n_i]}, and there is one hyperedge per time step, where for each time step t of S the edge e_t is defined as e_t := {(i, j) ∈ V | job (i, j) is active at time step t}.
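A hedged sketch of building the scheduling graph from a schedule trace. I assume the input active[t][i] records the index of processor i's active job at time step t (or None if processor i has no active job); this input format and the function name are my own:

def scheduling_graph(active):
    """Return (V, E) where E[t] is the hyperedge of jobs active at time step t."""
    vertices, edges = set(), []
    for per_processor in active:
        e_t = {(i, j) for i, j in enumerate(per_processor) if j is not None}
        vertices |= e_t
        edges.append(e_t)
    return vertices, edges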

  14. Scheduling Graph of S – Illustration. [Figure: the scheduling graph of a schedule S on three processors; V = jobs, E = time steps, e_t = the jobs active at time step t.]

  15. Connected Components. The connected components formed by the edges of the scheduling graph G_S carry a lot of information about the schedule.

  16. Connected Components – Notation
      c – the number of connected components
      C_l – the l-th connected component (l ∈ [c])
      m_l – the number of edges of the l-th component
      k_l (component class) – the size of the first edge in the l-th component
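As an illustration of this notation, connected components of a scheduling graph can be computed by repeatedly merging hyperedges that share a vertex. This is a simple quadratic sketch; representing components as lists of edge indices is my own choice:

def components(edges):
    """Group hyperedge indices into connected components (edges sharing a vertex)."""
    comps = []                                        # each component is a list of edge indices
    for idx, e in enumerate(edges):
        touching = [c for c in comps if any(e & edges[k] for k in c)]
        merged = sorted(k for c in touching for k in c) + [idx]
        comps = [c for c in comps if c not in touching] + [merged]
    return comps

For a scheduling graph, Observation 2 on the next slide implies that every component returned this way covers a consecutive range of time steps.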

  17. Connected Components. Observation 2 – consider a connected component C of G_S and two time steps t_1 ≤ t_2 with e_{t_1}, e_{t_2} ∈ C. Then for all t ∈ {t_1, …, t_2} we have e_t ∈ C.

  18. CRSharing Complexity. Theorem – CRSharing with jobs of unit size is NP-hard if the number of processors is part of the input. Proof highlight – reduction from the PARTITION problem: an instance with n elements is mapped to a CRSharing instance with n processors and 3 jobs on each processor. Open question – for a constant number of processors m ≥ 3, does CRSharing remain NP-hard?

  19. Second Part. Round Robin approximation; unique properties expected of a feasible schedule (solution); an algorithm for 2 processors; a (2 − 1/m)-approximation for m processors.

  20. Round Robin Approximation. Let n be the maximal number of jobs on a processor. The algorithm operates in n phases such that during phase j it processes the j-th job on every processor that still has one. Theorem – the RoundRobin algorithm for the CRSharing problem with unit size jobs has a worst-case approximation ratio of exactly 2. Proof sketch – the j-th phase requires exactly ⌈Σ_{i ∈ S_j} r_ij⌉ time steps. Hence all n phases together last Σ_{j=1}^{n} ⌈Σ_{i ∈ S_j} r_ij⌉ ≤ n + Σ_{j=1}^{n} Σ_{i ∈ S_j} r_ij ≤ OPT + OPT ≤ 2·OPT time steps, using that every schedule needs at least n steps and, by Observation 1, at least ⌈Σ_{i,j} r_ij⌉ steps.
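A hedged sketch of the phase count used in the proof: the function below computes the number of time steps RoundRobin uses on an instance given as lists of requirements r_ij per processor (unit size jobs, so only the r values matter). The function name and input format are illustrative, not the authors':

import math

def round_robin_length(requirements):
    """requirements[i] = [r_i1, r_i2, ...]; return the total length of all phases."""
    n = max(len(seq) for seq in requirements)          # maximal number of jobs on a processor
    total = 0
    for j in range(n):                                 # phase j (0-indexed) handles each processor's (j+1)-th job
        in_phase = [seq[j] for seq in requirements if len(seq) > j]   # processors in S_{j+1}
        total += math.ceil(sum(in_phase))              # this phase takes exactly that many steps
    return total

Bounding each ceiling by its argument plus one and summing over the n phases reproduces the n + Σ r_ij ≤ 2·OPT estimate from the slide.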

  21. Structural Properties. Each reasonable schedule for the CRSharing problem should have the following basic properties:
      Non-wasting – in every time step t with Σ_i R_i(t) < 1, all currently active jobs are finished during t.
      Progressive – among all jobs that are assigned resources during a time step t, at most one job is only partially processed (i.e., receives a share but does not finish) in t.
      Balanced – whenever a processor i finishes a job at time step t, any processor i′ with n_{i′}(t) ≥ n_i(t) also finishes a job at t.
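A hedged sketch of the three properties as per-time-step checks. I assume a step is described, per processor i, by its granted share, a flag saying whether its active job finished during the step, and n_i(t); this record layout is my own, not the authors':

def non_wasting(shares, finished, unfinished, eps=1e-9):
    """If the full resource is not used, every active job must finish in this step."""
    if sum(shares) >= 1 - eps:
        return True
    return all(f for f, n in zip(finished, unfinished) if n > 0)

def progressive(shares, finished):
    """At most one job that receives resource is left only partially processed."""
    partial = sum(1 for s, f in zip(shares, finished) if s > 0 and not f)
    return partial <= 1

def balanced(finished, unfinished):
    """If processor i finishes a job, so does every processor with n_i'(t) >= n_i(t)."""
    finishing = [n for f, n in zip(finished, unfinished) if f]
    if not finishing:
        return True
    threshold = min(finishing)
    return all(f for f, n in zip(finished, unfinished) if n >= threshold)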

  22. Non-Wasting & Progressive Schedules. Given an arbitrary schedule S, we can transform it into a non-wasting and progressive schedule S′ whose makespan is at most that of S. Moreover, the resulting schedule S′ finishes at least one job per time step. (Recall: Non-wasting – in every time step t with Σ_i R_i(t) < 1, all currently active jobs are finished during t. Progressive – among all jobs that are assigned resources during a time step t, at most one is only partially processed in t.)

  23. Balanced Schedules. For every balanced schedule and any two processors i_1, i_2 with n_{i_1} ≥ n_{i_2}, at every time step t ∈ ℕ the numbers of unfinished jobs stay ordered and their gap never exceeds the initial gap: n_{i_2}(t) ≤ n_{i_1}(t) ≤ n_{i_2}(t) + n_{i_1} − n_{i_2}. (Recall: Balanced – whenever a processor i finishes a job at time step t, any processor i′ with n_{i′}(t) ≥ n_i(t) also finishes a job at t.)
