  1. Simultaneous Multi-Threaded Design Virendra Singh, Associate Professor, Computer Architecture and Dependable Systems Lab, Department of Electrical Engineering, Indian Institute of Technology Bombay http://www.ee.iitb.ac.in/~viren/ E-mail: viren@ee.iitb.ac.in EE-739: Processor Design, Lecture 34 (09 April 2013)

  2. Program vs Process • A program is a passive entity which specifies the logic of data manipulation and I/O actions • A process is an active entity which performs the actions specified in a program • Multiple executions of the same program lead to concurrent processes

  3. Process • A process is a program in execution; it can be in a number of states – running, waiting, ready, terminated • Process creation – fork() and exec() system calls (a minimal sketch follows this slide) • Inter-process communication – shared memory and message passing • Client-server communication – sockets, RPC, RMI
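A minimal sketch of process creation with fork() and exec(), as named on the slide above; the program launched in the child (ls) is only an illustration.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                  /* create a child process */
    if (pid < 0) {                       /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {               /* child: replace its image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                /* reached only if exec fails */
        exit(EXIT_FAILURE);
    } else {                             /* parent: wait for the child to terminate */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited\n", (int)pid);
    }
    return 0;
}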

  4. Threads • A thread (a lightweight process) is a basic unit of CPU utilization. • A thread has a single sequential flow of control. • A thread comprises: a thread ID, a program counter, a register set and a stack. • A process is the execution environment in which threads run. – (Recall the earlier definition of a process: a program in execution.) • The process holds the code section, data section and OS resources (e.g. open files and signals). • Traditional processes have a single thread of control • Multi-threaded processes have multiple threads of control – the threads share the address space and resources of the process (see the sketch below)
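A minimal pthreads sketch of that sharing: each thread has its own stack and registers, while the global counter lives in the process's shared data section. The names (worker, shared_counter) are illustrative only.

#include <stdio.h>
#include <pthread.h>

static int shared_counter = 0;                       /* data section: shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int local = *(int *)arg;                         /* local variable: lives on this thread's own stack */
    pthread_mutex_lock(&lock);
    shared_counter += local;                         /* every thread updates the same location */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tid[4];
    int args[4] = {1, 2, 3, 4};
    for (int i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, &args[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);
    printf("shared_counter = %d\n", shared_counter); /* prints 10 */
    return 0;
}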

  5. Single and Multithreaded Processes • Threads encapsulate concurrency: the "active" component • Address spaces encapsulate protection: the "passive" part – keeps a buggy program from trashing the system

  6. Processes vs. Threads Which of the following belong to the process and which to the thread? • Program code: process • Local or temporary data: thread • Global data: process • Allocated resources: process • Execution stack: thread • Memory management info: process • Program counter: thread • Parent identification: process • Thread state: thread • Registers: thread

  7. Control Blocks • The thread control block (TCB) contains: – thread state, program counter, registers • PCB′ = everything else (e.g. process ID, open files) • The process control block (PCB) = PCB′ ∪ TCB
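An illustrative C sketch of that split (hypothetical field names and sizes, not any real OS's layout):

/* Per-thread state: what must be saved and restored on a thread switch. */
typedef struct thread_control_block {
    int            thread_state;          /* running, ready, waiting, ... */
    unsigned long  program_counter;
    unsigned long  registers[32];         /* general-purpose register save area */
    struct thread_control_block *next;    /* next thread of the same process */
} tcb_t;

/* Per-process state (PCB'): shared by every thread of the process. */
typedef struct process_control_block {
    int    process_id;
    void  *page_table;                    /* memory-management info */
    int    open_files[16];                /* allocated resources */
    tcb_t *threads;                       /* PCB = PCB' plus the set of TCBs */
} pcb_t;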

  8. Why use threads? • Because threads have minimal internal state, it takes less time to create a thread than a process (10x speedup in UNIX). • It takes less time to terminate a thread. • It takes less time to switch to a different thread. • A multi-threaded process is much cheaper than multiple (redundant) processes.

  9. Examples of Using Threads • Threads are useful for any application with multiple tasks that can be run with separate threads of control. • A word processor may have separate threads for: – user input – spell and grammar check – displaying graphics – document layout • A web server may spawn a thread for each client (a sketch follows this slide) – Can serve clients concurrently with multiple threads. – It takes less overhead to use multiple threads than to use multiple processes.
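A thread-per-client sketch in the spirit of that last bullet. For brevity it is an echo server rather than a web server, the port number (8080) is arbitrary, and error handling is omitted.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>

static void *handle_client(void *arg)
{
    int fd = *(int *)arg;
    free(arg);
    char buf[1024];
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)    /* echo whatever the client sends */
        write(fd, buf, (size_t)n);
    close(fd);
    return NULL;
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof addr);
    listen(listen_fd, 16);

    for (;;) {
        int *client_fd = malloc(sizeof *client_fd);
        *client_fd = accept(listen_fd, NULL, NULL);
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, client_fd);  /* one thread per client */
        pthread_detach(tid);                                    /* no join: thread cleans up after itself */
    }
}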

  10. Examples of multithreaded programs • Most modern OS kernels – Internally concurrent because they have to deal with concurrent requests from multiple users – But no protection is needed within the kernel • Database servers – Access to shared data by many concurrent users – Also background utility processing must be done • Parallel programming (more than one physical CPU) – Split the program into multiple threads for parallelism

  11. Multithreaded Matrix Multiply A × B = C, where C[1,1] = A[1,1]*B[1,1] + A[1,2]*B[2,1] + … and, in general, C[m,n] is the sum of the products of corresponding elements in row m of A and column n of B. Each element of the result can be computed independently.

  12. Multithreaded Matrix Multiply typedef struct { int id; int size; int row, column; matrix_t *MA, *MB, *MC; } matrix_work_order_t; int main() { int size = ARRAY_SIZE, row, column, id; matrix_t MA, MB, MC; matrix_work_order_t *work_orderp; pthread_t peer[size*size]; ...

  13. Multithreaded Matrix Multiply /* process matrix, by row, column */ for( row = 0; row < size; row++ ) for( column = 0; column < size; column++ ) { id = column + row * ARRAY_SIZE; work_orderp = malloc( sizeof(matrix_work_order_t) ); /* initialize all members of work_orderp */ pthread_create( &peer[id], NULL, peer_mult, work_orderp ); } /* wait for all peers to exit */ for( int i = 0; i < size*size; i++ ) pthread_join( peer[i], NULL ); }
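The slides do not show the worker routine itself; a plausible sketch of peer_mult, assuming matrix_t is a size-by-size array of doubles (e.g. typedef double matrix_t[ARRAY_SIZE][ARRAY_SIZE]) and that main fills in row, column, size and the matrix pointers of each work order:

/* Hypothetical worker: computes the single element C[row][column]. */
void *peer_mult(void *arg)
{
    matrix_work_order_t *wo = (matrix_work_order_t *)arg;
    double sum = 0.0;
    for (int k = 0; k < wo->size; k++)
        sum += (*wo->MA)[wo->row][k] * (*wo->MB)[k][wo->column];
    (*wo->MC)[wo->row][wo->column] = sum;
    free(wo);                 /* the work order was malloc'd by main */
    return NULL;
}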

  14. Benefits • Responsiveness: – Threads allow a program to continue running even if part of it is blocked. – For example, a web browser can accept user input while loading an image. • Resource sharing: – Threads share the memory and resources of the process to which they belong. • Economy: – Allocating memory and resources to a process is costly. – Threads are faster to create and faster to switch between. • Utilization of multiprocessor architectures: – Threads can run in parallel on different processors. – A single-threaded process can run on only one processor, no matter how many are available.

  15. Thread Level Parallelism (TLP) • ILP exploits implicit parallel operations within a loop or straight-line code segment • TLP is explicitly represented by the use of multiple threads of execution that are inherently parallel • Goal: use multiple instruction streams to improve 1. throughput of computers that run many programs 2. execution time of multi-threaded programs • TLP could be more cost-effective to exploit than ILP

  16. New Approach: Multithreaded Execution • Multithreading: multiple threads share the functional units of one processor by overlapping their execution – the processor must duplicate the independent state of each thread, e.g. a separate copy of the register file, a separate PC and, for running independent programs, a separate page table – memory is shared through the virtual memory mechanisms, which already support multiple processes – hardware support for fast thread switching; much faster than a full process switch, which takes 100s to 1000s of clocks • When to switch? – Alternate instructions per thread (fine grain) – When a thread is stalled, perhaps on a cache miss, another thread can be executed (coarse grain)

  17. Fine-Grained Multithreading • Switches between threads on each instruction, causing the execution of multiple threads to be interleaved • Usually done in a round-robin fashion, skipping any stalled threads (see the selection sketch below) • The CPU must be able to switch threads every clock • Advantage: it can hide both short and long stalls, since instructions from other threads are executed when one thread stalls • Disadvantage: it slows down the execution of individual threads, since a thread ready to execute without stalls is delayed by instructions from other threads • Used on Sun's Niagara
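A toy C sketch of that round-robin selection policy (purely illustrative; real hardware makes this choice each cycle in the fetch stage, and the stalled[] flags stand in for whatever stall signals the pipeline exposes):

#include <stdbool.h>

#define NUM_THREADS 4

/* Set when a thread cannot issue, e.g. it is waiting on a cache miss. */
static bool stalled[NUM_THREADS];

/* Fine-grained policy: each cycle, pick the next thread after 'last' in
   round-robin order, skipping stalled threads; -1 means nothing can issue. */
int select_next_thread(int last)
{
    for (int i = 1; i <= NUM_THREADS; i++) {
        int t = (last + i) % NUM_THREADS;
        if (!stalled[t])
            return t;
    }
    return -1;
}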

  18. Coarse-Grained Multithreading • Switches threads only on costly stalls, such as L2 cache misses • Advantages – Relieves the need for very fast thread switching – Doesn't slow down a thread, since instructions from other threads are issued only when the thread encounters a costly stall

  19. Coarse-Grained Multithreading • Disadvantage: it is hard to overcome throughput losses from shorter stalls, due to pipeline start-up costs – Since the CPU issues instructions from one thread, when a stall occurs the pipeline must be emptied or frozen – The new thread must fill the pipeline before its instructions can complete • Because of this start-up overhead, coarse-grained multithreading is better for reducing the penalty of high-cost stalls, where pipeline refill time << stall time • Used in the IBM AS/400
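For contrast with the fine-grained sketch above, a toy version of the coarse-grained policy: stay with the current thread until it hits a costly stall (modelled here by an l2_miss_pending[] flag, an illustrative name), then rotate to another ready thread.

#include <stdbool.h>

#define NUM_THREADS 4

/* Set while a thread is blocked on a long-latency event such as an L2 miss. */
static bool l2_miss_pending[NUM_THREADS];

/* Coarse-grained policy: switch only on a costly stall; otherwise keep
   issuing from the same thread. */
int coarse_grained_next(int current)
{
    if (!l2_miss_pending[current])
        return current;                    /* no costly stall: stay put */
    for (int i = 1; i < NUM_THREADS; i++) {
        int t = (current + i) % NUM_THREADS;
        if (!l2_miss_pending[t])
            return t;                      /* pay the pipeline refill cost once */
    }
    return current;                        /* every thread stalled: pipeline idles */
}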

  20. For most applications, most execution units lie idle (for an 8-way superscalar). From: Tullsen, Eggers, and Levy, "Simultaneous Multithreading: Maximizing On-Chip Parallelism," ISCA 1995.

  21. Do both ILP and TLP? • TLP and ILP exploit two different kinds of parallel structure in a program • Could a processor oriented at ILP also exploit TLP? – functional units are often idle in a data path designed for ILP, because of either stalls or dependences in the code • Could TLP be used as a source of independent instructions that keep the processor busy during stalls? • Could TLP be used to employ the functional units that would otherwise lie idle when insufficient ILP exists?

  22. Simultaneous Multi-threading ... [Diagram: issue-slot occupancy over nine cycles for one thread vs. two threads sharing eight functional units] M = Load/Store, FX = Fixed Point, FP = Floating Point, BR = Branch, CC = Condition Codes
