A Flexible Approach to Staged Events
Tiago Salmito (tsalmito@inf.puc-rio.br), Ana Lúcia de Moura, Noemi Rodriguez
6th International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2)
October 1st, 2013 – Lyon, France
Concurrency Models
Adya et al.* classify concurrency models along two axes — stack management (automatic vs. manual) and task management (cooperative vs. preemptive):
• Multithreading: automatic stack management, preemptive task management
• Event-driven: manual stack management, cooperative task management
• Cooperative threads: automatic stack management, cooperative task management
* A. Adya, J. Howell, M. Theimer, W. J. Bolosky and J. R. Douceur. Cooperative Task Management Without Manual Stack Management. (2002)
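As an illustration of the cooperative-threads quadrant, a plain Lua coroutine provides automatic stack management (local variables survive across yields) with cooperative task management (control is handed over explicitly). This sketch is illustrative only and is not part of the original slides:

```lua
-- Two cooperative tasks: each keeps its own stack automatically,
-- but must yield control explicitly (cooperative task management).
local function make_task(name, steps)
  return coroutine.create(function()
    for i = 1, steps do
      print(name .. " step " .. i)
      coroutine.yield()          -- cooperative: hand control back
    end
  end)
end

local tasks = { make_task("A", 2), make_task("B", 2) }

-- Round-robin scheduler: resume each task until all are dead.
local alive = true
while alive do
  alive = false
  for _, co in ipairs(tasks) do
    if coroutine.status(co) ~= "dead" then
      coroutine.resume(co)
      alive = true
    end
  end
end
-- Interleaved output: A step 1, B step 1, A step 2, B step 2
```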
Hybrid Concurrency Models
• Models that combine threads and events
• Programming-model bias:
– Hybrid event-driven: runs more than one concurrent event loop
– Hybrid thread-based: converts (user) threads into cooperative events at runtime
– Staged event-driven: no clear bias towards events or threads; pipeline processing
The Staged Model
• Inspired by SEDA – Staged Event-Driven Architecture
• Flexibility: exposes both concurrency models
• Characteristics:
– Applications are designed as a collection of stages
– Stages are multithreaded modules
– Asynchronous processing (event-driven communication)
• Decoupled scheduling
– Local policies
– Resource aware
Stages
[Figure: anatomy of a stage — incoming events are queued in an event queue; a scheduler dispatches them to the event handler, which runs on a thread pool; a controller observes the stage's state and adjusts its parameters]
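The event-queue/handler/dispatch structure in the figure can be modeled in a few lines of plain Lua. This is a simplified, single-threaded sketch for illustration only — in the staged model, handlers run on a thread pool under a scheduler and controller:

```lua
-- Minimal single-threaded model of a stage: an event queue,
-- an event handler, and a dispatch loop standing in for the scheduler.
local function new_stage(handler)
  return { queue = {}, handler = handler }
end

local function enqueue(stage, event)
  table.insert(stage.queue, event)      -- events arrive asynchronously
end

local function dispatch(stage)
  while #stage.queue > 0 do
    local event = table.remove(stage.queue, 1)
    stage.handler(event)                -- handler processes one event
  end
end

-- Usage: a stage whose handler echoes events.
local echo = new_stage(function(e) print("handled: " .. e) end)
enqueue(echo, "ping")
enqueue(echo, "pong")
dispatch(echo)
```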
Stages: Some Issues
• Coupling
– In the specification: hinders reuse
– In the execution: single (shared) address space
• Use of operating-system threads
– Thread sharing
• Local and global state sharing
– Race conditions
– Distributed resources
Extending the Staged Model
• Objective: decoupling decisions related to the application logic from decisions related to the execution environment
• Characteristics
– Stepwise application development
– Stage composition and reuse
– Cooperative execution with multiple threads
PCAM Design Methodology
• Partitioning: functional or domain decomposition
• Communication: data exchange
• Agglomeration: processing and communication granularity
• Mapping: mapping tasks to processors
[Figure: PCAM flow — Problem → Partitioning → Communication → Agglomeration → Mapping]
Stepwise Development
• Programming stages
– Functional decomposition
– State isolation: transient state
– Domain decomposition: persistent state
– Atomic execution
• Communication
– Connectors: application graph
– Output ports and event queues
• Agglomeration
– Clusters of stages
– Scheduling domain
• Mapping
– Execution locality
– Per-process controllers
[Figure: the same five-stage application graph at each step — stages wired by connectors (Communication), stages grouped into clusters (Agglomeration), and clusters assigned to processes, each with its own controller (Mapping)]
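Using only the Leda API that appears in the echo-server example elsewhere in the deck (leda.stage, leda.send, leda.graph, and the `stage "port" .. stage` connector syntax), the programming and communication steps might be sketched as below. The stage bodies are placeholders, and the agglomeration/mapping configuration is omitted since its API is not shown in the slides:

```lua
require 'leda'

-- Programming: decompose the application into stages with isolated state.
local produce = leda.stage(function()
  for i = 1, 3 do
    leda.send("out", i)          -- emit an event on the "out" port
  end
end)

local transform = leda.stage(function(n)
  leda.send("out", n * 2)        -- transient state only: nothing shared
end)

local consume = leda.stage(function(n) print(n) end)

-- Communication: connectors wire output ports to downstream stages,
-- forming the application graph.
local graph = leda.graph{
  produce   "out" .. transform,
  transform "out" .. consume,
}

graph:run()
```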
Leda
• Distributed platform for staged applications
• Implemented in C and Lua
– Scripting environment
– Use of C for CPU-intensive operations
• Declarative application description
– Application graph
– Execution configuration
Example: Echo Server

require 'leda'

local port = 5000

local server = leda.stage{
  handler = function()
    local server_sock = assert(socket.bind("*", port))
    while true do
      local cli_sock = assert(server_sock:accept())
      leda.send("client", cli_sock)
    end
  end,
  init = function() require 'leda.utils.socket' end,
}:push()

local reader = leda.stage{
  handler = function(sock)
    repeat
      local msg, err = sock:receive()
      leda.send("message", msg)
    until msg == nil
  end
}

local echo = leda.stage(function(msg) print(msg) end)

local graph = leda.graph{
  server "client" .. reader,
  reader "message" .. echo,
}

graph:run()
Evaluation
[Figure: evaluation setup — a three-stage pipeline (Stage 1, Stage 2, Stage 3) with a work generator (Workgen) and a Reducer]
Internal Statistics
[Figures: internal runtime statistics collected under a worst-case scenario and a best-case scenario]
Final Remarks
• Hybrid concurrency
– Event-driven, thread-based, or staged
• An extension to the staged model
– Stepwise application development
• Implementation of a distributed platform for staged applications: Leda
Extra: Runtime Architecture
[Figure: runtime architecture — an application graph of stages (S1–S8) grouped into clusters; each process hosts one cluster with per-stage event queues, idle-instance and ready queues, a scheduler driving a thread pool, marshalling of events to/from other processes, asynchronous I/O interfaces for waiting instances, and a controller collecting statistics]