
Horizon: Runtime Efficient Event Scheduling in Multi-threaded Network Simulation

Georg Kunz, Mirko Stoffers, James Gross, Klaus Wehrle


  1. Horizon: Runtime Efficient Event Scheduling in Multi-threaded Network Simulation
     Georg Kunz, Mirko Stoffers, James Gross, Klaus Wehrle
     Communication and Distributed Systems, http://www.comsys.rwth-aachen.de/
     OMNeT++ Workshop, SimuTools, March 2011

  2. Motivation
     - Need for Complex Network Simulation Models
       - Detailed channel and PHY characteristics
       - Large-scale P2P and Internet backbone models
       - High processing and runtime demand
     - Proliferation of Multi-processor Systems
       - Desktop: 4-8 cores, servers: 24 cores
       - "Desktop Cluster": cheap, powerful commodity hardware

  3. Motivation
     - Need for Complex Network Simulation Models
       - Detailed channel and PHY characteristics
       - Large-scale P2P and Internet backbone models
       - High processing and runtime demand
     - Proliferation of Multi-processor Systems
       - Desktop: 4-8 cores, servers: 24 cores
       - "Desktop Cluster": cheap, powerful commodity hardware
     - Utilize Parallelization to Cut Runtimes?

  4. Motivation: Downside of Parallelization
     - Parallelization Introduces Overhead
       - Thread synchronization, management of shared data
       - Increased management overhead per event
       - Negative impact on events of low complexity

  5. Motivation: Downside of Parallelization
     - Parallelization Introduces Overhead
       - Thread synchronization, management of shared data
       - Increased management overhead per event
       - Negative impact on events of low complexity
     - Dilemma / Tradeoff: Performance vs. Overhead

  6. Motivation: Downside of Parallelization
     - Parallelization Introduces Overhead
       - Thread synchronization, management of shared data
       - Increased management overhead per event
       - Negative impact on events of low complexity
     - Dilemma / Tradeoff: Performance vs. Overhead
     - Minimize Parallelization Overhead

  7. Horizon: Approach
     - Horizon
       - Focus on multi-processor systems
       - Centralized architecture
       - Conservative synchronization
       - Determine independent events
     - Expanded Events
       - Modeling paradigm
       - Per-event lookahead
       - Identify independent events
     [Diagram: simulation model mapped onto a computing cluster / CPUs]

  8. Horizon: Approach
     - Horizon
       - Focus on multi-processor systems
       - Centralized architecture
       - Conservative synchronization
       - Determine independent events
     - Expanded Events
       - Modeling paradigm
       - Per-event lookahead
       - Identify independent events
     [Diagram: simulation model]

  9. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time

  10. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time
     [Timeline diagram: an expanded event]

  11. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time
     [Timeline diagram: an expanded event from t_start to t_end]

  12. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time
     [Timeline diagram: an expanded event from t_start to t_end; processing is triggered at t_start]

  13. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time
     [Timeline diagram: an expanded event from t_start to t_end; processing is triggered at t_start, results are fetched at t_end]

  14. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time
     [Timeline diagram: an expanded event from t_start to t_end; processing is triggered at t_start, results are fetched at t_end; the interval in between is the parallelization window]
     - Independent Events
       - Events starting between t_start and t_end
       - Do not depend on results generated by the overlapping event
       - Modeling paradigm

  15. Horizon: Expanded Events
     - Expanded Events
       - Model processes that span a period of time
       - Augment discrete events with durations
       - Discrete events span a period of simulated time
     [Timeline diagram: two overlapping expanded events, the second starting between t_start and t_end of the first]
     - Independent Events
       - Events starting between t_start and t_end
       - Do not depend on results generated by the overlapping event (see the code sketch below)
       - Modeling paradigm
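To make the independence rule above concrete, here is a minimal C++ sketch. The names (ExpandedEvent, simtime_t, startsInsideWindow) are illustrative assumptions, not the Horizon or OMNeT++ API: an event is annotated with a duration, and a second event may be offloaded in parallel if its timestamp falls inside the running event's parallelization window [t_start, t_end].

    using simtime_t = double;   // assumption: double-valued simulated time

    // A discrete event augmented with a processing duration, as on the slides.
    struct ExpandedEvent {
        simtime_t tStart;       // timestamp at which processing is triggered
        simtime_t duration;     // modeled duration of the process
        simtime_t tEnd() const { return tStart + duration; }
    };

    // Conservative independence test: 'next' may be processed while 'running'
    // is still in flight if it starts inside the parallelization window
    // [tStart, tEnd] of 'running'. By the modeling paradigm, such an event
    // must not depend on results produced by the overlapping event.
    inline bool startsInsideWindow(const ExpandedEvent& running,
                                   const ExpandedEvent& next) {
        return next.tStart >= running.tStart && next.tStart <= running.tEnd();
    }

A centralized scheduler can then use the end time of the event currently being processed as a per-event lookahead horizon and offload every queued event whose timestamp still falls inside that window.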

  16. Challenges
      How to reduce parallelization overhead?

  17. Challenges and Solutions
     - We Address Two Challenges
       - Thread Synchronization Overhead
       - Event Scheduling Overhead

  18. Challenges and Solutions
     - We Address Two Challenges
       - Thread Synchronization Overhead
       - Event Scheduling Overhead

  19. Thread Synchronization Overhead: Challenge
     - Master/Worker Architecture
       - Master coordinates simulation progress
       - Workers do the actual processing
     - Synchronization involves
       - Workers waiting for incoming jobs
       - Access to shared data structures
     - Straightforward Implementation
       - Locks, condition variables
       - Workers pull jobs from a work queue
       - If the lock is occupied or no job is available: suspend thread, free up CPU resources
     [Diagram: future event set, event scheduler]

  20. Thread Synchronization Overhead: Challenge
     - Master/Worker Architecture
       - Master coordinates simulation progress
       - Workers do the actual processing
     - Synchronization involves
       - Workers waiting for incoming jobs
       - Access to shared data structures
     - Straightforward Implementation
       - Locks, condition variables
       - Workers pull jobs from a work queue
       - If the lock is occupied or no job is available: suspend thread, free up CPU resources
     [Diagram: future event set, event scheduler, work queue]

  21. Thread Synchronization Overhead: Challenge
     - Master/Worker Architecture
       - Master coordinates simulation progress
       - Workers do the actual processing
     - Synchronization involves
       - Workers waiting for incoming jobs
       - Access to shared data structures
     - Straightforward Implementation
       - Locks, condition variables
       - Workers pull jobs from a work queue
       - If the lock is occupied or no job is available: suspend thread, free up CPU resources
     [Diagram: future event set, event scheduler, work queue, workers]

  22. Thread Synchronization Overhead: Challenge
     - Master/Worker Architecture
       - Master coordinates simulation progress
       - Workers do the actual processing
     - Synchronization involves
       - Workers waiting for incoming jobs
       - Access to shared data structures
     - Straightforward Implementation
       - Locks, condition variables
       - Workers pull jobs from a work queue
       - If the lock is occupied or no job is available: suspend thread, free up CPU resources (see the code sketch below)
     [Diagram: future event set, event scheduler, work queue, workers]
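To illustrate the overhead of this straightforward design, the following C++ sketch shows a lock-and-condition-variable work queue of the kind described above (names such as WorkQueue and workerLoop are illustrative; this is not the Horizon implementation). Workers pull jobs from a shared queue; when no job is available they suspend on a condition variable and free their CPU.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <utility>

    class WorkQueue {
    public:
        // Master (event scheduler) side: enqueue a job and wake one worker.
        void push(std::function<void()> job) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                jobs_.push(std::move(job));
            }
            cv_.notify_one();
        }

        // Worker side: block until a job is available, then take it.
        std::function<void()> pop() {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !jobs_.empty(); });  // suspend thread, free up CPU
            std::function<void()> job = std::move(jobs_.front());
            jobs_.pop();
            return job;
        }

    private:
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> jobs_;
    };

    // Worker loop: an empty std::function serves as a shutdown signal here.
    void workerLoop(WorkQueue& queue) {
        for (;;) {
            std::function<void()> job = queue.pop();
            if (!job) return;
            job();
        }
    }

Every event dispatched through this path pays for at least one lock acquisition and, whenever the queue runs empty, a full thread suspend and wake-up, which is exactly the per-event synchronization overhead the talk sets out to minimize.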
