
Quantitative Reasoning for Proving Lock-Freedom
Jan Hoffmann, Michael Marmar, and Zhong Shao


  1. Quantitative Reasoning for Proving Lock-Freedom. Jan Hoffmann, Michael Marmar, and Zhong Shao.

  2. Quantitative Reasoning for Proving Lock-Freedom. Jan Hoffmann, Michael Marmar, and Zhong Shao. (Mike is at LICS too.)

  4. (Diagram: threads accessing concurrent data structures in shared memory, inside a multiprocessor OS kernel.)

  9. Need synchronization to avoid race conditions. (Same diagram, with several threads concurrently reading and writing the shared data structure.)
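
The race conditions on slide 9 can be made concrete with a small added sketch (not from the deck): two threads incrementing a plain shared counter without synchronization can lose updates.

      int counter = 0;                /* shared, no synchronization */

      /* Run by several threads. counter++ is a load, an add, and a store;
         two threads can read the same old value, so one of the two
         increments is lost and the final count is too small. */
      void unsafe_increment(void) {
          counter++;
      }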

  12. Non-Blocking Synchronization
  • Classical synchronization: locks ensure mutual exclusion of threads
    ➡ Performance issues on modern multiprocessor architectures
      • Blocking (busy waiting)
      • Cache coherency (high memory contention)
  • Non-blocking synchronization: shared data is accessed without locks
    ➡ Outperforms lock-based synchronization in many scenarios
      • Interference of threads possible
      • Need to ensure consistency of the data structure

  18. How to Ensure Consistency Without Locks? (Optimistic synchronization.)
  • Attempt to perform an operation
  • Repeat operations after interference has been detected
  • Ensure that a concurrent execution is equivalent to some sequential execution (sequential consistency)
  • Desired properties: linearizability or serializability
    ‣ Different program logics exist, e.g., in [Fu et al. 2010]
    ‣ Contextual refinement, e.g., in [Liang et al. 2013]
  • But: We also need additional progress guarantees.
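
A minimal sketch of the attempt-and-retry pattern from slide 18, written with C11 atomics (this example is added here and is not part of the deck; the counter is just a stand-in for any shared data):

      #include <stdatomic.h>

      atomic_int counter;

      /* Optimistic update: read the current value, compute the new value
         locally, and publish it with a compare-and-swap. If another thread
         interfered between the read and the CAS, the CAS fails and the
         operation is repeated. */
      void increment(void) {
          int old = atomic_load(&counter);
          while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
              /* CAS failed: 'old' has been refreshed with the current value; retry. */
          }
      }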

  21. Sequential Consistency is Not Enough
  (Diagram: two threads interfere with each other and each has to repeat its operation.)
  The data structure is consistent, but the system is stuck (livelock).

  22. Progress Properties
  Let D be a shared-memory data structure with operations π1, ..., πk.
  • Assume a system with m threads that access D exclusively via the operations π1, ..., πk
  • Assume that all code outside the data structure operations πi terminates
  • Fix an arbitrary scheduling of the m threads in which one or more operations πi have been started

  26. Progress Properties
  • A wait-free implementation guarantees that every thread can complete any started operation of the data structure in a finite number of steps (no livelocks and no starvation)
  • A lock-free implementation guarantees that some thread will complete an operation in a finite number of steps (no livelocks)
  • An obstruction-free implementation guarantees the completion of an operation for any thread that eventually executes in isolation
  • Wait-freedom implies lock-freedom
  • Lock-freedom implies obstruction-freedom
  (Liang et al. Characterizing Progress Properties via Contextual Refinements. CONCUR'13.)
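
To make the distinction on slide 26 concrete, here is a small added sketch (not from the deck), again using C11 atomics: on hardware with a native fetch-and-add instruction the increment below is wait-free, while the CAS retry loop shown after slide 18 is lock-free but not wait-free, because an unlucky thread can keep losing CAS races while other threads succeed.

      #include <stdatomic.h>

      atomic_int c;

      /* Wait-free on machines where atomic_fetch_add maps to a single
         hardware instruction: every call finishes in a bounded number of
         steps, no matter what the other threads do. */
      void inc_wait_free(void) {
          atomic_fetch_add(&c, 1);
      }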

  29. Our Results
  New quantitative technique to verify lock-freedom. (Sweet spot: strong progress guarantee and efficient, elegant implementations.)
  • Uses quantitative compensation schemes to pay for possible interference
  • Enables local and modular reasoning (classically: temporal logic and whole-program analysis)
  • Our paper: formalization based on concurrent separation logic [O'Hearn 2007] and quantitative separation logic [Atkey 2010]
  • Running example: Treiber's non-blocking stack (a classic lock-free data structure)
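
A rough illustration of how such a quantitative accounting can work (a simplified counting argument added here, not the paper's formal compensation scheme): suppose each of m threads performs exactly one operation that retries a CAS until it succeeds, like the push of Treiber's stack on slide 35 below. A thread's CAS can only fail because some other thread's CAS succeeded in the meantime, and each of the at most m - 1 successful CASes of the other threads can be blamed for at most one failed attempt of this thread. So every thread runs its loop at most 1 + (m - 1) = m times, and the whole system performs at most m · m = m² loop iterations: each successful CAS "pays" for at most one extra iteration in each of the other threads.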

  35. Treiber's Non-Blocking Stack
  (The stack is a linked list; S is a shared pointer to the top element.)

      struct Node {
        value_t data;
        Node *next;
      };

      Node *S;

      void init() { S = NULL; }

      void push(value_t v) {
        Node *t, *x;
        x = new Node();          // prepare update
        x->data = v;
        do {
          t = S;
          x->next = t;
        } while (!CAS(&S, t, x));
      }

  Compare-and-swap operation: if the address &S contains t, then write x into &S and return true; else return false.
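
For completeness, a sketch of the matching pop operation in the same pseudocode style as the slides (pop does not appear in the frames above; EMPTY is an illustrative placeholder for however an empty stack is reported, and memory reclamation and the ABA problem are ignored here):

      value_t pop() {
        Node *t, *x;
        do {
          t = S;
          if (t == NULL) return EMPTY;   // nothing to pop
          x = t->next;
        } while (!CAS(&S, t, x));        // retry if another thread changed S
        return t->data;
      }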
