On the Benefits of Being Optimistic and Relaxed – Petr Kuznetsov (PowerPoint presentation)



SLIDE 1

On the Benefits of Being 
 Optimistic and Relaxed

Petr Kuznetsov

INFRES, Télécom ParisTech

Joint work with Srivatsan Ravi (TU Berlin) and Vincent Gramoli (U Sydney)

SLIDE 2

What is computing?

SLIDES 3–8 (figures only, no text)

SLIDE 9

Amdahlʼs Law

  • p – fraction of the work that can be done in parallel (no synchronization); the remaining 1-p requires synchronization

  • n – the number of processors

Speedup: S = 1 / ((1-p) + p/n)

For n=9, p=9/10: S=5! And S<10, regardless of n!

Minimizing synchronization costs is crucial!
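The arithmetic on this slide is easy to check directly (a transcription of Amdahl's formula, not code from the talk):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup with fraction p parallelizable on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# The slide's example: p = 9/10 on n = 9 processors gives a speedup of only 5.
print(amdahl_speedup(0.9, 9))      # ≈ 5.0
# As n grows, S only approaches 1/(1-p) = 10, never reaching it.
print(amdahl_speedup(0.9, 10**9))  # just under 10
```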

slide-10
SLIDE 10

10

But…

  • Concurrent programming is hard

 A new algorithm is worth a PhD (or at least a paper)

  • Sequential programming is “easy”

 e.g., for data structures (queues, trees, skip lists, hash tables, …)

  • What about a “wrapper” that allows running sequential operations concurrently?

 Let the wrapper take care of conflicts

  • How? Locks, transactional memory, …
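The crudest such wrapper is a single lock around every sequential operation (a minimal sketch of the idea; the `synchronized` helper and its name are mine, not from the talk):

```python
import threading

def synchronized(obj):
    """Wrap an object so that every method call runs under one shared lock."""
    lock = threading.Lock()

    class Wrapper:
        def __getattr__(self, name):
            attr = getattr(obj, name)
            if not callable(attr):
                return attr
            def locked(*args, **kwargs):
                with lock:  # coarse-grained: one operation at a time
                    return attr(*args, **kwargs)
            return locked

    return Wrapper()

# The user keeps writing sequential calls; the wrapper handles conflicts.
s = synchronized(set())
s.add(1)
s.add(2)
```

This is correct but admits almost no concurrency; the rest of the talk asks how much better a wrapper can do.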
SLIDE 11

Our contribution

  • What it means to share a sequential program

 Locally serializable (LS) linearizability

  • What it means for sharing to be efficient

 Relative concurrency of different synchronization techniques (e.g., locks vs. TMs)

  • What are the benefits of being relaxed and optimistic

 Type-specific (relaxed) consistency and transactional (optimistic) concurrency control supersede both

SLIDE 12

Correctly sharing sequential code?

Given a sequential implementation P of a data-structure type T, provide a concurrent one that:

  • Locally appears sequential – the user simply runs the sequential code of P

 Local serializability

  • Globally makes sense – the high-level operations are consistent with a global order w.r.t. T

 Linearizability [HW90]

SLIDE 13

Example: Integer Set

Type Integer Set:

  • boolean insert(x)
  • boolean remove(x)
  • boolean contains(x)

Implemented sequentially as a sorted linked list:

h → 2 → 5 → 7 → 9 → … → t
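A minimal sequential version of this sorted-list set in Python (an illustrative sketch with sentinel head/tail nodes; class and method names are mine, not the talk's):

```python
class Node:
    def __init__(self, key, nxt=None):
        self.key, self.nxt = key, nxt

class IntegerSet:
    """Sorted linked list with sentinel head (h) and tail (t) nodes."""
    def __init__(self):
        self.tail = Node(float("inf"))
        self.head = Node(float("-inf"), self.tail)

    def _find(self, x):
        # Return (pred, curr) with pred.key < x <= curr.key.
        pred, curr = self.head, self.head.nxt
        while curr.key < x:
            pred, curr = curr, curr.nxt
        return pred, curr

    def insert(self, x):
        pred, curr = self._find(x)
        if curr.key == x:
            return False          # already present
        pred.nxt = Node(x, curr)
        return True

    def remove(self, x):
        pred, curr = self._find(x)
        if curr.key != x:
            return False          # not present
        pred.nxt = curr.nxt       # unlink
        return True

    def contains(self, x):
        _, curr = self._find(x)
        return curr.key == x
```

This is exactly the kind of sequential code the “wrapper” of the previous slide would run concurrently.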

SLIDE 14

Linearizable histories

[Figure: processes p1, p2, p3 with operations insert(3)→true, contains(1)→true, insert(1)→true, remove(1)→true]

The history is equivalent to a legal sequential history on the set (real-time order preserved)
SLIDE 15

Linked-list for Integer Set: sequential implementation

SLIDE 16

As is?

Not LS-linearizable: locally serializable, but not linearizable…

[Figure: list H → 2 → 3 → T; concurrent Insert(3) and Insert(5)]

The update is lost!

SLIDE 17

[Figure: processes p1, p2, p3 with operations insert(3)→ok, contains(3)→false, insert(2)→ok, insert(5)→ok]

How to fix it?

  • protect data items (critical sections),
  • provide roll-backs (transactions)
SLIDE 18

Locking schemes for a linked‐list

Coarse‐grained locking

2‐phase locking

Hand‐over‐hand locking: relaxed for specific data structures
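The hand-over-hand (lock-coupling) scheme mentioned above can be sketched as follows, assuming each node carries its own lock (an illustrative sketch, not the talk's code; only a `contains` traversal is shown):

```python
import threading

class LockedNode:
    """List node carrying its own lock (for lock coupling)."""
    def __init__(self, key, nxt=None):
        self.key, self.nxt = key, nxt
        self.lock = threading.Lock()

def contains_hoh(head, x):
    """Traverse holding at most two locks at a time: acquire the next
    node's lock before releasing the previous one."""
    pred = head
    pred.lock.acquire()
    curr = pred.nxt
    curr.lock.acquire()
    try:
        while curr.key < x:
            pred.lock.release()        # hand over...
            pred, curr = curr, curr.nxt
            curr.lock.acquire()        # ...hand
        return curr.key == x
    finally:
        pred.lock.release()
        curr.lock.release()
```

Because locks are acquired in list order and released behind the traversal, operations on disjoint parts of the list can proceed concurrently, which coarse-grained and two-phase locking forbid.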

SLIDE 19

Optimistic wrapper for a linked-list
Plenty of STMs exist:

  • Opaque
  • Strictly serializable
  • Elastic

[Figure: each operation wrapped as a transaction: startTxn … tryCommit]
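The optimistic pattern behind startTxn/tryCommit (run the sequential code without locks, then validate and commit, retrying on conflict) can be sketched with a single version counter standing in for an STM. This is a deliberately toy sketch over one shared value, not any of the STMs listed above:

```python
import threading

class OptimisticCell:
    """Toy optimistic concurrency control over a single shared value."""
    def __init__(self, value=0):
        self._value = value
        self._version = 0
        self._commit_lock = threading.Lock()

    def transact(self, fn):
        """Run fn(old_value) -> new_value optimistically; retry on conflict."""
        while True:
            v = self._version              # startTxn: record the version
            new = fn(self._value)          # run sequential code, no locks held
            with self._commit_lock:        # tryCommit
                if self._version == v:     # validate: nobody committed meanwhile
                    self._value = new
                    self._version += 1
                    return new
            # conflict detected: another transaction committed; retry
```

A transaction only pays for synchronization at commit time; uncontended operations run at nearly sequential speed, which is the “optimistic” bet.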

SLIDE 20

What about efficiency?

“Amount of concurrency”: the sets of accepted schedules (orderings of sequential steps)

 Each (correct) implementation accepts a subset of schedules
 The more schedules are accepted, the better

Which technique provides most concurrency?

SLIDE 21

Relaxation vs. Optimism

PL: deadlock-free (fine-grained) lock-based implementations
M: strongly consistent (serializable) TMs
R: relaxed (data-type-aware) TMs

[Figure: relation between the schedule sets PL, M, and R]

SLIDE 22 (figure only)

SLIDE 23

  • Accepted by hand-over-hand locking

 p1 and p3 are consistent with different serializations

[Figure: list H → 1 → 3 → 4 → T; processes p1, p2, p3]

Accepted by locks

SLIDE 24

  • T1 → T2 (T1 read X1 before T2 updated it)
  • T2 → T3 (T3 sees the effect of T2)
  • T3 → T1 (T1 sees the effect of T3)

The dependency cycle T1 → T2 → T3 → T1 means the history is not serializable!

[Figure: list H → 1 → 3 → 4 → T]

SLIDE 25

  • No writes: accepted by M

  • Both ops are about to write: rejected by M (at least one txn aborts)

  • But must be accepted by PL too!

SLIDE 26

  • R accepts every observably correct schedule of (LL, set)

  • R accepts the linearizable but not serializable schedules of non-conflicting updates (1)

  • R accepts the potentially conflicting-update schedule (2)

[Figures (1) and (2)]

SLIDE 27

Results and implications

  • What is a correct concurrent wrapper?

 LS-linearizability

  • How to measure the relative efficiency of concurrent wrappers?

 Accepted schedule sets

  • The benefits of relaxation and optimism are formally captured

  • A language to reason about the “best” synchronization technique and the “most suitable” data structure

SLIDE 28

Open questions

  • Extend the results to more general classes of types and data structures

  • Workload analysis: which schedules are relevant?

  • What does concurrency tell us? What is the cost of concurrency?

Details: http://arxiv.org/abs/1203.4751

SLIDE 29

Distributed ≠ Parallel

  • The main challenge is efficient and robust synchronization

  • “You know you have a distributed system when the crash of a computer you’ve never heard of stops you from getting any work done” (Lamport)

SLIDE 30

Merci beaucoup! (Thank you very much!)