RCUArray: An RCU-like Parallel-Safe Distributed Resizable Array
By Louis Jenkins
The Problem: Parallel-Safe Resizing
• Accessing memory while it is being resized is not inherently thread-safe
  • Memory has to be 'moved' from the smaller storage into the larger storage
• Concurrent loads and stores can result in undefined behavior
  • Stores issued after memory is moved can be lost entirely
  • Loads and stores issued after the smaller storage is reclaimed can produce undefined behavior
• Why not just synchronize access?
  • Not scalable
• What do we need?
  1. Allow concurrent access to both the smaller and the larger storage
  2. Ensure safe memory management of the smaller storage
  3. Ensure that stores to the old memory are visible in the larger storage
[Figure: a Load and a Store racing with the move from the smaller storage to the larger storage]
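The lost-store hazard can be demonstrated deterministically even in a single thread: once the contents have been 'moved', any store through a stale reference to the smaller storage vanishes. A minimal Go sketch (all names are illustrative):

```go
package main

import "fmt"

// resizeThenStore simulates the race: a writer still holding a
// reference to the smaller storage performs a store after the
// contents have already been moved into the larger storage.
func resizeThenStore() int {
	smaller := make([]int, 4)
	smaller[0] = 1

	// Resize: 'move' the smaller storage into the larger storage.
	larger := make([]int, 8)
	copy(larger, smaller)

	smaller[1] = 42 // store lands in the old storage...

	return larger[1] // ...and is lost entirely in the new one
}

func main() {
	fmt.Println(resizeThenStore()) // prints 0: the store of 42 was lost
}
```

In a real concurrent resize the interleaving is nondeterministic, which is what makes requirement 3 above (stores to old memory must be visible in the larger storage) necessary.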
Read-Copy-Update (RCU)
• Synchronization strategy that favors the performance of readers over writers
  • Read the current snapshot t
  • Copy t to create t′
  • Apply the update to t′; t′ becomes the new current snapshot
• Not applicable in all situations
  • Must be safe to access at least two different snapshots of the same data
• Read-Copy-Update vs. Reader-Writer Locks:
  • Readers concurrent with readers (both)
  • Writers mutually exclusive with writers (both)
  • RCU: readers concurrent with writers; RW locks: readers mutually exclusive with writers
[Figure: process P copies snapshot T = c1 to create T′ = c1, c2; readers may safely access either T or T′]
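The read/copy/update steps above can be sketched with an atomically published snapshot pointer. A minimal Go sketch (the type and function names are illustrative; Go's garbage collector stands in for RCU's deferred reclamation):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// snapshot is an immutable view of the data; an update never
// mutates a published snapshot in place.
type snapshot struct{ data []int }

var (
	current atomic.Pointer[snapshot] // readers load this without locking
	writeMu sync.Mutex               // writers mutually exclusive with writers
)

// read runs concurrently with readers AND writers: it always sees
// one complete snapshot, whether the old t or the new t'.
func read(i int) int {
	return current.Load().data[i]
}

// update performs read-copy-update: read t, copy it to t', apply
// the change to t', then publish t' as the new current snapshot.
func update(v int) {
	writeMu.Lock()
	defer writeMu.Unlock()
	t := current.Load()
	tPrime := &snapshot{data: append(append([]int{}, t.data...), v)}
	current.Store(tPrime) // old snapshot reclaimed once unreachable
}

func main() {
	current.Store(&snapshot{data: []int{1}})
	update(2)
	fmt.Println(read(0), read(1)) // 1 2
}
```

Without a garbage collector, reclaiming the old snapshot requires waiting out a grace period (e.g. epoch- or quiescent-state-based reclamation), which is exactly the safe-memory-management requirement noted earlier.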
Distributed RCU
• Privatization and snapshots
  • Each node (locale) in the cluster has its own local snapshot
  • All local snapshots point to the same block
• Reader concurrency
  • Readers read from their local snapshot only
  • All readers, regardless of node, see the same block
  • All stores to c1 are seen by any snapshot on any node
• Writer mutual exclusion
  • Use a distributed lock
  • Perform each update local to each node
[Figure: Locales #0–#3, each with a privatized snapshot; initially T = c1 on every locale, and after an update T′ = c1, c2 on every locale, with both snapshots sharing block c1]
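The per-locale scheme can be simulated within a single process: "locales" become slots in an array of snapshot pointers, and a plain mutex stands in for the distributed lock. A Go sketch (all names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const numLocales = 4

// Blocks are shared across locales; a snapshot lists the blocks it
// can see. Stores into an existing block (c1) are visible through
// every snapshot that includes it, old or new, on any locale.
type block []int
type snapshot struct{ blocks []block }

var (
	perLocale [numLocales]atomic.Pointer[snapshot] // privatized snapshots
	writeMu   sync.Mutex                           // distributed-lock stand-in
)

// readerAt touches only its own locale's snapshot.
func readerAt(locale, blockIdx, off int) int {
	return perLocale[locale].Load().blocks[blockIdx][off]
}

// grow appends a new block and installs the new snapshot on each
// locale in turn; readers on every locale keep working throughout,
// since old and new snapshots share the existing blocks.
func grow(blockSize int) {
	writeMu.Lock()
	defer writeMu.Unlock()
	old := perLocale[0].Load()
	newSnap := &snapshot{
		blocks: append(append([]block{}, old.blocks...), make(block, blockSize)),
	}
	for i := range perLocale {
		perLocale[i].Store(newSnap) // update applied local to each node
	}
}

func main() {
	c1 := make(block, 2)
	c1[0] = 7
	first := &snapshot{blocks: []block{c1}}
	for i := range perLocale {
		perLocale[i].Store(first)
	}
	grow(2)
	c1[1] = 9 // store to the old block: visible via the new snapshot too
	fmt.Println(readerAt(3, 0, 0), readerAt(0, 0, 1)) // 7 9
}
```

In the actual distributed setting, each locale's pointer update happens on that locale, so readers never cross a node boundary on the fast path; the shared block c1 plays the role of the old storage that both snapshot generations can still access.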