  1. Authenticated Storage Using Small Trusted Hardware Hsin-Jung Yang, Victor Costan, Nickolai Zeldovich, and Srini Devadas Massachusetts Institute of Technology November 8th, CCSW 2013

  2. Cloud Storage Model

  3. Cloud Storage Requirements
     • Privacy – solution: encryption at the client side (see the sketch below)
     • Availability – solution: appropriate data replication
     • Integrity – solution: digital signatures & message authentication codes
     • Freshness – hard to guarantee due to replay attacks
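As a concrete illustration of client-side encryption, here is a hedged sketch that encrypts each block under a key that never leaves the client. The cipher (AES-GCM) and the third-party `cryptography` package are assumptions; the slides do not name a scheme. Note that an AEAD mode also gives per-block integrity, but not freshness: a replayed old ciphertext still decrypts correctly, which is exactly the replay problem the next slides illustrate.

```python
# Minimal sketch of "encryption at the client side"; assumes the third-party
# `cryptography` package and AES-GCM. Key management is out of scope here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # stays on the client, never sent
aesgcm = AESGCM(key)

def encrypt_block(plaintext: bytes, block_number: int) -> bytes:
    nonce = os.urandom(12)                   # unique per encryption
    aad = block_number.to_bytes(8, "big")    # bind ciphertext to its block slot
    return nonce + aesgcm.encrypt(nonce, plaintext, aad)

def decrypt_block(blob: bytes, block_number: int) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ct, block_number.to_bytes(8, "big"))
```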

  4-13. Cloud Storage: Replay Attack – animated figure (User A, Cloud Server, User B): the server answers a read with stale, previously valid data instead of the latest write

  14. Cloud Storage: Replay Attack – software solution: the two users contact each other directly (User A, Cloud Server, User B)

  15-23. Solution: Adding Trusted Hardware
      • Single-chip option: secure NVRAM + computational engines on one chip – slow under an NVRAM process!
      • Two-chip option: a state chip (S chip) with secure NVRAM (a smart card), securely paired with a processing chip (P chip) holding the computational engines (FPGA / ASIC) – fast!

  24. Outline
      • Motivation: Cloud Storage and Security Challenges
      • System Design
        – Threat Model & System Overview
        – Security Protocols
        – Crash Recovery Mechanism
      • Implementation
      • Evaluation
      • Conclusion

  25-29. Threat Model
      • Untrusted connections
      • Disk attacks and hardware failures
      • Untrusted server that may (1) send a wrong response, (2) pretend to be a client, (3) maliciously crash, (4) disrupt the P chip's power
      • Clients may try to modify other clients' data

  30. System Overview
      • Client <-> S-P chip pair: shared HMAC key
      • S-P chip pair: integrity/freshness checks, system state storage & updates, signing responses
      • Server: communication, scheduling, disk I/O

  31. Security Protocols • Message Authentication • Memory Authentication • Write Access Control • System State Protection against Power Loss

  32. Design: Message Authentication
      • Untrusted network between client and server – solution: HMACs
      • Session-based protocol (figure; see the sketch below): the S-P chip pair holds the endorsement key pair (PubEK, PrivEK) and presents PubEK with its certificate Ecert; the client generates a session HMAC key and sends {HMAC key}_PubEK through the server; the chip decrypts the key, so client and chip share it while the server only ever sees the encrypted HMAC key
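A hedged sketch of this session setup follows. RSA-OAEP standing in for the endorsement-key encryption and the third-party `cryptography` package are assumptions; the slide does not name the scheme.

```python
# Sketch of the slide-32 session protocol. RSA-OAEP is an assumed stand-in for
# encrypting the HMAC key to the chip's PubEK; Ecert checking is elided.
import hashlib, hmac, os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# S-P chip pair: endorsement key pair (PubEK, PrivEK), vouched for by Ecert.
priv_ek = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub_ek = priv_ek.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client: after verifying Ecert, pick a session HMAC key and encrypt it to PubEK.
hmac_key = os.urandom(32)
encrypted_key = pub_ek.encrypt(hmac_key, oaep)   # {HMAC key}_PubEK, relayed by server

# Chip: decrypt; client and chip now share the key, the server never sees it.
chip_key = priv_ek.decrypt(encrypted_key, oaep)
assert chip_key == hmac_key

def tag(key: bytes, message: bytes) -> bytes:
    """Authenticate each request/response over the untrusted network."""
    return hmac.new(key, message, hashlib.sha256).digest()
```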

  33. Security Protocols • Message Authentication • Memory Authentication • Write Access Control • System State Protection against Power Loss

  34. Design: Memory Authentication
      • Data protection against an untrusted disk
      • Block-based cloud storage API – easy to reason about its security (see the sketch below):
        – Fixed block size (1 MB)
        – write(block number, block)
        – read(block number) -> block
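A toy, hypothetical in-memory version of this two-call interface, with real storage, message authentication, and Merkle checks elided:

```python
BLOCK_SIZE = 1 << 20  # fixed 1 MB block size, as on the slide

class BlockStore:
    """Toy in-memory stand-in for the block-based storage API."""
    def __init__(self, num_blocks: int):
        self.num_blocks = num_blocks
        self.blocks: dict[int, bytes] = {}

    def write(self, block_number: int, block: bytes) -> None:
        assert 0 <= block_number < self.num_blocks
        assert len(block) == BLOCK_SIZE, "fixed-size blocks keep the reasoning simple"
        self.blocks[block_number] = block

    def read(self, block_number: int) -> bytes:
        assert 0 <= block_number < self.num_blocks
        return self.blocks.get(block_number, bytes(BLOCK_SIZE))  # unwritten blocks read as zeros
```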

  35-38. Design: Memory Authentication – Solution: Merkle Tree
      • The disk is divided into many blocks B1…B8
      • Leaf hashes: h1 = H(B1), …; internal nodes hash their children: h12 = H(h1 || h2), up to the root hash h1..8
      • The root hash h1..8 is securely stored; a read is verified by recomputing the hash path from the block up to the root
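A minimal sketch of this structure over eight blocks, assuming SHA-256 for H (the slides do not name the hash). In the real system the server supplies the sibling hashes along the path rather than the whole tree:

```python
# Merkle tree matching slides 35-38: h_i = H(B_i), h12 = H(h1 || h2), ...,
# root = h1..8. Verification recomputes the leaf-to-root path.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    level = [H(b) for b in blocks]          # leaf hashes h1..h8
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                              # tree[-1][0] is the root hash

def verify_block(index: int, block: bytes, tree) -> bool:
    """Recompute the path for the block at `index` and compare to the root.
    Siblings are read from the tree here; the server would supply them."""
    h = H(block)
    for level in tree[:-1]:
        sibling = level[index ^ 1]           # the other child of the same parent
        h = H(h + sibling) if index % 2 == 0 else H(sibling + h)
        index //= 2
    return h == tree[-1][0]

blocks = [bytes([i]) * 16 for i in range(8)]
tree = build_tree(blocks)
assert verify_block(3, blocks[3], tree)      # B4 checks out against the root
assert not verify_block(3, b"tampered", tree)
```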

  39. Merkle Tree Caching
      • Caching policy is controlled by the server
      • Cache management commands: LOAD, VERIFY, UPDATE
      • Example P-chip cache contents:
          Node #   Hash                   Verified   Left child   Right child
          1        fabe3c05d8ba995af93e   Y          Y            N
          2        e6fc9bc13d624ace2394   Y          Y            Y
          4        53a81fc2dcc53e4da819   Y          N            N
          5        b2ce548dfa2f91d83ec6   Y          N            N
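The slide names the commands but not their exact semantics, so the sketch below assumes a plausible reading: LOAD brings a node hash on chip unverified, VERIFY trusts the children of an already-verified node if they hash to it, and UPDATE rewrites a node and recomputes its ancestors. Node numbering follows the slide (root = 1; children of n are 2n and 2n+1).

```python
# Hedged sketch of the P-chip node cache; command semantics are assumptions.
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class NodeCache:
    def __init__(self, root_hash: bytes):
        self.nodes = {1: (root_hash, True)}      # the root is trusted by definition

    def load(self, n: int, h: bytes) -> None:
        """LOAD: bring a hash on chip, not yet trusted."""
        self.nodes[n] = (h, False)

    def verify(self, n: int) -> None:
        """VERIFY: trust the children of an already-verified node n."""
        h, ok = self.nodes[n]
        assert ok, "parent must already be verified"
        left, right = self.nodes[2 * n][0], self.nodes[2 * n + 1][0]
        assert H(left + right) == h, "children do not match the parent hash"
        self.nodes[2 * n] = (left, True)
        self.nodes[2 * n + 1] = (right, True)

    def update(self, n: int, new_hash: bytes) -> None:
        """UPDATE: write a new verified hash and recompute every ancestor;
        assumes the sibling of each node on the path is already cached."""
        self.nodes[n] = (new_hash, True)
        while n > 1:
            n //= 2
            parent = H(self.nodes[2 * n][0] + self.nodes[2 * n + 1][0])
            self.nodes[n] = (parent, True)
```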

  40. Security Protocols • Message Authentication • Memory Authentication • Write Access Control • System State Protection against Power Loss

  41. Design: Write Access Control
      • Goal: ensure all writes are authorized and fresh
      • Coherence-model assumption: clients are aware of the latest update
      • Unique write-access key (Wkey), shared between the authorized writers and the S-P chip pair
      • Revision number (Vid), incremented on each write operation

  42. Design: Write Access Control
      • Protect Wkey and Vid by adding another layer at the bottom of the Merkle tree (see the sketch below): the plain leaf h8 = H(B8) is replaced by an extended leaf h'8 that binds H(B8) together with H(Wkey) and Vid, and the tree up to the securely stored root hash h1..8 is built over these extended leaves
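A hedged sketch of the extended leaf and the freshness check. The exact field order inside the leaf is not legible on the slide, so h'_i = H(H(B_i) || H(Wkey) || Vid) is an assumption, as is using an HMAC over (block, Vid) as the write proof:

```python
import hashlib, hmac, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

wkey = os.urandom(32)        # Wkey: shared by authorized writers and the S-P chip

def extended_leaf(block: bytes, vid: int) -> bytes:
    """Leaf that binds the block to its write key and revision number."""
    return H(H(block) + H(wkey) + vid.to_bytes(8, "big"))

def write_proof(block: bytes, vid: int) -> bytes:
    """What an authorized writer sends along with a write request."""
    return hmac.new(wkey, block + vid.to_bytes(8, "big"), hashlib.sha256).digest()

def chip_accepts(block: bytes, vid: int, current_vid: int, proof: bytes) -> bool:
    """The chip checks the proof and that Vid advances, rejecting stale writes."""
    fresh = vid == current_vid + 1
    valid = hmac.compare_digest(proof, write_proof(block, vid))
    return fresh and valid
```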

  43. Security Protocols • Message Authentication • Memory Authentication • Write Access Control • System State Protection against Power Loss

  44. Design: System State Protection
      • Goal: avoid losing the latest system state – the server may cut the P chip's power supply
      • Solution: root hash storage protocol (figure; see the sketch below): the P chip holds its response to a request, the S chip stores the new state (root hash), and only then does the P chip release the response to the client
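A hedged sketch of the hold-store-release ordering. Class and method names are hypothetical; the point is only that the response reaches the client strictly after the S chip has persisted the new state:

```python
import hashlib

class SChip:
    def __init__(self):
        self.root_hash = b""                      # held in secure NVRAM
    def store_state(self, root_hash: bytes) -> bool:
        self.root_hash = root_hash                # persist the new root hash
        return True                               # acknowledgement to the P chip

class PChip:
    def __init__(self, s_chip: SChip):
        self.s_chip = s_chip
    def handle(self, request: bytes) -> bytes:
        new_root, response = self.process(request)   # compute, but HOLD the response
        if not self.s_chip.store_state(new_root):    # S chip STOREs the state first
            raise RuntimeError("state not persisted; response withheld")
        return response                              # RELEASE only after the ack
    def process(self, request: bytes):
        # Stand-in for the real work (verify HMAC, update the Merkle tree,
        # sign the response); here we just derive a fake root and reply.
        new_root = hashlib.sha256(request).digest()
        return new_root, b"OK:" + new_root[:8]

p = PChip(SChip())
print(p.handle(b"write block 7"))  # a power cut before store_state loses no acked state
```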

  45. Design: Crash Recovery Mechanism
      • Goal: recover the system from crashes – even if the server crashes, the disk can be restored to a state consistent with the root hash stored on the S chip
      • Solution:

  46. Implementation • ABS (authenticated block storage) server architecture

  47. Implementation • ABS client model

  48. Performance Evaluation
      • Experiment configuration:
        – Disk size: 1 TB
        – Block size: 1 MB
        – Server: Intel Core i7-980X 3.33 GHz 6-core processor with 12 GB of DDR3-1333 RAM
        – FPGA: Xilinx Virtex-5 XC5VLX110T
        – Client: Intel Core i7-920 2.67 GHz 4-core processor
        – FPGA-server connection: Gigabit Ethernet
        – Client-server connection: Gigabit Ethernet

  49. File System Benchmarks (Mathematica)
      • Fast network: latency 0.2 ms, bandwidth 1 Gbit/s
      • (plots: pure writes, pure reads, reads + writes)

  50. File System Benchmarks (Mathematica)
      • Slow network: latency 30.2 ms, bandwidth 100 Mbit/s
      • (plots: pure writes, pure reads, reads + writes)

  51. File System Benchmarks (Modified Andrew Benchmark)
      • Slow network: latency 30.2 ms, bandwidth 100 Mbit/s

  52-53. Customized Solutions
      • Hardware requirements (slide 53 adds the callout "Single chip!"):
          Demand            Performance-Focused       Budget
          Connection        PCIe x16 (P) / USB (S)    USB
          Hash engines      8 + 1 (Merkle)            0 + 1 (Merkle)
          Tree cache        large                     none
          Response buffer   2 KB                      300 B
      • Estimated performance:
          Demand                    Performance-Focused   Budget
          Random-write throughput   2.4 GB/s              377 MB/s
          Random-write latency      12.3 ms + 32 ms       2.7 ms + 32 ms
          Random-read throughput    2.4 GB/s
          Random-read latency       0.4 ms
          # HDDs supported          24                    4
