1. Zero-Interaction Authentication
   Mark Corner and Brian Noble
   http://mobility.eecs.umich.edu/

   University of Michigan Medical Center
   - Major tertiary care center for Michigan
   - FY 1999: 1.2M visits, $1B billable services
   - Results in large data volume and costs
     - 5-8 pieces of paper per patient visit
     - all records in physical charts, without (official) copying
   - Obvious solution: electronic access to these records
     - most patient records in a clinical data repository
     - web-based front-end for easy access, CareWeb

   Disconnected CareWeb
   - CareWeb is not as useful as you might imagine
     - requires aggressive authentication
     - physicians are notoriously jealous of their time
     - end-user perception drives acceptance: they don't use it!
   - Experience with Coda suggested an obvious solution
     - a laptop for every physician: disconnected CareWeb
     - examine the physician's schedule for the upcoming day
     - prefetch records for each scheduled patient
   - Demonstration for a number of UMHS staff members
     - the physicians wanted it immediately
     - the IT staff told us not to show it to any more physicians
   - Real costs if patient data is improperly revealed
     - HIPAA: $250K fines for disclosure/misuse of data
   - Challenge: protect patient data without inconveniencing physicians

   Zero-interaction authentication (ZIA)
   - Solution: constant but invisible authentication
     - constantly ask the user "are you there?" (sketched at the end of this page)
     - have something other than the user answer
   - Watch as authentication token: "yes, I'm right here"
     - worn by the user for increased physical security
     - enough computational power for small cryptographic tasks
     - secure communication via a short-range wireless network
   - Design goals
     - protect laptop data from physical-possession attacks
     - preserve performance and usability
     - give the user no reason to disable or work around it

   Outline
   - Threat model
   - Design: how are files protected and shared? how do we improve performance?
   - Implementation
   - Evaluation: what overhead does ZIA add? are the optimizations useful? can ZIA be hidden from users?
   - Related work
   - Conclusion

   Threat model
   - Attacker can exploit physical possession
     - use cached credentials
     - console-based attacks
     - physical modification attacks (remove disk, probe memory)
   - Attacker can exploit the laptop-token wireless link
     - inspection, modification, insertion of messages
   - Things we don't consider
     - network-based exploits (buffer overruns)
     - jamming the laptop-token link (DoS)
     - replacing the operating system
     - untrustworthy users
     - rubber-hose cryptanalysis
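   To make the "are you there?" exchange concrete, here is a minimal user-space sketch in Python, assuming a session key already shared between laptop and token and using AES-GCM from the `cryptography` package. The function names, message framing, and key setup are illustrative assumptions, not the protocol or in-kernel code described in the talk.

   ```python
   import os, secrets, hmac
   from cryptography.hazmat.primitives.ciphers.aead import AESGCM

   SESSION_KEY = AESGCM.generate_key(bit_length=128)   # assumed established out of band

   def laptop_challenge():
       """Laptop: encrypt a fresh random nonce and send it to the token."""
       nonce = secrets.token_bytes(16)                  # the challenge value
       iv = os.urandom(12)
       return nonce, iv, AESGCM(SESSION_KEY).encrypt(iv, nonce, b"zia-challenge")

   def token_respond(iv, ciphertext):
       """Token: decrypt the challenge and echo it back, re-encrypted."""
       nonce = AESGCM(SESSION_KEY).decrypt(iv, ciphertext, b"zia-challenge")
       iv2 = os.urandom(12)
       return iv2, AESGCM(SESSION_KEY).encrypt(iv2, nonce, b"zia-response")

   def laptop_verify(expected, iv2, ciphertext2):
       """Laptop: the user counts as present only if the echoed nonce matches."""
       try:
           reply = AESGCM(SESSION_KEY).decrypt(iv2, ciphertext2, b"zia-response")
       except Exception:
           return False                                 # forged or garbled reply
       return hmac.compare_digest(reply, expected)

   # One round of the once-per-second poll; a failed round would be treated as
   # the user's departure and trigger the cache-securing step described later.
   nonce, iv, ct = laptop_challenge()
   iv2, ct2 = token_respond(iv, ct)
   print("user present" if laptop_verify(nonce, iv2, ct2) else "user away")
   ```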

2. Design guidelines
   - Protect file system data
     - all data on disk encrypted
     - ensure the user is present for each decryption
   - Can't contact the token on every decryption
     - adds (short) latency to (many) operations
   - Take advantage of caching already used in file systems
     - data on disk: encrypted for safety
     - data in cache: decrypted for performance
     - token's keys required for decrypting files
   - Take advantage of the fact that people move slowly
     - only check "often enough" to notice user departure

   Moving data from disk to cache
   - Tokens cannot decrypt file contents directly
     - small, battery-powered: limited computation
     - connected to the laptop via a wireless link
     - latency comparable to disk, bandwidth much less
   - Instead, store the file-encrypting key on disk, itself encrypted
     - the key-encrypting key never leaves the token
   [Figure: the key-encrypting key stays on the token; the encrypted file key
    and the encrypted file live on the laptop's disk; the token decrypts the
    file key and returns it over a session-encrypted link, and the laptop then
    decrypts the file into its cache. A sketch of this flow follows.]
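   Below is a minimal sketch of this split, with AES-GCM standing in for the Rijndael encryption used in the real system and a method call standing in for the session-encrypted wireless request to the token. Class and method names are illustrative assumptions, not the paper's interfaces.

   ```python
   import os
   from cryptography.hazmat.primitives.ciphers.aead import AESGCM

   class Token:
       """Holds the key-encrypting key O; it never leaves this object."""
       def __init__(self):
           self._kek = AESGCM.generate_key(bit_length=128)

       def wrap(self, file_key: bytes) -> bytes:
           iv = os.urandom(12)
           return iv + AESGCM(self._kek).encrypt(iv, file_key, b"zia-file-key")

       def unwrap(self, wrapped: bytes) -> bytes:
           # In the real system this request/reply crosses the wireless link
           # under session encryption; here it is just a method call.
           iv, ct = wrapped[:12], wrapped[12:]
           return AESGCM(self._kek).decrypt(iv, ct, b"zia-file-key")

   class Laptop:
       """Stores only encrypted file keys and encrypted file data on disk."""
       def __init__(self, token: Token):
           self.token = token

       def create_file(self, plaintext: bytes):
           file_key = AESGCM.generate_key(bit_length=128)        # E, the file key
           iv = os.urandom(12)
           on_disk_data = iv + AESGCM(file_key).encrypt(iv, plaintext, None)
           on_disk_key = self.token.wrap(file_key)               # E encrypted by O
           return on_disk_key, on_disk_data                      # both safe to store

       def read_file(self, on_disk_key: bytes, on_disk_data: bytes) -> bytes:
           file_key = self.token.unwrap(on_disk_key)             # needs the token
           iv, ct = on_disk_data[:12], on_disk_data[12:]
           return AESGCM(file_key).decrypt(iv, ct, None)         # plaintext only in cache

   token = Token()
   laptop = Laptop(token)
   key_blob, data_blob = laptop.create_file(b"patient record")
   assert laptop.read_file(key_blob, data_blob) == b"patient record"
   ```

   Because only the wrapped form of the file key ever touches the disk, possession of the laptop alone yields neither the key nor the data; sharing adds further wrapped copies of the file key under group or world keys, as the next page describes.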

3. Key-encrypting keys are capabilities
   - A file is encrypted by some key, E
   - E is on disk, encrypted with another key, O
     - O is known only to the authentication token
     - may also choose to escrow O as a matter of policy
   - Sharing is accommodated by additional encrypted versions of E
     - UNIX protection model: owner, group, and world
     - E encrypted by owner key O and group key G
     - each user's token holds their O and all applicable Gs
     - members of the same group share copies of G
     - can have per-machine world keys, too

   Handle keys efficiently
   - Key acquisition time can be expensive
     - network round trip + processing time: many milliseconds
     - can't add this to every disk operation!
   - Two mechanisms mitigate this problem
     - overlap key acquisition with disk operations
     - cache decrypted keys, exploiting locality
   - Neither mechanism helps with file creation
     - it is an asynchronous write: no overlap
     - it is a new file: no cached key
     - observation: you don't need any particular key
     - prefetch a stash of "fresh" keys

   Assign keys per directory
   - What is the right granularity for file keys?
     - small grain limits the damage of key exposure
     - large grain increases the effectiveness of caching
   - We chose per-directory keys to exploit access patterns
     - files in the same directory tend to be used together
     - acquisition time is amortized across a directory
   - Directory keys are stored in the directory they encrypt

   Maintain performance, retain correctness
   - Optimizations reduce laptop/token interactions
     - but we still need to ask "are you there?" frequently!
   - Add periodic polling
     - exchange encrypted nonces: challenge/response
     - once per second, because people are slow
   - When the user is away, protect file system data
     - must be fast enough to foil theft
   - When the user returns, restore the machine to its pre-departure state
     - the user should see no performance penalty on return

   Make protection fast and invisible
   - Key question: what to do with cached data on departure?
   - One alternative: flush on departure, read on arrival
     - flush is fast: write dirty pages, bzero the cache
     - recovery is slow: read the entire file cache back from disk
   - Instead, we encrypt on departure, decrypt on arrival (see the sketch after this page)
     - protection is a bit slower, but fast enough
     - recovery is much faster: no disk operations
   - This retains current file cache behavior
     - unused file blocks can be flushed when idle
     - encrypted file blocks are treated identically

   Implementation
   - Implementation is split into two parts
     - in-kernel file system support
     - authentication system and token
   - In-kernel support (Linux) provides cryptographic I/O
     - manages keys
     - polls for the token
   - Authentication system
     - client running in user space on the user's laptop
     - server running on the token (Linux or WinCE)
     - they communicate via a secure channel
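   Here is a minimal sketch of the encrypt-on-departure, decrypt-on-arrival idea, with a Python dict standing in for the kernel's page cache and AES-GCM standing in for the in-kernel cipher. How the departure key is derived and protected while the user is away is glossed over, and all names are illustrative assumptions rather than the paper's in-kernel interface.

   ```python
   import os
   from cryptography.hazmat.primitives.ciphers.aead import AESGCM

   # Stand-in for the file cache: (path, block number) -> plaintext block.
   cache = {("/notes", 0): b"plaintext block 0", ("/notes", 1): b"plaintext block 1"}

   def secure_cache(cache, key):
       """On token loss: overwrite every cached block with its ciphertext in
       place. No disk I/O is needed, which is what makes recovery cheap."""
       for blk in cache:
           iv = os.urandom(12)
           cache[blk] = (iv, AESGCM(key).encrypt(iv, cache[blk], None))

   def restore_cache(cache, key):
       """On token return: decrypt the same blocks in place; nothing is re-read
       from disk, unlike the flush-on-departure alternative."""
       for blk, (iv, ct) in cache.items():
           cache[blk] = AESGCM(key).decrypt(iv, ct, None)

   departure_key = AESGCM.generate_key(bit_length=128)   # assumed protected via the token
   secure_cache(cache, departure_key)                    # user walks away
   restore_cache(cache, departure_key)                   # user comes back
   assert cache[("/notes", 0)] == b"plaintext block 0"
   ```

   The point of the in-place approach shows up in the sketch: both secure_cache and restore_cache touch only memory, so restoring the pre-departure state requires no disk reads.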

4. Implementation
   - Implemented in-kernel as a stackable file system
   - Uses the FiST toolkit (Columbia)
   - Rijndael used for encryption
   [Figure: on the laptop, the authentication client runs in user space; in the
    kernel, the ZIA layer stacks below the VFS and above the underlying file
    system, managing the page cache and a key cache backed by the disk; the
    authentication server runs on the token.]

   Evaluation overview
   - Several important questions
     - what overhead does ZIA impose?
     - how long does it take to secure the cache?
     - how long does it take to restore the cache?
   - Prototype system
     - client system: IBM ThinkPad 570
     - token: Compaq iPAQ 3650
     - connected by an 802.11 network in 1 Mb/s mode

   Evaluation: Andrew Benchmark
   - Determine file system overhead
   - Modified Andrew Benchmark
     - copy and compile the Apache source code
     - 7.4 MB source only; 9.7 MB source plus objects
   - Compare ZIA against three file systems
     - Ext2fs: the file system "at the bottom"
     - Base: null stacking layer implemented in FiST
     - Cryptfs: FiST's cryptographic file system (+ Rijndael)

   Modified Andrew Benchmark results

   File System      Time, sec       Overhead (vs. Ext2fs)
   Ext2fs           52.63 (0.30)    -
   Base             52.76 (0.22)    0.24%
   Cryptfs          57.52 (0.18)    9.28%
   ZIA              57.54 (0.20)    9.32%

   - ZIA is indistinguishable from Cryptfs

   Benefit of optimizations
   - The Andrew benchmark is obligatory, but not necessarily good
     - it often measures the speed of your compiler
   - Three benchmarks stress high-overhead operations
     1) create many directories
     2) scan those directories
     3) bulk copy: the 40 MB Pine source

   Stress tests
   - Turn off prefetching and caching to see how useful they are

   File System      Time, sec       Overhead (vs. Ext2fs)
   Ext2fs           52.63 (0.30)    -
   ZIA              57.54 (0.20)    9.32%
   No prefetching   232.04 (3.40)   340.86%
   No caching

   - Optimizations are critical
