Leakage-Resilient Cryptography with Key Derived from Sensitive Data. Konrad Durnoga, Stefan Dziembowski, Tomasz Kazana, Michał Zając, Maciej Zdanowicz. Estonian Computer Science Theory Days, Jõeküla, 2-4.10.2015
Computers can be infected by malware: someone installs a virus, and the virus retrieves some data (retrieve = leak). The virus can: • take control over the machine • steal some secrets stored on the machine, e.g. your secret key. Can we run any crypto on such a machine?
What if a virus can download all the data stored on a machine? Then we are HOPELESS: the virus can just make a copy of the machine. Let's assume it cannot!
(Timeline: virus-free period → virus installed → some data retrieved, repeated.)
When the virus controls the machine nothing can be done, thus… we care only about the periods when the machine is virus-free.
(Timeline: virus-free period → virus installed → some data retrieved, repeated.) This is called the Bounded Retrieval Model (BRM).
Bounded Retrieval Model. Idea: make the secret data so big that no adversary can retrieve it in its entirety. But how big should it be? Considering modern Internet connection speeds, we should think of secrets a few GB long.
How to work with huge secrets? Naive idea: use an RSA/ElGamal scheme with such a long key. Problem: reading a few GB of data into memory takes a lot of time (doing mathematical operations on it takes even more…). We need to find a way around, e.g. use some random bits of the key, not the whole key.
But this is still such a waste of space!
BRM is not very useful on mobile devices because of its huge space requirements.
Idea : use data already stored on a device
Problem : this data is not random
How to measure randomness? Disk data is a random variable, and there is something called entropy. Bad idea: use Shannon entropy. By example (1): let Enc be an encryption algorithm with the following key distribution: k = 000…0 with probability ½, otherwise k is sampled uniformly from {0,1}^n \ {0^n}. This cannot be secure, because we can guess k with probability ½.
How to measure randomness? Bad idea: use Shannon entropy. By example (2): one output has a huge probability of occurrence, but because all the other outputs have very small probabilities, the Shannon entropy of this variable is still quite large.
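The gap between the two notions in example (1) can be checked numerically. The sketch below (with an assumed small n = 16 so the distribution fits in memory) computes both entropies of the key distribution: the Shannon entropy looks healthy, while the min-entropy reveals that the key is guessable half the time.

```python
import math

def shannon_entropy(dist):
    """Shannon entropy H(X) = -sum_x p(x) log2 p(x)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def min_entropy(dist):
    """Min-entropy H_inf(X) = -log2 max_x P(X = x)."""
    return -math.log2(max(dist.values()))

# Example (1) from the slides, for n = 16:
# k = 0^n with probability 1/2, otherwise uniform over the remaining 2^n - 1 keys.
n = 16
dist = {"zero": 0.5}
for i in range(1, 2 ** n):
    dist[i] = 0.5 / (2 ** n - 1)

print(shannon_entropy(dist))  # roughly n/2 + 1 bits -- looks "random enough"
print(min_entropy(dist))      # exactly 1 bit -- the key is guessable half the time
```

This is exactly why Shannon entropy is the wrong yardstick here: it averages over outcomes, while an attacker only needs the single most likely one.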
Better idea: min-entropy. By definition: H∞(X) = −log max_x P(X = x). By example: what is the probability of the most probable outcome? If the most probable outcome occurs with probability 2^(−k), then H∞(X) = k.
This is a disk… some parts have big min-entropy, some parts have small min-entropy.
Idea: transform the former into the latter using some smart blurring function.
Tool 1: Random Oracle. On a query x ∈ {0,1}*, the random oracle answers with a random value H(x) ∈ {0,1}^n. For x = y the answer is the same, H(x) = H(y) (the answer doesn't change for repeated queries). You can tell nothing about H^(−1)(x) from H(x).
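The standard way to realise this object in a proof or a simulation is lazy sampling: answers are drawn fresh on first use and memoised. A minimal sketch (the class name and 32-byte output length are our own choices, not from the talk):

```python
import os

class RandomOracle:
    """Lazily sampled random oracle: a fresh uniform answer for every new
    query, and the same answer again for a repeated query."""

    def __init__(self, n_bytes=32):
        self.n_bytes = n_bytes
        self.table = {}  # query -> answer, filled on demand

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(self.n_bytes)  # sample H(x) uniformly
        return self.table[x]

H = RandomOracle()
assert H.query(b"hello") == H.query(b"hello")  # consistent on repeats
assert H.query(b"hello") != H.query(b"world")  # distinct, with overwhelming probability
```

Lazy sampling also explains why "bad queries" matter later in the proof: the simulator controls the table, so it only gets in trouble on queries it cannot answer consistently.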
Tool 2: disperser graph. Say G is a d-regular bipartite graph with ℓ nodes on the left and ℓ on the right. We say that G is a (k, d, ε) left disperser if any set of at least k left vertices is connected to at least (1 − ε)ℓ vertices on the right. (Figure: bipartite graph G with left vertices D_1, …, D_ℓ and right vertices D'_1, …, D'_ℓ.) Every big enough set on the left is connected to almost all vertices on the right.
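Verifying the disperser property exhaustively is exponential in ℓ (all large left sets must be checked), but the condition for a single left set is easy to state in code. A sketch on a toy hand-made 2-regular graph (the graph and function name are illustrative, not a construction from the talk):

```python
def covers_right(G, left_set, ell, eps):
    """Disperser condition for one left set: does the neighbourhood of
    `left_set` contain at least (1 - eps) * ell right vertices?
    G maps each left vertex to the list of its right neighbours."""
    neighbourhood = set()
    for v in left_set:
        neighbourhood.update(G[v])
    return len(neighbourhood) >= (1 - eps) * ell

# Toy 2-regular bipartite graph with ell = 4 vertices on each side.
G = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}
print(covers_right(G, {0, 2}, ell=4, eps=0.0))  # neighbourhood is {0,1,2,3} -> True
```

In the actual construction, random d-regular graphs are known to be good dispersers with high probability, which is what makes the key-derivation procedure below efficient.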
The blurring function*: Key Derivation Procedure (kdp). H — a random oracle from {0,1}^(dn) to {0,1}^n; D_i — a disk block of length n. The j-th derived block is D'_j = H(D_i1, D_i2, …, D_id), where i1, …, id are the neighbours of the right vertex j in the disperser graph.
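A minimal sketch of kdp, with SHA-256 standing in for the random oracle and the disperser graph given from the right side (each right vertex j maps to the d left disk-block indices feeding it); both choices are ours, for illustration:

```python
import hashlib

def kdp(disk_blocks, G, n_bytes=32):
    """Key derivation procedure: output block j is the hash of the d disk
    blocks that are the neighbours of right vertex j in the disperser graph.
    `G[j]` lists the left (disk-block) indices feeding output block j."""
    derived = []
    for j in sorted(G):
        h = hashlib.sha256()          # stand-in for the random oracle H
        for i in G[j]:
            h.update(disk_blocks[i])  # concatenate D_{i1}, ..., D_{id}
        derived.append(h.digest()[:n_bytes])
    return derived

# Toy example: 4 disk blocks, each derived block depends on d = 2 of them.
disk = [b"\x00" * 32, b"\x11" * 32, b"\x22" * 32, b"\x33" * 32]
G = {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}
blurred = kdp(disk, G)  # the "blurred" key, same length as the disk
```

Note that each derived block touches only d disk blocks, so individual key blocks can be computed locally without reading the whole disk.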
The result: a key derivation function which is private. (Output(A(disk)), key from kdp) ≈_ε (Output(S(A)), key). We can construct a simulator S such that no distinguisher can tell the simulator's output from the adversary's output, even if the distinguisher sees the private data.
Privacy, idea of a proof. Bad query: say q = b_1, …, b_d is a query to the random oracle. We say that q is bad iff q = D_i(1), …, D_i(d), i.e. it consists of actual disk blocks. G is an (ℓ_e, d, ε) left disperser. One-wayness of the disperser: let A be an adversary with leakage λ (from D and D') and r queries to the random oracle; then the probability that A submits at least ℓ_e bad queries is negligible.
Privacy, idea of a proof. (Output(A(disk)), key from kdp) ≈_ε (Output(S(A)), key). Privacy: we construct a simulator S such that S simulates perfectly unless A makes at least ℓ_e bad queries, and this happens only with negligible probability.
The result: a key derivation function which is secure. If an adversary A breaks the security of a BRM protocol with a uniformly random key with probability at most ε, then it breaks the same protocol with a key delivered by kdp with probability at most ε + (probability that A makes at least ℓ_e bad queries).
Security, idea of a proof. If an adversary A can break the security of a protocol that depends on a key obtained by the key derivation procedure, then she can also break the security of the protocol that depends on a uniformly random key.
The result: almost every BRM protocol can be made space-efficient.
Authentication: Merkle tree. For a, b the children of c, value(c) = H(value(a), value(b)).
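A minimal bottom-up Merkle-tree construction, assuming (our simplification) a power-of-two number of leaves and SHA-256 as the hash H:

```python
import hashlib

def H(a: bytes, b: bytes) -> bytes:
    """value(c) = H(value(a), value(b)) for children a, b of c."""
    return hashlib.sha256(a + b).digest()

def build_merkle(leaves):
    """Build the tree level by level; returns the list of levels, from the
    leaf level at index 0 up to [root]. Assumes len(leaves) is a power of two."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

leaves = [hashlib.sha256(bytes([i])).digest() for i in range(8)]
root = build_merkle(leaves)[-1][0]  # the value that gets published
```

Only the root needs to be made public; its size is a single hash, regardless of how large the (blurred) disk is.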
Authentication. Prover P wants to authenticate to verifier V; value(root) is publicly known. V sends a (fresh) random k. P responds with the path from the k-th leaf up to the root. V checks whether the path is correct; if so, V accepts, otherwise it rejects.
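The challenge-response exchange above can be sketched end to end: the prover sends the siblings along the path from leaf k to the root, and the verifier rehashes upward and compares with the public root. The helper names are ours; the challenge k is fixed here for reproducibility where a real verifier would sample it freshly.

```python
import hashlib

def H(a, b):
    return hashlib.sha256(a + b).digest()

def build_merkle(leaves):
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def prove(levels, k):
    """Prover: the authentication path for leaf k -- the sibling at each level."""
    path, idx = [], k
    for level in levels[:-1]:
        path.append(level[idx ^ 1])  # sibling sits at the index with last bit flipped
        idx //= 2
    return path

def verify(root, leaf, k, path):
    """Verifier: rehash the leaf up the tree along the path, compare with root."""
    node, idx = leaf, k
    for sib in path:
        node = H(node, sib) if idx % 2 == 0 else H(sib, node)
        idx //= 2
    return node == root

leaves = [hashlib.sha256(bytes([i])).digest() for i in range(8)]
levels = build_merkle(leaves)
root = levels[-1][0]
k = 5  # verifier's fresh random challenge
assert verify(root, leaves[k], k, prove(levels, k))
```

The proof is only log ℓ hashes long, so the verifier touches a tiny fraction of the data while still binding the prover to the whole blurred disk.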
Putting things together: 1. blur the disk data using the key derivation procedure; 2. compute the value at the root of the Merkle tree and make it public; 3. on a challenge k, send the path from the k-th leaf up to the root.
Time vs. space tradeoff: to make authentication faster, we can store some nodes of the Merkle tree instead of recomputing them.
Thank you