Example: ElGamal encryption

G is a cyclic group of order q; g is a generator of G.

  KG(η)         def=  x ←$ Z_q; return (x, g^x)
  E(α, m)       def=  y ←$ Z_q; return (g^y, α^y × m)
  D(x, (β, ζ))  def=  return ζ × β^(−x)

Correctness: ζ × β^(−x) = (g^(xy) × m) × g^(−xy) = m
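To make the scheme above concrete, here is a small executable sketch (my own addition, not part of the slides) in a toy subgroup of Z_p^* with p = 23, q = 11, g = 4; the parameters are illustrative only and far too small to be secure.

```python
# A directly executable sketch of ElGamal in a tiny toy group (illustrative only).
import secrets

p, q, g = 23, 11, 4           # g generates the subgroup of order q in Z_p^*

def KG():
    x = secrets.randbelow(q)                           # x <-$ Z_q
    return x, pow(g, x, p)                             # return (x, g^x)

def E(alpha, m):
    y = secrets.randbelow(q)                           # y <-$ Z_q
    return pow(g, y, p), (pow(alpha, y, p) * m) % p    # return (g^y, alpha^y * m)

def D(x, ct):
    beta, zeta = ct
    return (zeta * pow(beta, q - x, p)) % p            # zeta * beta^(-x), since beta^q = 1

sk, pk = KG()
m = 12                        # a message encoded as a group element (12 = g^5 mod 23)
assert D(sk, E(pk, m)) == m   # correctness: (g^{xy} * m) * g^{-xy} = m
```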
Security of Public-Key Encryption

Let (KG, E, D) be an encryption scheme and A a probabilistic polynomial-time adversary.

Game INDCPA:
  (sk, pk) ← KG();
  (m_0, m_1) ← A(pk);
  b ←$ {0,1};
  c ← E(pk, m_b);
  b~ ← A(pk, c);
  return (b~ = b)

Adv^A_INDCPA(η)  def=  | Pr[INDCPA : b~ = b] − 1/2 |

The scheme is semantically (INDCPA) secure if Adv^A_INDCPA(η) is negligible.
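Read as a probabilistic program, which is exactly the view the rest of the talk develops, the game can be written directly as code. The sketch below is my own; the names indcpa and advantage, and the split of A into two stages A1/A2, are illustrative assumptions. It can be instantiated with the KG and E of the ElGamal sketch above.

```python
# The INDCPA game as a probabilistic program, parameterized by the scheme (KG, E)
# and a two-stage adversary (A1, A2).  Illustrative sketch only.
import secrets

def indcpa(KG, E, A1, A2):
    sk, pk = KG()                  # (sk, pk) <- KG()
    m0, m1 = A1(pk)                # (m0, m1) <- A(pk)
    b = secrets.randbelow(2)       # b <-$ {0,1}
    c = E(pk, (m0, m1)[b])         # c <- E(pk, m_b)
    b_guess = A2(pk, c)            # b~ <- A(pk, c)
    return b_guess == b

def advantage(KG, E, A1, A2, trials=10000):
    wins = sum(indcpa(KG, E, A1, A2) for _ in range(trials))
    return abs(wins / trials - 0.5)     # Monte-Carlo estimate of |Pr[b~ = b] - 1/2|
```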
INDCPA-security of ElGamal Encryption

ElGamal is secure under the Decision Diffie-Hellman (DDH) assumption.

Game DDH_0:                      Game DDH_1:
  x, y ←$ Z_q;                     x, y, z ←$ Z_q;
  d ← B(g^x, g^y, g^(xy))          d ← B(g^x, g^y, g^z)

Adv^B_DDH(η)  def=  | Pr[DDH_0 : d = 1] − Pr[DDH_1 : d = 1] |

DDH Assumption: for every PPT adversary B, Adv^B_DDH is negligible.

One can prove:
  ∀A. PPT(A) ⟹ ∃B. PPT(B) ∧ Adv^A_INDCPA = Adv^B_DDH
which implies, under the DDH assumption, that for any PPT adversary A, Adv^A_INDCPA is negligible.
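The reduction behind the "∃B" statement can be sketched concretely (my own illustration, reusing the toy group from the ElGamal snippet; the adversary interface A1/A2 is an assumption): B answers its DDH challenge by simulating the INDCPA game for A, using the third group element to mask m_b.

```python
# Sketch of the DDH adversary B built from an INDCPA adversary A = (A1, A2)
# against ElGamal (toy parameters; illustrative only).
import secrets

p, q, g = 23, 11, 4   # tiny group of prime order q, generator g

def B(gx, gy, gz, A1, A2):
    m0, m1 = A1(gx)                                    # A picks two messages given pk = g^x
    b = secrets.randbelow(2)                           # b <-$ {0,1}
    b_guess = A2(gx, (gy, (gz * (m0, m1)[b]) % p))     # ciphertext (g^y, g^z * m_b)
    return int(b_guess == b)                           # d <- (b~ = b)

# In DDH_0, g^z = g^{xy}, so A sees a real ElGamal ciphertext and
# Pr[d = 1] = Pr[INDCPA : b~ = b]; in DDH_1, g^z is uniform and m_b is
# perfectly hidden, so Pr[d = 1] = 1/2.  Hence Adv_INDCPA(A) = Adv_DDH(B).

# A (deliberately useless) example adversary, just to exercise B:
A1 = lambda pk: (1, g)                     # two fixed group-element messages
A2 = lambda pk, ct: secrets.randbelow(2)   # guesses at random
x, y = secrets.randbelow(q), secrets.randbelow(q)
print(B(pow(g, x, p), pow(g, y, p), pow(g, x * y, p), A1, A2))
```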
Things can still go wrong: RSA-OAEP

Timeline: Bellare and Rogaway (1994), Shoup (2001), Fujisaki, Okamoto, Pointcheval, Stern (2001), Pointcheval (2004), Bellare, Hofheinz, Kiltz (2009)

1994  Purported proof of chosen-ciphertext security
2001  Proof is flawed, but can be patched
        ...for a weaker security notion, or
        ...for a modified scheme, or
        ...under stronger assumptions
2004  Filled gaps in the Fujisaki et al. 2001 proof
2009  Security definition needs to be clarified
2011  Filled gaps and marginally improved the bound of the 2004 proof
Things can still go wrong: Identity-Based Encryption

Timeline: Shamir (1984), Boneh & Franklin (2001), Gentry & Silverberg, Horwitz & Lynn, Al-Riyami & Paterson, Yao et al., Cheng & Comley (2002-2005), Galindo (2005)

1984       Conception of identity-based cryptography
2001       First practical provably-secure IBE scheme
2002-2005  Extensively used as a building block
2005       Proof is flawed, but can be patched ...for a weaker security bound
Beyond Provable Security: Verifiable Security

Goal: build a framework to formalize game-based cryptographic proofs
  provide foundations for game-based proofs
  keep the notation as close as possible to the cryptographer's
  automate common reasoning patterns
  support exact security
  provide independently and automatically verifiable proofs
CertiCrypt: language-based cryptographic proofs

Formal certification of ElGamal encryption. A gentle introduction to CertiCrypt.
  International Workshop on Formal Aspects in Security and Trust, FAST 2008
Formal certification of code-based cryptographic proofs.
  ACM Symposium on Principles of Programming Languages, POPL 2009
Formally certifying the security of digital signature schemes.
  IEEE Symposium on Security & Privacy, S&P 2009
Programming language techniques for cryptographic proofs.
  International Conference on Interactive Theorem Proving, ITP 2010
Language-based game-playing proofs

What if we represent games as programs?

  Games                 ⟹  probabilistic programs
  Game transformations  ⟹  program transformations
  Adversaries           ⟹  unspecified procedures
  Efficiency            ⟹  probabilistic polynomial time
A language-based approach

Security definitions, assumptions and games are formalized using a probabilistic programming language, pWhile:

  C ::= skip                       nop
      | C; C                       sequence
      | V ← E                      assignment
      | V ←$ DE                    random sampling
      | if E then C else C         conditional
      | while E do C               while loop
      | V ← P(E, ..., E)           procedure call

x ←$ d : sample the value of x according to distribution d

The language of expressions (E) and distribution expressions (DE) admits user-defined extensions.
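For readers who prefer code to grammars, the pWhile syntax could be mirrored by a small abstract-syntax sketch like the one below (a hypothetical, untyped rendering of my own; the actual CertiCrypt syntax is the dependently-typed Coq definition shown two slides later).

```python
# A hypothetical, untyped rendering of the pWhile grammar in Python.
from dataclasses import dataclass
from typing import List

@dataclass
class Assign:            # V <- E
    var: str
    expr: str

@dataclass
class Rand:              # V <-$ DE
    var: str
    dist: str

@dataclass
class Cond:              # if E then C else C
    guard: str
    then_branch: List
    else_branch: List

@dataclass
class While:             # while E do C
    guard: str
    body: List

@dataclass
class Call:              # V <- P(E, ..., E)
    var: str
    proc: str
    args: List[str]

# Example term: G = x <-$ {0,1}; y <-$ {0,1}  (used again on a later slide)
G = [Rand("x", "{0,1}"), Rand("y", "{0,1}")]
```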
Some design choices

CertiCrypt is built on top of the Coq proof assistant:
  deep-embedding formalization
  strongly-typed language: the syntax is dependently typed (only well-typed programs are admitted)
  monadic semantics, using Paulin-Mohring's ALEA Coq library
Language-based Game-playing Proofs

Deep-embedded in the language of Coq, using a dependently-typed syntax:

  Inductive I : Type :=
  | Assign : ∀ t, V t → E t → I
  | Rand   : ∀ t, V t → D t → I
  | Cond   : E B → C → C → I
  | While  : E B → C → I
  | Call   : ∀ l t, P (l, t) → V t → E* l → I
  where C := I*

Programs are well-typed by construction.
Semantics: the measure monad

Distributions are represented as monotonic, linear and continuous functions of type
  D(A)  def=  (A → [0,1]) → [0,1]

Intuition: given µ ∈ D(A) and f : A → [0,1], µ(f) represents the expected value of f w.r.t. µ.

  unit : A → D(A)                   def=  λx. λf. f x
  bind : D(A) → (A → D(B)) → D(B)   def=  λµ. λF. λf. µ (λx. F x f)
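A minimal Python analogue of this monad (my own sketch; distributions are modeled as expectation functionals, exactly as in the definition above):

```python
# Distributions over A as functions taking f : A -> [0,1] to the expectation of f.
def unit(x):
    # unit x = fun f => f x   (the Dirac distribution at x)
    return lambda f: f(x)

def bind(mu, F):
    # bind mu F = fun f => mu (fun x => F x f)
    return lambda f: mu(lambda x: F(x)(f))

def coin():
    # the uniform distribution on {0, 1}, as an expectation operator
    return lambda f: 0.5 * f(0) + 0.5 * f(1)

print(coin()(lambda b: b))        # expectation of the identity under a fair coin: 0.5
print(unit(7)(lambda n: 1.0))     # total mass of a Dirac distribution: 1.0
```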
Semantics

Programs map an initial memory to a distribution on final memories:
  ⟦c ∈ C⟧ : M → D(M)

The probability of an event is the expected value of its characteristic function:
  Pr[c, m : A]  def=  ⟦c⟧ m 1_A

An instrumented and parametrized semantics characterizes PPT:
  ⟦c ∈ C⟧_η : M → D(M × N)
Intuition behind semantics

Think of ⟦G⟧ m as the expectation operator of the probability distribution induced by the game:
  ⟦G⟧ m f = Σ_{m′} f(m′) · Pr[⟨G, m⟩ ⇓ m′]

Computing probabilities:
  Pr[G, m : A]  def=  ⟦G⟧ m 1_A

Example. Let G def= x ←$ {0,1}; y ←$ {0,1}. Then
  Pr[G, m : x ≠ y] = ⟦G⟧ m 1_{x≠y}
    = 1/4 · 1_{x≠y}(m[x↦0, y↦0]) + 1/4 · 1_{x≠y}(m[x↦0, y↦1])
      + 1/4 · 1_{x≠y}(m[x↦1, y↦0]) + 1/4 · 1_{x≠y}(m[x↦1, y↦1])
    = 0 + 1/4 + 1/4 + 0
    = 1/2
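The same computation can be replayed with the monad sketched earlier; the snippet below (self-contained, my own illustration) models memories as dictionaries and evaluates the expectation of 1_{x≠y}.

```python
# Semantics of G = x <-$ {0,1}; y <-$ {0,1} as a distribution over memories,
# and Pr[G, m : x <> y] as the expectation of the characteristic function.
def bind(mu, F):          # sequencing of probabilistic programs
    return lambda f: mu(lambda x: F(x)(f))

def sample_bit(m, var):   # var <-$ {0,1}: uniform over the two updated memories
    return lambda f: 0.5 * f({**m, var: 0}) + 0.5 * f({**m, var: 1})

def G(m):                 # [[ x <-$ {0,1}; y <-$ {0,1} ]] m
    return bind(sample_bit(m, "x"), lambda m1: sample_bit(m1, "y"))

prob = G({})(lambda m: 1.0 if m["x"] != m["y"] else 0.0)
print(prob)               # 0.5
```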
Deductive reasoning about programs

The art of proving that programs are correct.

Foundations: program logic (Hoare '69) and weakest precondition calculus (Floyd '67)

Major advances in:
  language coverage (functions, objects, concurrency, heap)
  automation (decision procedures, SMT solvers, invariant generation)
  proof engineering (intermediate languages)
Hoare logic

Judgments are of the form ⊢ c : P ⇒ Q (typically P and Q are first-order formulae over program variables).

A judgment ⊢ c : P ⇒ Q is valid iff for all states s and s′, if c, s ⇓ s′ and s satisfies P, then s′ satisfies Q.

Selected rules:

  ----------------------------
  ⊢ x ← e : Q{x := e} ⇒ Q

  ⊢ c_1 : P ⇒ Q    ⊢ c_2 : Q ⇒ R
  -------------------------------
  ⊢ c_1; c_2 : P ⇒ R

  ⊢ c_1 : P ∧ e = tt ⇒ Q    ⊢ c_2 : P ∧ e = ff ⇒ Q
  --------------------------------------------------
  ⊢ if e then c_1 else c_2 : P ⇒ Q

  ⊢ c : I ∧ e = tt ⇒ I    P ⇒ I    I ∧ e = ff ⇒ Q
  ------------------------------------------------
  ⊢ while e do c : P ⇒ Q
Relational judgments

Judgments are of the form ⊢ c_1 ∼ c_2 : P ⇒ Q (typically P and Q are first-order formulae over tagged program variables of c_1 and c_2).

A judgment ⊢ c_1 ∼ c_2 : P ⇒ Q is valid iff for all states s_1, s_1′, s_2, s_2′, if c_1, s_1 ⇓ s_1′ and c_2, s_2 ⇓ s_2′ and (s_1, s_2) satisfies P, then (s_1′, s_2′) satisfies Q. May require co-termination.

Verification methods:
  embedding into Hoare logic:
    self-composition (Barthe, D'Argenio and Rezk '04)
    cross-products (Zaks and Pnueli '08; Barthe, Crespo and Kunz '11)
  Relational Hoare Logic (Benton '04)
Relational Hoare Logic

Selected rules:

  Ψ′ ⇒ Ψ    ⊢ c_1 ∼ c_2 : Ψ ⇒ Φ    Φ ⇒ Φ′
  ----------------------------------------- [Sub]
  ⊢ c_1 ∼ c_2 : Ψ′ ⇒ Φ′

  ⊢ c_1 ∼ c_2 : Ψ ⇒ Φ    ⊢ c_2 ∼ c_3 : Ψ′ ⇒ Φ′
  ---------------------------------------------- [Comp]
  ⊢ c_1 ∼ c_3 : Ψ ∘ Ψ′ ⇒ Φ ∘ Φ′

  ⊢ c_1 ∼ c_1′ : Ψ ⇒ Φ′    ⊢ c_2 ∼ c_2′ : Φ′ ⇒ Φ
  ------------------------------------------------ [Seq]
  ⊢ c_1; c_2 ∼ c_1′; c_2′ : Ψ ⇒ Φ

  --------------------------------------------------------- [Asn]
  ⊢ x ← e ∼ x ← e′ : Φ{x⟨1⟩ := e⟨1⟩, x⟨2⟩ := e′⟨2⟩} ⇒ Φ

  Ψ ⇒ (e⟨1⟩ ⇔ e′⟨2⟩)    ⊢ c_1 ∼ c_1′ : Ψ ∧ e⟨1⟩ ⇒ Φ    ⊢ c_2 ∼ c_2′ : Ψ ∧ ¬e⟨1⟩ ⇒ Φ
  ----------------------------------------------------------------------------------- [Cond]
  ⊢ if e then c_1 else c_2 ∼ if e′ then c_1′ else c_2′ : Ψ ⇒ Φ
Probabilistic Relational Hoare Logic

A probabilistic extension of Benton's Relational Hoare Logic.

Definition
  ⊢ c_1 ∼ c_2 : Ψ ⇒ Φ  def=  ∀ m_1 m_2. m_1 Ψ m_2 ⟹ ⟦c_1⟧ m_1 ≃_Φ ⟦c_2⟧ m_2

≃_Φ lifts the relation Φ from memories to distributions: µ_1 ≃_Φ µ_2 holds if there exists a distribution µ on M × M such that
  the 1st projection of µ coincides with µ_1
  the 2nd projection of µ coincides with µ_2
  sets with positive measure are included in Φ
A specialized rule for random assignments

Let A be a finite set and let f, g : A → B. Define
  d  = x ←$ A; y ← f x
  d′ = x ←$ A; y ← g x
Then d = d′ iff there exists a bijection h : A → A such that g = f ∘ h.

A rule for random assignments:

  f is 1-1    Ψ ⟹ ∀ v. Φ{x⟨1⟩ := v, x⟨2⟩ := f v}
  ------------------------------------------------
  ⊢ x ←$ A ∼ x ←$ A : Ψ ⇒ Φ
From pRHL to probabilities

Assume ⊢ c_1 ∼ c_2 : P ⇒ Q. For all memories m_1 and m_2 such that P m_1 m_2, and for all events A, B such that Q ⟹ (A⟨1⟩ ⇔ B⟨2⟩), we have
  ⟦c_1⟧ m_1 1_A = ⟦c_2⟧ m_2 1_B
that is, Pr[c_1, m_1 : A] = Pr[c_2, m_2 : B].
Observational equivalence

Definition
  f =_X g          def=  ∀ m_1 m_2. m_1 =_X m_2 ⟹ f m_1 = g m_2
  ⊢ c_1 ≃^I_O c_2  def=  ∀ m_1 m_2 f g. m_1 =_I m_2 ∧ f =_O g ⟹ ⟦c_1⟧ m_1 f = ⟦c_2⟧ m_2 g

Example
  ⊢ x ←$ {0,1}^k; y ← x ⊕ z  ≃^{z}_{x,y,z}  y ←$ {0,1}^k; x ← y ⊕ z

Only a partial equivalence relation: ⊢ c ≃^I_O c does not hold in general (obviously).
Generalizes information-flow security (take I = O = V_low).
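A brute-force check of the example judgment (my own illustration, for k = 2): for every initial value of z, the two programs induce the same joint distribution on (x, y), which is what the equivalence with input set {z} and output set {x, y, z} asserts.

```python
# Enumerate both programs over all random choices and compare the resulting
# distributions on (x, y) for every initial value of z (k = 2 bits).
from collections import Counter

K = 2
BITS = range(2**K)

def left(z):
    # x <-$ {0,1}^k; y <- x xor z
    return Counter((x, x ^ z) for x in BITS)

def right(z):
    # y <-$ {0,1}^k; x <- y xor z
    return Counter((y ^ z, y) for y in BITS)

for z in BITS:
    assert left(z) == right(z)    # same distribution on (x, y) for each z
print("observationally equivalent on {x, y, z}")
```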
Proving program equivalence

Goal: ⊢ c_1 ≃^I_O c_2

Mechanized program transformations
  A transformation T(c_1, c_2, I, O) = (c_1′, c_2′, I′, O′)
  Soundness theorem:

    T(c_1, c_2, I, O) = (c_1′, c_2′, I′, O′)    ⊢ c_1′ ≃^{I′}_{O′} c_2′
    --------------------------------------------------------------------
    ⊢ c_1 ≃^I_O c_2

  Implemented as reflection-based Coq tactics (replace reasoning by computation)

Available transformations:
  dead code elimination (deadcode)
  constant folding and propagation (ep)
  procedure call inlining (inline)
  code movement (swap)
  common suffix/prefix elimination (eqobs_hd, eqobs_tl)
Proving program equivalence

Goal: ⊢ c ≃^I_O c

An (incomplete) tactic for self-equivalence (eqobs_in):
  does ⊢ c ≃^I_O c hold?
  analyze dependencies to compute I′ such that ⊢ c ≃^{I′}_O c
  check that I′ ⊆ I
Think about type systems for information flow security.
Example: ElGamal encryption

Lemma foo: ⊢ ElGamal_0 ≃^∅_{d} DDH_0

Game ElGamal_0:                     Game DDH_0:
  x ←$ Z_q; y ←$ Z_q;                 x ←$ Z_q;
  (m_0, m_1) ← A(g^x);                y ←$ Z_q;
  b ←$ {0,1};                         d ← B(g^x, g^y, g^(xy))
  ζ ← g^(xy) × m_b;
  b′ ← A′(g^x, g^y, ζ);
  d ← b = b′

Proof.
  inline_r B.   (* inline the call to B: the right game becomes
                   x ←$ Z_q; y ←$ Z_q; α ← g^x; β ← g^y; γ ← g^(xy);
                   (m_0, m_1) ← A(α); b ←$ {0,1}; b′ ← A′(α, β, γ × m_b); d ← b = b′ *)
  ep.           (* constant folding and propagation: α, β, γ and ζ are substituted,
                   so both games call A(g^x) and A′(g^x, g^y, g^(xy) × m_b) *)
  deadcode.     (* remove the now-unused assignments to α, β, γ and ζ *)
  eqobs_in.     (* the two games are now syntactically identical *)
Qed.

Consequence: Pr[ElGamal_0, m : b = b′] = Pr[DDH_0, m : b = b′]
Demo: ElGamal encryption
Reasoning about Failure Events

Lemma (Fundamental Lemma of Game-Playing)
Let A, B, F be events and let G_1, G_2 be two games such that
  Pr[G_1 : A ∧ ¬F] = Pr[G_2 : B ∧ ¬F]
Then
  | Pr[G_1 : A] − Pr[G_2 : B] | ≤ Pr[G_{1,2} : F]
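A one-step justification of the bound, standard but not spelled out on the slide: split both probabilities on F and use the hypothesis.

```latex
\begin{aligned}
\bigl|\Pr[G_1 : A] - \Pr[G_2 : B]\bigr|
  &= \bigl|\Pr[G_1 : A \wedge F] + \Pr[G_1 : A \wedge \neg F]
         - \Pr[G_2 : B \wedge F] - \Pr[G_2 : B \wedge \neg F]\bigr| \\
  &= \bigl|\Pr[G_1 : A \wedge F] - \Pr[G_2 : B \wedge F]\bigr|
   \;\le\; \max\bigl(\Pr[G_1 : F], \Pr[G_2 : F]\bigr).
\end{aligned}
```

In the typical application on the next slide the two games set F = bad identically, so the two probabilities coincide and the maximum is just the quantity written Pr[G_{1,2} : F] above.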
Automation

When A = B and F = bad: if G_0 and G_1 are syntactically identical except after program points setting bad, e.g.

  Game G_0:              Game G_1:
    ...                    ...
    bad ← true; c_0        bad ← true; c_1
    ...                    ...

then
  Pr_{G_0, m}[A | ¬bad] = Pr_{G_1, m}[A | ¬bad]
  Pr_{G_0, m}[bad] = Pr_{G_1, m}[bad]

Corollary
  | Pr_{G_0, m}[A] − Pr_{G_1, m}[A] | ≤ Pr_{G_{0,1}}[bad]
Application: the PRP/PRF Switching Lemma

Game G_RP:                          Game G_RF:
  L ← nil; b ← A()                    L ← nil; b ← A()

  Oracle O(x):                        Oracle O(x):
    if x ∉ dom(L) then                  if x ∉ dom(L) then
      y ←$ {0,1}^ℓ \ ran(L);              y ←$ {0,1}^ℓ;
      L ← (x, y) :: L                     L ← (x, y) :: L
    return L(x)                         return L(x)

Suppose A makes at most q queries to O. Then
  | Pr[G_RP : b] − Pr[G_RF : b] | ≤ q(q−1) / 2^(ℓ+1)

First introduced by Impagliazzo and Rudich in 1989
Proof fixed by Bellare and Rogaway (2006) and Shoup (2004)
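As a sanity check of the statement (my own illustration, not part of the proof), the two oracles can be lazily sampled and the gap estimated empirically for a naive collision-seeking adversary; the parameters ELL and Q are arbitrary toy choices.

```python
# Lazily sampled random permutation vs. random function, and a Monte-Carlo
# estimate of the advantage of the adversary that looks for output collisions.
import random

ELL = 8          # output length in bits (tiny, so collisions are visible)
Q = 10           # number of oracle queries

def make_oracle(permutation):
    L = {}
    def O(x):
        if x not in L:
            if permutation:
                L[x] = random.choice([v for v in range(2**ELL) if v not in L.values()])
            else:
                L[x] = random.randrange(2**ELL)
        return L[x]
    return O

def adversary(O):
    answers = [O(x) for x in range(Q)]           # query Q distinct points
    return int(len(set(answers)) < len(answers)) # output 1 iff two answers collide

def estimate(permutation, trials=20000):
    return sum(adversary(make_oracle(permutation)) for _ in range(trials)) / trials

adv = abs(estimate(True) - estimate(False))      # permutation side never collides
print(adv, "<=", Q * (Q - 1) / 2**(ELL + 1))     # bound from the switching lemma
```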
Proof

Game G_RP:                              Game G_RF:
  L ← nil; b ← A()                        L ← nil; b ← A()

  Oracle O(x):                            Oracle O(x):
    if x ∉ dom(L) then                      if x ∉ dom(L) then
      y ←$ {0,1}^ℓ;                           y ←$ {0,1}^ℓ;
      if y ∈ ran(L) then                      if y ∈ ran(L) then
        bad ← true;                             bad ← true
        y ←$ {0,1}^ℓ \ ran(L)
      L ← (x, y) :: L                         L ← (x, y) :: L
    return L(x)                             return L(x)

| Pr[G_RP : b] − Pr[G_RF : b] | ≤ Pr[G_RF : bad]
Proof

Failure Event Lemma
Let k be a counter for the calls to O, with m(bad) = false:
  IF   Pr[O : bad] ≤ f(m(k))
  THEN Pr[G : bad] ≤ Σ_{k=0}^{q_O − 1} f(k)

Oracle O(x):
  if x ∉ dom(L) then
    y ←$ {0,1}^ℓ;
    if y ∈ ran(L) then bad ← true;
    L ← (x, y) :: L
  return L(x)

Prove that Pr[O : bad] ≤ |L| / 2^ℓ.
Summing over the q queries, Pr[G_RF : bad] ≤ q(q−1) / 2^(ℓ+1).
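The last step uses the arithmetic sum left implicit on the slide: on the k-th query the list L holds at most k entries, so

```latex
\Pr[\mathrm{G_{RF}} : \mathsf{bad}]
  \;\le\; \sum_{k=0}^{q-1} \frac{k}{2^{\ell}}
  \;=\; \frac{1}{2^{\ell}} \cdot \frac{q(q-1)}{2}
  \;=\; \frac{q(q-1)}{2^{\ell+1}}.
```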
Application: Zero-Knowledge Proofs

A Machine-Checked Formalization of Σ-Protocols.
23rd IEEE Computer Security Foundations Symposium, CSF 2010
Zero-Knowledge Proofs

(Illustration omitted: Peggy, the prover, and Victor, the verifier.)
If you ever need to explain this to your kids

How to Explain Zero-Knowledge Protocols to your Children.
Jean-Jacques Quisquater, Louis C. Guillou. CRYPTO '89
Properties of Zero-Knowledge Proofs

Completeness: an honest prover always convinces an honest verifier
Soundness: a dishonest prover (almost) never convinces a verifier
Zero-Knowledge: a verifier does not learn anything from playing the protocol
Formalizing Σ-Protocols

The prover knows (x, w) such that R(x, w); the verifier knows only x.

  Prover                                    Verifier
    computes commitment r     --  r  -->
                              <--  c  --    samples challenge c
    computes response s       --  s  -->
                                            accepts/rejects the response
Formalizing Σ-Protocols

The prover knows (x, w) such that R(x, w); the verifier knows only x.

  Prover                                    Verifier
    (r, state) ← P_1(x, w)    --  r  -->
                              <--  c  --    c ←$ C
    s ← P_2(x, w, state, c)   --  s  -->
                                            b ← V_2(x, r, c, s)
Formalizing Σ-Protocols

A Σ-protocol is given by:
  types for x, w, r, s, state
  a knowledge relation R
  a challenge set C
  procedures P_1, P_2, V_2

The protocol can be seen as a program:

  protocol(x, w) :
    (r, state) ← P_1(x, w);
    c ←$ C;
    s ← P_2(x, w, state, c);
    b ← V_2(x, r, c, s)
Formalizing Σ-Protocols

Completeness
  ∀ x, w. R(x, w) ⟹ Pr[protocol(x, w) : b = true] = 1

Soundness
  There exists a polynomial-time procedure KE such that, whenever c_1 ≠ c_2 and both (x, r, c_1, s_1) and (x, r, c_2, s_2) are accepting,
  Pr[w ← KE(x, r, c_1, c_2, s_1, s_2) : R(x, w)] = 1
Honest-Verifier ZK vs. Special Honest-Verifier ZK

  protocol(x, w) :                    protocol(x, w, c) :
    (r, state) ← P_1(x, w);             (r, state) ← P_1(x, w);
    c ←$ C;                             s ← P_2(x, w, state, c);
    s ← P_2(x, w, state, c);            b ← V_2(x, r, c, s)
    b ← V_2(x, r, c, s)

Honest-Verifier ZK
  ∃S. ∀ x, w. R(x, w) ⟹ ⊢ protocol(x, w) ≃^{x}_{r,c,s} (r, c, s) ← S(x)

Special Honest-Verifier ZK
  ∃S. ∀ x, w, c. R(x, w) ⟹ ⊢ protocol(x, w, c) ≃^{x,c}_{r,c,s} (r, s) ← S(x, c)
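As a concrete instance (not named on the slides, and using toy parameters of my own choosing), Schnorr's protocol for proving knowledge of a discrete logarithm w with x = g^w fits the (P_1, P_2, V_2) interface directly, and its SHVZK simulator S samples the response first and solves for the commitment.

```python
# Schnorr's protocol in a toy group (illustrative parameters only).
import secrets

p, q, g = 23, 11, 4        # g generates the subgroup of order q in Z_p^*

def P1(x, w):
    a = secrets.randbelow(q)               # prover's randomness
    return pow(g, a, p), a                 # commitment r = g^a, state = a

def P2(x, w, state, c):
    return (state + c * w) % q             # response s = a + c*w mod q

def V2(x, r, c, s):
    return pow(g, s, p) == (r * pow(x, c, p)) % p   # accept iff g^s = r * x^c

def protocol(x, w):
    r, state = P1(x, w)
    c = secrets.randbelow(q)               # c <-$ C
    s = P2(x, w, state, c)
    return V2(x, r, c, s)

def S(x, c):
    # SHVZK simulator: sample s first, then set r = g^s * x^(-c)
    s = secrets.randbelow(q)
    r = (pow(g, s, p) * pow(x, q - c, p)) % p
    return r, s

w = 7
x = pow(g, w, p)
assert protocol(x, w)                      # completeness
c = secrets.randbelow(q)
r, s = S(x, c)
assert V2(x, r, c, s)                      # simulated transcripts also verify
```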
What does it take to trust a proof in CertiCrypt?

Verification is fully automated! (but proof construction is time-consuming)

You need to:
  trust the type checker of Coq
  trust the language semantics
  make sure the security statement (a few lines of Coq) is the one you expect

You don't need to:
  understand or even read the proof
  trust tactics and program transformations
  trust program logics and the wp-calculus
  be an expert in Coq
Contributions of CertiCrypt

A language-based approach to computational crypto proofs:
  an automated framework to formalize game-based proofs in Coq
  a probabilistic extension of Relational Hoare Logic
  foundations for techniques used in crypto proofs

Several case studies:
  PRP/PRF switching lemma
  chosen-plaintext security of ElGamal
  chosen-plaintext security of Hashed ElGamal in the random oracle and standard models
  unforgeability of Full-Domain Hash signatures
  adaptive chosen-ciphertext security of OAEP
  Σ-protocols
  IBE (F. Olmedo), Golle-Juels (Z. Luo), BLS (M. Christofi)
The road ahead

CertiCrypt bridges the gap between fully formal machine-checked proofs and pen-and-paper proof sketches.

Fact: cryptographers won't embrace proof assistants (CertiCrypt) anytime soon.

What if we start building a bridge from the other side?
  Start from a proof sketch and try to fill in the blanks and justify reasoning steps, building into the tool as much automation as possible.
  Record and highlight unjustified proof steps and let the user give finer-grained justifications, perhaps interactively, using automated tools.
EasyCrypt

Computer-aided proofs for the working cryptographer
EasyCrypt

Generate, from a sequence of games and probabilistic statements, a fully verified proof.

Rationale:
  crypto papers provide a sequence of games and statements
  probabilistic statements have a direct translation to relational Hoare judgments

Challenges:
  automatic verification of relational Hoare judgments
  invariant generation for adversaries
Automatic verification of relational Hoare judgments

Verify the validity of ⊢ G_1 ∼ G_2 : Ψ ⇒ Φ by generating verification conditions and sending them to an SMT solver.

Key idea: use one-sided rules (a.k.a. self-composition), except for:
  procedure calls (procedures have relational specs!)
    use inlining if possible
    if not (e.g. adversary calls), use two-sided rules; this needs the call graphs to be similar
  random assignments
    put programs in static single (random) assignment form
    hoist random assignments
    use the specialized two-sided rule for random assignments