CertiCrypt: Language-Based Cryptographic Proofs in Coq. Gilles Barthe 1,2, Benjamin Grégoire 1,3, Santiago Zanella 1,3. 1 Microsoft Research - INRIA Joint Centre, France; 2 IMDEA Software, Madrid, Spain; 3 INRIA Sophia Antipolis - Méditerranée, France. POPL 2009
What’s wrong with cryptographic proofs?
"In our opinion, many proofs in cryptography have become essentially unverifiable. Our field may be approaching a crisis of rigor." (M. Bellare and P. Rogaway)
"Do we have a problem with cryptographic proofs? Yes, we do [...] We generate more proofs than we carefully verify (and as a consequence some of our published proofs are incorrect)." (S. Halevi)
"Security proofs in cryptography may be organized as sequences of games [...] this can be a useful tool in taming the complexity of security proofs that might otherwise become so messy, complicated, and subtle as to be nearly impossible to verify." (V. Shoup)
Game-based cryptographic proofs
Attack game G_0^η (calling the adversary A), with associated probability Pr_{G_0^η}[A_0]
Security property: Pr_{G_0^η}[A_0] ≤ ε(η)
Game-based cryptographic proofs
Attack game G_0^η, intermediate games G_1^η, ..., final game G_n^η (each calling the adversary A)
Pr_{G_0^η}[A_0] ≤ h_1(Pr_{G_1^η}[A_1]) ≤ ... ≤ h_n(Pr_{G_n^η}[A_n])
Hence Pr_{G_0^η}[A_0] ≤ h(Pr_{G_n^η}[A_n]) ≤ ε(η)
Game-based proofs: essence and problems
Independent events: games G, G_0, G' with
Pr_{G_0}[A_0] ≤ h(Pr_G[A]) × h'(Pr_{G'}[A'])
Essence: relate the probability of events in consecutive games.
But:
How do we represent games?
What adversaries are feasible?
How do we make a proof hold for any feasible adversary?
Language-based proofs
What if we represent games as programs?
Games ⟹ programs
Probability space ⟹ program denotation
Game transformations ⟹ program transformations
Generic adversary ⟹ unspecified procedure
Feasibility ⟹ probabilistic polynomial time
pWHILE: a probabilistic programming language

I ::= V ← E                    assignment
    | V ←$ D                   random sampling
    | if E then C else C       conditional
    | while E do C             while loop
    | V ← P(E, ..., E)         procedure call
C ::= nil                      nop
    | I; C                     sequence

Measure monad: M(X) := (X → [0,1]) → [0,1]
Semantics (M is the set of memories): ⟦·⟧ : C → M → M(M)

Example: for any memory m and payoff function f,
⟦x ←$ {0,1}; y ←$ {0,1}⟧ m f = 1/4 f(m[0,0/x,y]) + 1/4 f(m[0,1/x,y]) + 1/4 f(m[1,0/x,y]) + 1/4 f(m[1,1/x,y])
In particular, taking f = 𝟙_{x≠y},
⟦x ←$ {0,1}; y ←$ {0,1}⟧ m 𝟙_{x≠y} = 0 + 1/4 + 1/4 + 0 = 1/2

Probability: Pr_{G,m}[A] := ⟦G⟧ m 𝟙_A
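As an aside for readers who want to see the measure monad concretely: below is a minimal Coq sketch of M, its unit and its bind, assuming an abstract type U standing for the interval [0,1]. The names U, munit and mbind are illustrative, not necessarily those used in CertiCrypt.

Parameter U : Type.  (* stands for the unit interval [0,1] *)

(* A measure over X maps each payoff function f : X -> U to its
   expected value. *)
Definition M (X : Type) : Type := (X -> U) -> U.

(* Dirac measure at x: evaluating f at x. *)
Definition munit {X : Type} (x : X) : M X :=
  fun f => f x.

(* Sequencing: integrate g against the measure mu. *)
Definition mbind {X Y : Type} (mu : M X) (g : X -> M Y) : M Y :=
  fun f => mu (fun x => g x f).

With this reading, Pr_{G,m}[A] on the slide is literally the denotation of G at m applied to the indicator function of A.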
Untyped vs. typed language

1st attempt: untyped language, lots of problems
  No guarantee that programs are well-typed
  Had to deal with ill-typed programs
2nd attempt: typed language (dependently typed syntax!)
  Programs are well-typed by construction

Inductive I : Type :=
| Assign : ∀ t, V t → E t → I
| Rand   : ∀ t, V t → D t → I
| Cond   : E Bool → C → C → I
| While  : E Bool → C → I
| Call   : ∀ l t, P (l, t) → V t → E⋆ l → I
where C := I⋆.

Parametrized semantics: ⟦·⟧ : ∀ η, C → M → M(M)
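To see why the dependently typed syntax makes programs well-typed by construction, here is a tiny self-contained Coq analogue; the families ty, var, expr below are illustrative stand-ins, not CertiCrypt's V, E, D, P. An assignment can only combine a variable and an expression of the same type, so ill-typed programs cannot even be written down.

Inductive ty : Type := TBool | TNat.

Inductive var : ty -> Type :=
| VBool : nat -> var TBool
| VNat  : nat -> var TNat.

Inductive expr : ty -> Type :=
| EBool : bool -> expr TBool
| ENat  : nat  -> expr TNat
| EVar  : forall t, var t -> expr t.

Inductive instr : Type :=
| Assign : forall t, var t -> expr t -> instr.

(* Accepted by the type checker: *)
Definition ok : instr := Assign TBool (VBool 0) (EBool true).
(* Rejected: Assign TBool (VBool 0) (ENat 3) does not type-check. *)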
Characterizing feasible adversaries

A cost model for reasoning about program complexity:
⟦·⟧' : ∀ η, C → (M × ℕ) → M(M × ℕ)
Non-intrusive: ⟦G⟧ m = bind (⟦G⟧' (m, 0)) (λ mn. unit (fst mn))

A program G runs in probabilistic polynomial time if:
  It terminates with probability 1 (i.e. ∀ m, Pr_{G,m}[true] = 1)
  There exists a polynomial p(·) such that whenever (m', n) is reachable with positive probability, n ≤ p(η)
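The "non-intrusive" equation can be stated over abstract parameters as a Coq proposition. Cmd, Mem, sem and sem_cost below are placeholders for illustration, not CertiCrypt's actual identifiers; the monad operations repeat the sketch above so the snippet stands alone.

Parameter Cmd Mem U : Type.

Definition M (X : Type) : Type := (X -> U) -> U.
Definition munit {X : Type} (x : X) : M X := fun f => f x.
Definition mbind {X Y : Type} (mu : M X) (g : X -> M Y) : M Y :=
  fun f => mu (fun x => g x f).

Parameter sem      : Cmd -> Mem -> M Mem.                (* plain semantics *)
Parameter sem_cost : Cmd -> Mem * nat -> M (Mem * nat).  (* cost-instrumented *)

(* Erasing the step counter recovers the plain denotation. *)
Definition non_intrusive : Prop :=
  forall (c : Cmd) (m : Mem),
    sem c m = mbind (sem_cost c (m, 0)) (fun mn => munit (fst mn)).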
Program equivalence

Definition (Observational equivalence)
f =_X g  :=  ∀ m1 m2, m1 =_X m2 ⇒ f m1 = g m2
⊨ G1 ≃_O^I G2  :=  ∀ m1 m2 f g, m1 =_I m2 ∧ f =_O g ⇒ ⟦G1⟧ m1 f = ⟦G2⟧ m2 g

Generalizes information flow security. But it is not general enough...
⊨ if x = 0 then y ← x else y ← 1 ≃_{x,y}^{x} if x = 0 then y ← 0 else y ← 1  ???
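Spelled out in Coq with placeholder types (Var, Mem, Cmd, U, and eq_on X m1 m2 standing for "m1 and m2 agree on the variables in X"), the two definitions on this slide read roughly as follows; this is a sketch, not CertiCrypt's development.

Parameter Var Mem Cmd U : Type.
Definition M (X : Type) : Type := (X -> U) -> U.
Parameter sem   : Cmd -> Mem -> M Mem.                   (* [[ . ]]   *)
Parameter eq_on : (Var -> Prop) -> Mem -> Mem -> Prop.   (* m1 =_X m2 *)

(* f =_X g : payoff functions agree on memories that agree on X. *)
Definition feq (X : Var -> Prop) (f g : Mem -> U) : Prop :=
  forall m1 m2, eq_on X m1 m2 -> f m1 = g m2.

(* |= G1 ~_O^I G2 *)
Definition obs_equiv (Ivars Ovars : Var -> Prop) (G1 G2 : Cmd) : Prop :=
  forall m1 m2 f g,
    eq_on Ivars m1 m2 -> feq Ovars f g -> sem G1 m1 f = sem G2 m2 g.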
Program equivalence

Definition (Observational equivalence, generalization)
⊨ G1 ∼ G2 : Ψ ⇒ Φ  :=  ∀ m1 m2, m1 Ψ m2 ⇒ ⟦G1⟧ m1 ∼_Φ ⟦G2⟧ m2
where ∼_Φ is the lifting of the relation Φ from memories to distributions.

Example (x⟨1⟩ denotes the value of x in the first memory):
⊨ y ← x ∼ y ← 0 : =_{x} ∧ (x = 0)⟨1⟩ ⇒ =_{x,y}
⊨ y ← 1 ∼ y ← 1 : =_{x} ∧ (x ≠ 0)⟨1⟩ ⇒ =_{x,y}
and the two guards (x = 0) are related under =_{x}, hence
⊨ if x = 0 then y ← x else y ← 1 ∼ if x = 0 then y ← 0 else y ← 1 : =_{x} ⇒ =_{x,y}
From program equivalence to probability

Let A be an event that depends only on variables in O.
To prove Pr_{G1,m1}[A] = Pr_{G2,m2}[A] it suffices to show
⊨ G1 ≃_O^I G2  and  m1 =_I m2
Proving program equivalence

Goal: ⊨ G1 ≃_O^I G2
A Relational Hoare Logic:

⊨ c1 ∼ c2 : Φ ⇒ Φ'    ⊨ c1' ∼ c2' : Φ' ⇒ Φ''
-------------------------------------------- [R-Seq]
⊨ c1; c1' ∼ c2; c2' : Φ ⇒ Φ''
...
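As a shape illustration (not CertiCrypt's pRHL), the judgment and the [R-Seq] rule can be rendered in Coq as an inductive relation over abstract commands and relational assertions; only the sequence rule is shown.

Parameter Cmd Mem : Type.
Parameter Seq : Cmd -> Cmd -> Cmd.          (* c1; c2 *)

Definition Assn : Type := Mem -> Mem -> Prop.

(* equiv Phi c1 c2 Phi' is meant to read  |= c1 ~ c2 : Phi ==> Phi'. *)
Inductive equiv : Assn -> Cmd -> Cmd -> Assn -> Prop :=
| R_Seq : forall (Phi Phi' Phi'' : Assn) (c1 c2 c1' c2' : Cmd),
    equiv Phi  c1  c2  Phi'  ->
    equiv Phi' c1' c2' Phi'' ->
    equiv Phi (Seq c1 c1') (Seq c2 c2') Phi''
(* ... plus rules for assignment, random sampling, conditionals, ... *).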