Customized protocol modeling for detection of guessing and DoS attacks
Bogdan Groza, Marius Minea
Institute e-Austria Timișoara and Politehnica University of Timișoara
Formal Methods for Components and Objects, Graz, 30 November 2010
Components and environment in security modeling
Usual execution for components: (component step; environment step)*
Modeling security: the intruder interacts with the protocol as a protocol agent, or between protocol steps, using Dolev-Yao abilities: (component step; environment step*)*
Implementation choices
ASLan is based on rewriting rules. How to customize?
- add extra intruder capabilities: but this would require changing the backends
- add custom rewrite rules: flexible, since we can control when they are executed
Applications: denial-of-service attacks, guessing attacks
DoS attacks by resource exhaustion
General intuition: force the victim to consume excessive resources while the attacker spends fewer resources.
In cryptographic protocols, some operations are more expensive (exponentiation, public-key encryption/decryption, signatures).
Root causes:
- design flaws: cost imbalance (usually affects the server side); solution: cryptographic puzzles (client puzzles)
- lack of authenticity: an adversary can steal computational work; basic principle: include the sender identity in the message
Classifying DoS attacks
Excessive use: no abnormal protocol use; the adversary consumes fewer resources than honest principals (flooding, spam, ...)
Malicious use: the adversary brings the protocol to an abnormal state; protocol goals are not completed correctly
Modelling cost in transition rules
Augment the model with a cost for protocol transitions:
  LHS.cost(P,C1) => RHS.cost(P,C2)
Add the cost of cryptographic primitives in intruder deductions:
  iknows(X).iknows(Y).cost(i,C1).sum(C1,c_op,C2) => iknows(op(X,Y)).cost(i,C2)   with op ∈ {exp, enc, sig}
  iknows(enc(K,X)).iknows(K).cost(i,C1).sum(C1,c_dec,C2) => iknows(X).cost(i,C2)   (for decryption)
Cost set: monoid {0, low, high, expensive} [Meadows'01]
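The cost-tracking idea above can be sketched in a few lines of Python. This is an illustrative model, not the ASLan implementation: the `Cost` levels mirror the {0, low, high, expensive} monoid, and `cost_sum` plays the role of the `sum` fact; the per-operation costs are assumptions for the example.

```python
from enum import IntEnum

class Cost(IntEnum):
    """Ordered cost levels, a simplification of {0, low, high, expensive}."""
    ZERO = 0
    LOW = 1
    HIGH = 2
    EXPENSIVE = 3

def cost_sum(c1: Cost, c2: Cost) -> Cost:
    # Abstract sum over the monoid: adding costs can only raise the level,
    # saturating at EXPENSIVE.
    return Cost(min(c1 + c2, Cost.EXPENSIVE))

# Hypothetical per-operation costs for intruder deductions.
OP_COST = {"exp": Cost.EXPENSIVE, "enc": Cost.HIGH,
           "sig": Cost.EXPENSIVE, "dec": Cost.HIGH}

def apply_deduction(cost_facts: dict, principal: str, op: str) -> None:
    """Mirror iknows(X).iknows(Y).cost(P,C1).sum(C1,c_op,C2) => ... cost(P,C2):
    performing op raises the principal's accumulated cost."""
    cost_facts[principal] = cost_sum(cost_facts.get(principal, Cost.ZERO),
                                     OP_COST[op])

costs = {}
apply_deduction(costs, "i", "enc")   # intruder encrypts: HIGH
apply_deduction(costs, "i", "exp")   # then exponentiates: saturates at EXPENSIVE
```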
Formalizing excessive use
Attack state: (1) the session is initiated by the adversary, and (2) the adversary's cost is less than the honest principal's cost:
  dos_excessive(P) := initiate(i).cost(i,Ci).cost(P,CP).less(Ci,CP)
Augment rules: track only the cost of adversary-initiated sessions (SID):
  LHS.initiate(i,SID).cost(P,C1).sum(C1,c_step,C2) => RHS.cost(P,C2)
  LHS.initiate(A,SID).not(equal(i,A)) => RHS
=> can also model distributed DoS
Formalizing malicious use
In normal use (injective agreement), protocol events match: every message Msg received by Rcv from Snd in session SID at step label Lbl is due to a corresponding send.
Absence of this property is an attack on protocol functionality (authentication):
  nagree(Snd,Rcv,Msg,Lbl,SID) := recv(Snd,Rcv,Msg,Lbl,SID).not(send(Snd,Rcv,Msg,Lbl,SID))
This can happen due to the adversary reusing values from previous runs
=> track agent cost in compromised sessions, but not in normal ones
Malicious use in multiple sessions
1. track per-session cost for normal sessions:
  LHS.not(bad(SID)).scost(P,C1,SID).sum(C1,c_step,C2) => RHS.scost(P,C2,SID)
2. identify session tampering => switch cost tracking:
  LHS.not(send(S,P,M,L,SID)).scost(P,C1,SID).sum(C1,c_step,C2) => RHS.recv(S,P,M,L,SID).bad(SID).cost(P,C2)
3. track per-principal cost for tampered sessions:
  LHS.bad(SID).cost(P,C1).sum(C1,c_step,C2) => RHS.bad(SID).cost(P,C2)
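The three-phase switch above can be illustrated with a small Python sketch, under simplifying assumptions: costs are plain integers rather than the monoid, and all names (`CostTracker`, `step`, the event tuples) are illustrative, not the actual ASLan facts.

```python
# Sketch of the three-phase cost tracking: per-session cost while a session
# is normal, switched to per-principal cost once tampering (a recv without a
# matching send) marks the session as bad.

class CostTracker:
    def __init__(self):
        self.session_cost = {}    # scost(P,C,SID), keyed by (principal, sid)
        self.principal_cost = {}  # cost(P,C), accumulated for tampered sessions
        self.bad = set()          # bad(SID)
        self.sent = set()         # send(...) events observed so far

    def step(self, principal, sid, msg, c_step):
        # Phase 2: a received msg with no matching send flags the session.
        if (principal, sid, msg) not in self.sent:
            self.bad.add(sid)
        if sid in self.bad:
            # Phase 3: per-principal cost for tampered sessions.
            self.principal_cost[principal] = \
                self.principal_cost.get(principal, 0) + c_step
        else:
            # Phase 1: per-session cost for normal sessions.
            key = (principal, sid)
            self.session_cost[key] = self.session_cost.get(key, 0) + c_step

t = CostTracker()
t.sent.add(("b", 1, "msg1"))
t.step("b", 1, "msg1", 2)    # matching send exists: counted per session
t.step("b", 1, "replay", 3)  # no matching send: session 1 flips to bad
```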
Case studies
1. Station-to-station protocol: reproduced Lowe's attack
Normal run:
  A → B: α^x
  B → A: α^y, Cert_B, E_k(sig_B(α^y, α^x))
  A → B: Cert_A, E_k(sig_A(α^x, α^y))
Attack (adversary in the middle):
  A → Adv(B): α^x
  Adv → B: α^x
  B → Adv: α^y, Cert_B, E_k(sig_B(α^y, α^x))
  Adv(B) → A: α^y, Cert_B, E_k(sig_B(α^y, α^x))
  A → Adv(B): Cert_A, E_k(sig_A(α^x, α^y))
2. Just Fast Keying (augmented with client puzzles): malicious use — exploit an honest initiator to solve puzzles of a targeted responder
Outline
1. Custom rules for DoS attacks
2. Custom rules for guessing attacks
Why detect guessing attacks?
Important: weak passwords are common; vulnerable protocols are still in use.
Realistic, if secrets have low entropy.
Few tools are capable of detecting guessing attacks: Lowe '02, Corin et al. '04, Blanchet-Abadi-Fournet '08
What is needed to guess?
1. guess a value for the secret s
2. compute a verifier value that confirms the guess
Example guessing conditions [Lowe, 2002]:
- Adv knows v and E_s(v): guess s, decrypt, and verify the known value v
- Adv knows E_s(v.v): guess s, decrypt, and obtain v in two ways
- Adv knows E_s(s): guess s, decrypt E_s(s), and verify that s is obtained
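The first guessing condition (known v and E_s(v)) can be demonstrated with a minimal offline dictionary attack. The "cipher" below is a toy stand-in for E_s(.) (XOR with a hash-derived keystream), and the secret "1234" and dictionary are invented for the example.

```python
import hashlib

def enc(secret: str, msg: bytes) -> bytes:
    """Toy symmetric encryption for illustration only: XOR the message with a
    keystream derived from the secret. Stands in for E_s(.); not a real cipher."""
    stream = hashlib.sha256(secret.encode()).digest()
    return bytes(m ^ k for m, k in zip(msg, stream))

dec = enc  # XOR is its own inverse

# The adversary observes the verifier v and the ciphertext E_s(v).
v = b"nonce-value"
ciphertext = enc("1234", v)  # honest run using the weak secret "1234"

# Offline guessing: decrypt under each candidate secret and check whether
# the known value v comes out; a match confirms the guess.
dictionary = ["0000", "1111", "1234", "9999"]
recovered = next(s for s in dictionary if dec(s, ciphertext) == v)
```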
Goals for theory and implementation
- Detect both on-line and off-line attacks
- Distinguish blockable / non-blockable on-line attacks
- Handle cases where several verifiers match one guess
- Reason about chaining guesses of multiple secrets
To guess, we need pseudo-random one-way functions
We can guess s from f(s) if f is injective in s.
Generalized: f(s,x) is distinguishing in s (probabilistically) if polynomially many values f(s,x_i) can distinguish any s' ≠ s.
Quantified: f(s,x) is strongly distinguishing in s after q queries if q values f(s,x_i) can on average distinguish any s' ≠ s.
Two main cases for guessing:
- knowing the image of a one-way function on the secret
- knowing the image of a trap-door one-way function on the secret
Oracles and the adversary
Oracle: abstract view of a computation (function); may be off-line or on-line, employing an honest principal.
An adversary:
- observes the oracle for a secret s if he knows a term that contains the secret s:
    ihears(Term) ∧ Term ⊢_part s ⇒ observes(O^s_Term(·))
- controls the oracle for a secret s if he can generate terms that contain fresh replacements for the secret s:
    ihears(Term) ∧ Term ⊢_{s ← igen(s')} Term' ∧ iknows(Term') ⇒ controls(O^s_Term(·))
What guesses can be verified? (1)
- an already known term:
    vrfy(Term) :- iknows(Term)
- a signature, if the public key and the message are known:
    vrfy(sign(inv(PK),Term)) :- iknows(PK), iknows(Term)
- a term from under a one-way function:
    vrfy(Term) :- iknows(h), iknows(apply(h,Term')), part(Term,Term'), controls(Term,Term')
What guesses can be verified? (2)
- a ciphertext, if the key is known (or a decryption oracle is controlled) and part of the plaintext is verifiable:
    vrfy(scrypt(K,Term)) :- iknows(K), splitknow(Term,T1,T2), vrfy(T2)
- a key, if a ciphertext is known and part of the plaintext is verifiable:
    vrfy(K) :- ihears(scrypt(K,Term)), splitknow(Term,T1,T2), vrfy(T2)
where splitknow(Term,T1,T2) splits Term and adds iknows(T1)
e.g., from m.h(m) with iknows(m) one can verify h(m)
Actual guessing rules
- from one-way function images (allows guessing from h(s), m.h(s.m), etc.):
    guess(s) :- observes(O^s_{f(s)}(·)), controls(O^s_{f(s)}(·))
- by inverting one-way trapdoor functions (allows guessing from {m.m}_s, m.{h(m)}_s, etc.):
    guess(s) :- observes(O^s_{{T}_K}(·)), controls(O^s_{{T}_{K^-1}}(·)), splitknow(T,T1,T2), vrfy(T2)
Implementation
Protocol: modelled using transition rules.
Guessing rules: modelled as Horn clauses, re-evaluated after each protocol step (and applied until transitive closure):
- allows natural modeling of recursive facts
- multiple (intruder) deductions are applied after each protocol step
- crucial efficiency gain compared to modeling with transitions
Flavors of guessing
- off-line: terms constructed directly by the intruder
- on-line: uses computations of honest protocol principals (the intruder controls computation oracles with arbitrary inputs)
- undetectable (noted by Ding and Horster '95): all participants terminate (no abnormal protocol activity); modeled by checking that all instances reach a final state
- multiple secrets: a guessed secret becomes known to the intruder, which allows chaining of guessing rules
Case study 1: Norwegian ATM
Real case, described by Hole et al. (IEEE S&P 2007).
2001: money withdrawn from a card within 1 hour of it being stolen.
Question: could the thief have done it without knowing the PIN?
Card setup: PIN and card-specific data are DES-encrypted with a unique bank key; a 16-bit truncation of the result is stored on the card: ⌊DES_BK(PIN.CV)⌋_16
Suggested attack [Hole et al., 2007]: break the bank key
- DES key search, the verifier being a legitimate card owned by the adversary
- Problem: the verifier has only 16 bits ⇒ 2^(56−16) = 2^40 candidate bank keys remain
- But: each honest card reduces the key search space by 16 bits ⇒ ⌈56/16⌉ = 4 cards suffice
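The search-space arithmetic above can be checked with a few lines of Python. This is only the expected-count calculation, assuming each card's 16-bit verifier independently filters roughly a 2^-16 fraction of the keys; it does not perform any actual DES search.

```python
import math

KEY_BITS = 56       # DES key size
VERIFIER_BITS = 16  # truncation of DES_BK(PIN.CV) stored on each card

def surviving_keys(cards: int) -> int:
    """Expected number of bank keys consistent with `cards` independent
    16-bit verifiers: each card cuts the space by a factor of 2^16."""
    return 2 ** max(KEY_BITS - cards * VERIFIER_BITS, 0)

# One card leaves 2^40 candidates; four cards pin the key down to one.
assert surviving_keys(1) == 2 ** 40
assert surviving_keys(4) == 1
assert math.ceil(KEY_BITS / VERIFIER_BITS) == 4
```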
Norwegian ATM model
Card issuing stage:
  1. Bank → User: ⌊DES_BK(PIN)⌋_16, PIN
PIN change procedure:
  1. User → ATM: ⌊DES_BK(PIN_old)⌋_16, PIN_old, PIN_new
  2. ATM → User: ⌊DES_BK(PIN_new)⌋_16
Known attack: f(s,PIN) = ⌊DES_s(PIN)⌋_16 is strongly distinguishing in 4 queries (BK has 56 bits)
⇒ Adv can get 4 legitimate cards and break the bank key
New attack (for a simplified scenario, with the PIN encrypted alone under BK): if Adv can do unlimited PIN changes on his own card ⇒ he controls f(BK,PIN) in the argument PIN ⇒ he can guess BK