

  1. Course outline: the four hours
     1) Language-Based Security: motivation
     2) Language-Based Information-Flow Security: the big picture
     3) Dimensions and principles of declassification
     4) Dynamic vs. static security enforcement

  2. Dimensions of Declassification in Theory and Practice

  3. Confidentiality: preventing information leaks
     • Untrusted/buggy code should not leak sensitive information
     • But some applications depend on intended information leaks
       – password checking
       – information purchase
       – spreadsheet computation
       – ...
     • Some leaks must be allowed: need information release (or declassification)

  4. Confidentiality vs. intended leaks
     • Allowing leaks might compromise confidentiality
     • Noninterference is violated
     • How do we know secrets are not laundered via release mechanisms?
     • Need for security assurance for programs with release

  5. State of the art (a cloud of definitions): conditioned noninterference, relaxed noninterference, robust declassification, admissibility, harmless flows, intransitive noninterference, partial security, delimited release, relative secrecy, selective flows, conditional noninterference, abstract noninterference, computational security, quantitative security, noninterference "until", constrained noninterference, approximate noninterference

  6. Dimensions of release: the same cloud of definitions, organized along the dimensions "What", "Who", and "Where"

  7. Principles of release (applied across the cloud of definitions and dimensions):
     • Semantic consistency
     • Conservativity
     • Monotonicity
     • Non-occlusion

  8. What
     • Noninterference [Goguen & Meseguer]: as the high input is varied, low-level outputs stay unchanged (picture: runs on high inputs h1 ≠ h2 with the same low input l produce high outputs h1', h2' but the same low output l')
     • Selective (partial) flow
       – Noninterference within high sub-domains [Cohen'78, Joshi & Leino'00]
       – Equivalence-relations view [Sabelfeld & Sands'01]
       – Abstract noninterference [Giacobazzi & Mastroeni'04,'05]
       – Delimited release [Sabelfeld & Myers'04]
     • Quantitative information flow [Denning'82, Clark et al.'02, Lowe'02]

  9. Security lattice and noninterference
     • Security lattice: e.g. the two-point lattice L ⊑ H, or a diamond with bottom ⊥, incomparable levels l and l', and top ⊤
     • Noninterference: flow from l to l' is allowed when l ⊑ l'
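The lattice check above can be sketched in a few lines. This is a minimal illustration assuming the two-point lattice L ⊑ H from the slide; the names `LEVELS` and `flows_to` are ours, not from the talk.

```python
# Two-point security lattice: L (public) is below H (secret).
LEVELS = {"L": 0, "H": 1}

def flows_to(l, l_prime):
    """Noninterference permits flow from l to l' exactly when l ⊑ l'."""
    return LEVELS[l] <= LEVELS[l_prime]

# Low data may flow up to High, but High may not flow down to Low:
# flows_to("L", "H") holds, flows_to("H", "L") does not.
```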

  10. Noninterference
     • Noninterference [Goguen & Meseguer]: as the high input is varied, low-level outputs stay unchanged
     • Language-based noninterference for a command c:
       M1 =_L M2 & ⟨M1,c⟩ ⇓ M'1 & ⟨M2,c⟩ ⇓ M'2 ⇒ M'1 =_L M'2
     • Low-memory equality: M1 =_L M2 iff M1|_L = M2|_L
       (⟨M2,c⟩ is the configuration of command c with memory M2)
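The definition can be tested on concrete runs. In this hedged sketch a command is modelled as a Python function from a memory (a dict) to a memory; `low_eq` plays the role of =_L, and the two example commands are ours, not from the slides.

```python
def low_eq(m1, m2, low_vars):
    """M1 =_L M2: the memories agree on every low variable."""
    return all(m1[x] == m2[x] for x in low_vars)

def noninterferent_on(c, m1, m2, low_vars):
    """Check the definition on one pair of runs:
    low-equal inputs must produce low-equal outputs."""
    if not low_eq(m1, m2, low_vars):
        return True  # premise fails; nothing to check
    return low_eq(c(dict(m1)), c(dict(m2)), low_vars)

def leaky(m):   # l := h  (an explicit high-to-low flow)
    m = dict(m); m["l"] = m["h"]; return m

def safe(m):    # l := l + 1  (no dependence on the secret h)
    m = dict(m); m["l"] += 1; return m
```

Running `noninterferent_on` on memories that differ only in `h` accepts `safe` and rejects `leaky`.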

  11. Average salary
     • Intention: release the average: avg := declassify((h1+...+hn)/n, low);
     • Flatly rejected by noninterference
     • If accepted, how do we know declassify does not release more than intended?
     • Essence of the problem: what is released?
     • "Only declassified data and no further information"
     • Expressions under declassify: "escape hatches"

  12. Delimited release [Sabelfeld & Myers, ISSS'03]
     • Command c with escape-hatch expressions declassify(ei, L) is secure if:
       M1 =_L M2 & ⟨M1,c⟩ ⇓ M'1 & ⟨M2,c⟩ ⇓ M'2 & ∀i. eval(M1,ei) = eval(M2,ei) ⇒ M'1 =_L M'2
       (if M1 and M2 are indistinguishable through all the ei, then the entire program may not distinguish M1 and M2)
     • Noninterference ⇒ security
     • For programs with no declassification: security ⇒ noninterference
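The condition can be checked on a single pair of runs. In this sketch, commands are functions from memories (dicts) to memories and escape hatches are functions from a memory to a value, evaluated on the *initial* memories as in the definition; all names are illustrative.

```python
def low_eq(m1, m2, low_vars):
    return all(m1[x] == m2[x] for x in low_vars)

def delimited_release_on(c, hatches, m1, m2, low_vars):
    """If M1 =_L M2 and every escape hatch e_i evaluates equally in both
    initial memories, the final memories must be low-equal."""
    if not low_eq(m1, m2, low_vars):
        return True
    if any(e(m1) != e(m2) for e in hatches):
        return True  # the hatches already distinguish M1 and M2
    return low_eq(c(dict(m1)), c(dict(m2)), low_vars)

# avg := declassify((h1 + h2)/2, low), with the single hatch (h1 + h2)/2:
def avg_cmd(m):
    m = dict(m); m["avg"] = (m["h1"] + m["h2"]) / 2; return m

def leak_cmd(m):  # avg := h1 — releases more than the hatch
    m = dict(m); m["avg"] = m["h1"]; return m

hatch = lambda m: (m["h1"] + m["h2"]) / 2
```

On memories with equal averages but different secrets, `avg_cmd` passes the check while `leak_cmd` fails it.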

  13. Average salary revisited
     • Accepted by delimited release:
       avg := declassify((h1+...+hn)/n, low);
       temp := h1; h1 := h2; h2 := temp;
       avg := declassify((h1+...+hn)/n, low);
     • Laundering attack rejected:
       h2 := h1; ...; hn := h1;
       avg := declassify((h1+...+hn)/n, low);
       (equivalent to avg := h1)
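Why the attack is rejected can be seen numerically. This sketch (our own, with n = 2 salaries) picks two initial memories that agree on the escape hatch (h1+h2)/2 yet end with different low outputs, because after h2 := h1 the declassified "average" is h1 itself.

```python
def attack(m):
    m = dict(m)
    m["h2"] = m["h1"]                   # h2 := h1
    m["avg"] = (m["h1"] + m["h2"]) / 2  # avg := declassify((h1+h2)/2, low)
    return m

hatch = lambda m: (m["h1"] + m["h2"]) / 2  # evaluated on the initial memory

m1 = {"h1": 0, "h2": 2, "avg": 0}
m2 = {"h1": 2, "h2": 0, "avg": 0}
# Both initial memories have hatch value 1.0, yet the final avg differs
# (0.0 vs 2.0): the program releases h1 itself, violating delimited release.
```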

  14. Electronic wallet
     • If there is enough money, then purchase (h: amount in wallet, k: cost, l: amount spent):
       if declassify(h ≥ k, low) then (h := h-k; l := l+k);
     • Accepted by delimited release

  15. Electronic wallet attack
     • Laundering bit-by-bit attack (h is an n-bit integer):
       l := 0;
       while (n ≥ 0) do
         k := 2^(n-1);
         if declassify(h ≥ k, low) then (h := h-k; l := l+k);
         n := n-1;
       (net effect: l := h)
     • Rejected by delimited release
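The loop above is runnable as written, modulo notation. In this sketch each `declassify(h ≥ k, low)` releases only one comparison, yet the loop transfers every bit of h into l; we use the guard n > 0 so that k = 2^(n-1) stays a positive integer for integer h.

```python
def wallet_attack(h, n):
    """Bit-by-bit laundering of the n-bit secret h into the low variable l."""
    l = 0
    while n > 0:
        k = 2 ** (n - 1)
        if h >= k:      # if declassify(h >= k, low) then (h := h-k; l := l+k)
            h -= k
            l += k
        n -= 1
    return l            # net effect: l equals the original h
```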

  16. Security type system
     • Basic idea: prevent new information from flowing into variables used in escape-hatch expressions
     • An assignment h := ... that can reach a declassify(h, low) (e.g. inside a while loop, or preceding the declassification) may not use high variables other than h
     • Theorem: c is typable ⇒ c is secure

  17. Who
     • Robust declassification in a language setting [Myers, Sabelfeld & Zdancewic'04/06]
     • Command c[•] (with a hole for attacker code) has robustness if
       ∀M1,M2,a,a'. ⟨M1,c[a]⟩ ≈_L ⟨M2,c[a]⟩ ⇒ ⟨M1,c[a']⟩ ≈_L ⟨M2,c[a']⟩
       (where a, a' range over attacks)
     • If attack a cannot distinguish between M1 and M2 through c, then no other attack a' can distinguish between M1 and M2

  18. Robust declassification: examples
     • Flatly rejected by noninterference, but secure: these programs satisfy robustness:
       [•]; x_LH := declassify(y_HH, LH)
       [•]; if x_LH then y_LH := declassify(z_HH, LH)
     • Insecure program:
       [•]; if x_LL then y_LL := declassify(z_HH, LH)
       is rejected by robustness
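The insecure example can be checked mechanically. This sketch (our own modelling, with illustrative names) fills the hole [•] with attacker code that sets the low-integrity variable x_LL; one attack enables the release and another disables it, so the two attacks disagree on whether M1 and M2 are distinguishable, and robustness fails.

```python
def prog(m, attack):
    """[•]; if x_LL then y_LL := declassify(z_HH, LH)"""
    m = dict(m)
    attack(m)                  # [•]: attacker-controlled low-integrity code
    if m["x_LL"]:
        m["y_LL"] = m["z_HH"]  # declassify(z_HH, LH)
    return m

def distinguishes(attack, m1, m2, low_vars):
    r1, r2 = prog(m1, attack), prog(m2, attack)
    return any(r1[x] != r2[x] for x in low_vars)

def robust(attacks, m1, m2, low_vars):
    """On this pair of memories: either every attack distinguishes
    M1 and M2, or none does."""
    results = [distinguishes(a, m1, m2, low_vars) for a in attacks]
    return all(results) or not any(results)

enable  = lambda m: m.__setitem__("x_LL", 1)  # a : turn the release on
disable = lambda m: m.__setitem__("x_LL", 0)  # a': turn the release off
```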

  19. Enforcing robustness
     • Security typing for declassification: both the context and the declassified data must be high-integrity:
       LH ⊢ e : HH implies LH ⊢ declassify(e, l') : LH

  20. Where
     • Intransitive (non)interference
       – assurance for intransitive flow [Rushby'92, Pinsky'95, Roscoe & Goldsmith'99]
       – nondeterministic systems [Mantel'01]
       – concurrent systems [Mantel & Sands'04]
     • To-be-declassified data must pass a downgrader [Ryan & Schneider'99, Mullins'00, Dam & Giambiagi'00, Bossi et al.'04, Echahed & Prost'05, Almeida Matos & Boudol'05]

  21. When
     • Time-complexity-based attacker
       – password matching [Volpano & Smith'00] and one-way functions [Volpano'00]
       – poly-time process calculi [Lincoln et al.'98, Mitchell'01]
       – impact on encryption [Laud'01,'03]
     • Probabilistic attacker [Di Pierro et al.'02, Backes & Pfitzmann'03]
     • Relative: specification-bound attacker [Dam & Giambiagi'00,'03]
     • Noninterference "until" [Chong & Myers'04]

  22. Principle I: Semantic consistency
     The (in)security of a program is invariant under semantics-preserving transformations of declassification-free subprograms
     • Aid in modular design
     • "What" definitions are generally semantically consistent
     • Uncovers semantic anomalies

  23. Principle II: Conservativity
     Security for programs with no declassification is equivalent to noninterference
     • Straightforward to enforce (by definition); nevertheless:
     • Noninterference "until" rejects: if h > h then l := 0

  24. Principle III: Monotonicity of release
     Adding further declassifications to a secure program cannot render it insecure
     • Or, equivalently: an insecure program cannot be made secure by removing declassification annotations
     • "Where": intransitive noninterference (à la M&S) fails it; declassification actions are observable:
       if h then declassify(l=l) else l=l

  25. Principle IV: Non-occlusion
     The presence of a declassification operation cannot mask other covert declassifications

  26. Checking the principles

  27. Declassification in practice: a case study [Askarov & Sabelfeld, ESORICS'05]
     • Use of security-typed languages for the implementation of crypto protocols
     • Mental Poker protocol by [Roca et al., 2003]
       – environment of mutual distrust
       – efficient
     • Jif language [Myers et al., 1999-2005]
       – Java extension with security types
       – Decentralized Label Model
       – support for declassification
     • Largest code written in a security-typed language up to the publication date (~4500 LOC)

  28. Security assurance / declassification

     Group  Pt.    What                            Who     Where
     I      1      Public key for signature        Anyone  Initialization
     I      2      Public security parameter       Player  Initialization
     I      3      Message signature               Player  Sending msg
     II     4-7    Protocol initialization data    Player  Initialization
     II     8-10   Encrypted permuted card         Player  Card drawing
     III    11     Decryption flag                 Player  Card drawing
     IV     12-13  Player's secret encryption key  Player  Verification
     IV     14     Player's secret permutation     Player  Verification

     Group I – naturally public data
     Group II – required by crypto protocol
     Group III – success flag pattern
     Group IV – revealing keys for verification
