Principles for secure design
Some of the slides and content are from Mike Hicks’ Coursera course, “Making secure software.”

Flawed approach: design and build software, ignoring security at first, and add security once the functional requirements are met.


  1. Supporting mechanisms. These relate identities (“principals”) to actions:
• Authentication: how can a system tell who a user is? By what we know, what we have, or what we are; using more than one of these is multi-factor authentication.
• Authorization: how can a system tell what a user is allowed to do? Access control policies define what is allowed; a mediator checks accesses against them.
• Audit-ability: how can a system tell what a user did? Retain enough info to determine the circumstances of a breach.

  2. Defining Security Requirements
• There are many processes for deciding security requirements.
• Example: general policy concerns, due to regulations/standards (HIPAA, SOX, etc.) or to organizational values (e.g., valuing privacy).
• Example: policy arising from threat modeling. Which attacks cause the greatest concern? Who are the likely adversaries, and what are their goals and methods? Which attacks have already occurred, within the organization or elsewhere on related systems?

  3. Abuse Cases
• Abuse cases illustrate security requirements: where use cases describe what a system should do, abuse cases describe what it should not do.
• Example use case: the system allows bank managers to modify an account’s interest rate.
• Example abuse case: a user is able to spoof being a manager and thereby change the interest rate on an account.

  4. Defining Abuse Cases
• Construct cases in which an adversary’s exercise of power could violate a security requirement, based on the threat model. Ask: what might occur if a security measure were removed?
• Example: a co-located attacker steals the password file and learns all user passwords. Possible if the password file is not encrypted.
• Example: a snooping attacker replays a captured message, effecting a bank withdrawal. Possible if messages have no nonce (a small amount of uniqueness/randomness, like the time of day or a sequence number); see the sketch below.
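
A minimal sketch of the nonce idea, using a monotonic sequence number; the message layout and all names here are hypothetical, and in a real protocol the sequence number would also have to be covered by the message’s integrity check (e.g., a MAC):

    /* hypothetical message carrying its own "nonce" */
    struct msg {
        unsigned long seq;     /* unique, increasing per message */
        char payload[256];
    };

    /* reject replays: a captured-and-resent message carries an
       already-seen sequence number */
    int accept_message(const struct msg* m, unsigned long* last_seen)
    {
        if (m->seq <= *last_seen)
            return 0;          /* replay (or reordering): drop it */
        *last_seen = m->seq;
        return 1;
    }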

  5. Security design principles

  6. Design Defects = Flaws
• Recall that software defects consist of both flaws and bugs: flaws are problems in the design; bugs are problems in the implementation.
• We avoid flaws during the design phase, so this phase is very important: according to Gary McGraw, 50% of security problems are flaws.

  10. Categories of Principles
• Prevention. Goal: eliminate software defects entirely. Example: the Heartbleed bug would have been prevented by using a type-safe language, like Java.
• Mitigation. Goal: reduce the harm from exploitation of unknown defects. Example: run each browser tab in a separate process, so exploitation of one tab does not yield access to data in another.
• Detection (and recovery). Goal: identify and understand an attack (and undo damage). Example: monitoring (e.g., checking expected invariants), snapshotting; a sketch follows.
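
The detection example can be made concrete: a minimal sketch of invariant monitoring, assuming a hypothetical ledger whose credits minus debits must always equal its balance; log_alert and snapshot_state are invented names:

    /* hypothetical helpers: raise an alarm, preserve state for forensics */
    void log_alert(const char* msg);
    void snapshot_state(void);

    void check_ledger_invariant(long credits, long debits, long balance)
    {
        /* expected invariant: the books balance; a violation signals
           a bug, or an attack in progress */
        if (credits - debits != balance) {
            log_alert("ledger invariant violated");
            snapshot_state();   /* aid recovery and post-mortem analysis */
        }
    }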

  11. Principles for building secure systems. General rules of thumb that, when neglected, result in design flaws:
• Security is economics
• Principle of least privilege
• Accept that threat models change
• Use fail-safe defaults
• If you can’t prevent, detect
• Use separation of responsibility
• Design security from the ground up
• Defend in depth
• Prefer conservative designs
• Account for human factors
• Proactively study attacks
• Ensure complete mediation
• Kerckhoffs’ principle

  12. “Security is economics.” You can’t afford to secure against everything, so what do you defend against? Answer: that which has the greatest “return on investment.” There are no secure systems, only degrees of insecurity.
• In practice, you need to resist a certain level of attack. Example: safes come with security-level ratings, such as “safe against safecracking tools with a 30-minute time limit.”
• Corollary: focus energy and time on the weakest link.
• Corollary: attackers follow the path of least resistance.

  13. “Principle of least privilege.” Give a program the access it legitimately needs to do its job, and nothing more.
• This doesn’t necessarily reduce the probability of failure, but it reduces the expected cost of a failure.
• Example: Unix does a bad job here. Every program gets all the privileges of the user who invoked it: run vim as root and it can do anything, when it should get access only to the file being edited.
• Example: Windows is just as bad, maybe worse. Many users run as Administrator, and many tools require running as Administrator.
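
As a contrast to the Unix default, here is a minimal sketch of privilege dropping on a Unix-like system: perform the one privileged operation, then permanently give up root. The uid/gid 65534 (“nobody”) is an assumption; a real service would use a dedicated account:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* ... do the one thing that needs root, e.g., bind to port 80 ... */

        /* then drop privileges for good; setgid must come first,
           because after setuid we may no longer change groups */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            exit(1);
        }
        /* from here on, a compromise yields only "nobody"'s privileges */
        return 0;
    }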

  14. “Use fail-safe defaults.” Things are going to break; make them break safely.
• Default-deny policies: start by denying all access, then allow only that which has been explicitly permitted (as in the sketch below).
• A crash should lead to secure behavior. Example: firewalls explicitly decide to forward, so on failure, packets don’t get through.
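
A minimal sketch of a default-deny check; the table contents and the names acl and is_allowed are illustrative only:

    #include <stddef.h>
    #include <string.h>

    typedef struct { const char* user; const char* resource; } acl_entry;

    static const acl_entry acl[] = {
        { "alice", "/reports" },
        { "bob",   "/uploads" },
    };

    int is_allowed(const char* user, const char* resource)
    {
        /* grant only on an explicit match, so an empty or missing
           table fails closed, not open */
        for (size_t i = 0; i < sizeof(acl) / sizeof(acl[0]); i++) {
            if (strcmp(acl[i].user, user) == 0 &&
                strcmp(acl[i].resource, resource) == 0)
                return 1;
        }
        return 0;   /* the default: deny */
    }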

  15. “Use separation of responsibility.” Split up privilege so that no one person or program has total power (see the sketch after this list).
• Example: the US government, with checks and balances among different branches.
• Example: a movie theater, where one employee sells tickets, another tears them, and the tickets go into a lockbox.
• Example: nuclear weapons, which require two people to authorize a launch.
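
A minimal sketch of such a two-person rule in code; is_authenticated is a hypothetical helper:

    /* hypothetical: has this user presented valid credentials? */
    int is_authenticated(int user_id);

    /* require two distinct approvers: neither alone can authorize */
    int authorize_launch(int approver_a, int approver_b)
    {
        return approver_a != approver_b
            && is_authenticated(approver_a)
            && is_authenticated(approver_b);
    }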

  17. “Defend in depth.” Use multiple, redundant protections, so that security is endangered only in the event that all of them have been breached.
• Example: multi-factor authentication, using some combination of password, image selection, USB dongle, fingerprint, iris scanner, … (more on these later).
• Example: “You can recognize a security guru who is particularly cautious if you see someone wearing both…”

  18. …a belt and suspenders

  20. “Ensure complete mediation.” Make sure your reference monitor sees every access to every object (as in the sketch below).
• Any access control system has some resource it needs to protect: who is allowed to access a file, who is allowed to post to a message board, and so on.
• Reference monitor: the piece of code that checks for permission to access a resource.
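
A minimal sketch of a reference monitor as a single choke point; every type and helper here (principal, object, check_policy, audit_log, do_read) is hypothetical:

    typedef struct principal principal;
    typedef struct object object;
    enum op { OP_READ, OP_WRITE };

    int  check_policy(const principal* who, const object* what, enum op o);
    void audit_log(const principal* who, const object* what, const char* msg);
    int  do_read(const object* what, char* buf, unsigned int n);

    /* every read of every object must come through here;
       any other path to the data means mediation is incomplete */
    int monitored_read(const principal* who, const object* what,
                       char* buf, unsigned int n)
    {
        if (!check_policy(who, what, OP_READ)) {
            audit_log(who, what, "read denied");
            return -1;
        }
        audit_log(who, what, "read allowed");   /* audit-ability, too */
        return do_read(what, buf, n);
    }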

  22. “Account for human factors” (1): “psychological acceptability.” Users must buy into the security model.
• The security of your system ultimately lies in the hands of those who use it.
• If they don’t believe in the system, or in the cost it takes to secure it, they won’t follow it.
• Example: “All passwords must have 15 characters, 3 numbers, 6 hieroglyphics, …”

  24. “Account for human factors” (2): the security system must be usable.
• The security of your system ultimately lies in the hands of those who use it.
• If it is too hard to act in a secure fashion, users won’t do it.
• Example: popup security dialogs, which users learn to click through.

  30. “Kerckhoffs’ principle.” Don’t rely on security through obscurity.
• Originally defined in the context of cryptosystems (encryption, decryption, digital signatures, etc.): a cryptosystem should remain secure even when an attacker knows all of its internal details.
• It is easier to change a compromised key than to update all code and algorithms.
• The best security is the light of day.

  33. Principles for building secure systems
Know these well:
• Security is economics
• Principle of least privilege
• Use fail-safe defaults
• Use separation of responsibility
• Defend in depth
• Account for human factors
• Ensure complete mediation
• Kerckhoffs’ principle
Self-explanatory:
• Accept that threat models change; adapt your designs over time
• If you can’t prevent, detect
• Design security from the ground up
• Prefer conservative designs
• Proactively study attacks

  34. Sandboxes. A sandbox is an execution environment that restricts what an application running in it can do.
• NaCl takes arbitrary x86 code and runs it in a sandbox in the browser. Restrictions: applications may only use a narrow API; data integrity (no reads/writes outside the sandbox); no unsafe instructions (enforced via CFI).
• Chromium runs each webpage’s rendering engine in a sandbox. Restrictions: rendering engines are limited to a narrow “kernel” API; data integrity (no reads/writes outside the sandbox, including the desktop and clipboard).

  35. Isolation

  36. Sandbox mental model. A sandbox separates untrusted code and data from trusted code and data, connected only by a narrow interface.
• Even the untrusted code needs input and output: it can access some data and make some syscalls, but all data and syscalls must be accessed via the narrow interface.
• The goal of the sandbox is to constrain what the untrusted program can execute, what data it can access, what system calls it can make, etc.

  37. Example sandboxing mechanism: seccomp
• A Linux system call facility, available since kernel 2.6.12 (2005).
• An affected process can subsequently perform only the read, write, exit, and sigreturn system calls. There is no support for the open call: the process can only use already-open file descriptors. This isolates a process by limiting its possible interactions.
• Follow-on work produced seccomp-bpf, which limits a process to a policy-specific set of system calls, subject to a policy handled by the kernel. The policy language is akin to Berkeley Packet Filters (BPF).
• Used by Chrome, OpenSSH, vsftpd, and others.
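
A minimal sketch of strict-mode seccomp on Linux; after the prctl call, only read, write, exit, and sigreturn are permitted, so the final exit has to bypass glibc’s _exit (which uses the forbidden exit_group syscall):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        char buf[64];
        ssize_t n;

        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");            /* not sandboxed if we get here */
            return 1;
        }
        /* read/write on already-open descriptors still work... */
        n = read(STDIN_FILENO, buf, sizeof(buf));
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        /* ...but any other syscall (open, socket, ...) now kills the
           process with SIGKILL, so exit with the raw exit syscall */
        syscall(SYS_exit, 0);
        return 0;                       /* not reached */
    }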

  43. Idea: Isolate the Flash player
• Receive the .swf code and save it to a file.
• Call fork to create a new process.
• In the new process, open the file.
• Call exec to run the Flash player.
• Call seccomp-bpf to compartmentalize it.
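
A minimal sketch of this sequence, with one practical adjustment: the seccomp-bpf filter is installed before exec (with no_new_privs set) so that the restrictions survive into the Flash player. install_seccomp_filter, run_flash_sandboxed, and the player path are assumptions:

    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* hypothetical: installs a seccomp-bpf syscall whitelist and sets
       no_new_privs so the filter is preserved across execve */
    void install_seccomp_filter(void);

    void run_flash_sandboxed(const char* swf_path)
    {
        pid_t pid = fork();                     /* new process for the player */
        if (pid == 0) {
            int fd = open(swf_path, O_RDONLY);  /* open before locking down */
            dup2(fd, STDIN_FILENO);             /* hand the player its input */
            install_seccomp_filter();           /* compartmentalize */
            execl("/usr/bin/flashplayer",       /* path is an assumption */
                  "flashplayer", (char*)NULL);
            _exit(127);                         /* only reached if exec failed */
        }
    }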

  44. Sandboxing as a design principle. It’s not just 3rd-party code that should be sandboxed (3rd-party binaries in NaCl, webpages in Chromium): sandbox your own code, too!
• Break up your program into modules that separate responsibilities (what you should be doing anyway).
• Give each module the least privilege it needs to do its job.
• Use the sandbox to enforce exactly what a given module can and can’t do.
• For modules of your own code, this mitigates the impact of the inevitability that your code has an exploitable bug.

  45. Case study: VSFTPD

  46. Very Secure FTPD
• FTP: the File Transfer Protocol. It was more popular before the rise of HTTP, but is still in use. In the ’90s and ’00s, FTP daemon compromises were frequent and costly, e.g., in wu-ftpd, ProFTPD, and others.
• vsftpd has a very thoughtful design, aimed to prevent and mitigate security defects, but also to achieve good performance; it is written in C.
• Written and maintained by Chris Evans since 2002. No security breaches that I know of. https://security.appspot.com/vsftpd.html

  47. VSFTPD threat model
• Clients are untrusted until authenticated.
• Once authenticated, a client gets limited trust: according to the user’s file access control policy, and only for the files being served by FTP (not others).
• Possible attack goals: steal or corrupt resources (e.g., files, planting malware), remote code injection.
• Circumstances: a client attacks the server; a client attacks another client.

  52. Defense: Secure strings

    struct mystr {
      char* PRIVATE_HANDS_OFF_p_buf;          /* normal (zero-terminated)
                                                 C string */
      unsigned int PRIVATE_HANDS_OFF_len;     /* the actual length, i.e.,
                                                 strlen(PRIVATE_HANDS_OFF_p_buf) */
      unsigned int PRIVATE_HANDS_OFF_alloc_bytes;  /* size of buffer returned
                                                      by malloc */
    };

  53. The string API hides the representation (field prefixes shortened, as on the slide):

    struct mystr {
      char* p_buf;
      unsigned int len;
      unsigned int alloc_bytes;
    };

    void private_str_alloc_memchunk(struct mystr* p_str,
                                    const char* p_src,
                                    unsigned int len) { … }

    void str_copy(struct mystr* p_dest, const struct mystr* p_src)
    {
      private_str_alloc_memchunk(p_dest, p_src->p_buf, p_src->len);
    }

Replace uses of char* with struct mystr*, and uses of strcpy with str_copy.

  57. Copy in at most len bytes from p_src into p_str:

    void private_str_alloc_memchunk(struct mystr* p_str,
                                    const char* p_src,
                                    unsigned int len)
    {
      /* Make sure this will fit in the buffer */
      unsigned int buf_needed;
      if (len + 1 < len)
      {
        bug("integer overflow");
      }
      buf_needed = len + 1;   /* consider the NUL terminator
                                 when computing space */
      if (buf_needed > p_str->alloc_bytes)
      {
        /* allocate space, if needed */
        str_free(p_str);
        s_setbuf(p_str, vsf_sysutil_malloc(buf_needed));
        p_str->alloc_bytes = buf_needed;
      }
      /* copy in p_src contents */
      vsf_sysutil_memcpy(p_str->p_buf, p_src, len);
      p_str->p_buf[len] = '\0';
      p_str->len = len;
    }
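
A minimal usage sketch built only from the calls shown above; real vsftpd call sites go through higher-level str_* helpers:

    void example(void)
    {
        struct mystr cmd  = { 0, 0, 0 };   /* empty secure strings */
        struct mystr copy = { 0, 0, 0 };

        /* bounded copy-in: the callee adds the NUL terminator and
           records both len and alloc_bytes */
        private_str_alloc_memchunk(&cmd, "USER anonymous", 14);

        /* replaces strcpy(dst, src): the length travels with the
           string, so the destination can never be overrun */
        str_copy(&copy, &cmd);
    }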

  63. Defense: Secure Stdcalls
• Common problem: error handling. Libraries assume that arguments are well-formed, while clients assume that library calls always succeed.
• Example: malloc()
• What if the argument is non-positive? We saw earlier that integer overflows can induce this behavior, and it leads to buffer overruns.
• What if the returned value is NULL? Often a dereference means a crash; on platforms without memory protection, a dereference can cause corruption.
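
A minimal sketch of the corresponding fix, in the spirit of vsftpd’s vsf_sysutil_malloc: validate the size on the way in and refuse to hand back NULL. The 100 MB bound and die() are assumptions:

    #include <stdlib.h>

    /* hypothetical helper: log the message and terminate */
    void die(const char* msg);

    void* checked_malloc(unsigned int size)
    {
        void* p;
        /* reject zero and implausibly large requests, so an upstream
           integer overflow cannot become a too-small allocation */
        if (size == 0 || size > 100u * 1024 * 1024)
            die("bad malloc size");
        p = malloc(size);
        if (p == NULL)
            die("out of memory");   /* callers never see NULL */
        return p;
    }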
