

CSci 5271 Introduction to Computer Security
Day 25: Usability and security
Stephen McCamant
University of Minnesota, Computer Science & Engineering

Outline
- Usability and security
- Announcements intermission
- Usable security example areas


Users are not "ideal components"
- Frustrates engineers: cannot give users instructions like a computer
  - Closest approximation: military
- Unrealistic expectations are bad for security

Most users are benign and sensible
- On the other hand, you can't just treat users as adversaries
- Some level of trust is inevitable
  - Your institution is not a prison
- Also need to take advantage of user common sense and expertise
  - A resource you can't afford to pass up

Don't blame users
- "User error" can be the end of a discussion
  - This is a poor excuse
- Almost any "user error" could be avoided with better systems and procedures

Users as rational
- Economic perspective: users have goals and pursue them
  - They're just not necessarily aligned with security
- Ignoring a security practice can be rational if the reward is greater than the risk (see the toy calculation below)

Perspectives from psychology
- Users become habituated to experiences and processes
  - Learn the "skill" of clicking OK in dialog boxes
- Heuristic factors affect perception of risk
  - Level of control, salience of examples
- Social pressures can override security rules
  - "Social engineering" attacks

User attention is a resource
- Users have limited attention to devote to security
  - Exaggeration: treat it as fixed
- If you waste attention on unimportant things, it won't be available when you need it
  - Fable of the boy who cried wolf
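The "rational to ignore" point above lends itself to a toy cost-benefit comparison. The sketch below is not from the slides; every number is invented, and it only shows the shape of the argument: the user pays the full cost of compliance, while the expected loss they personally avoid can be tiny.

    # Toy cost-benefit comparison for a single user (all numbers invented).
    MINUTES_PER_DAY_FOR_POLICY = 5        # e.g. extra logins, password rules
    WORKING_DAYS_PER_YEAR      = 250
    VALUE_PER_USER_MINUTE      = 1.0      # arbitrary units

    annual_cost_of_compliance = (MINUTES_PER_DAY_FOR_POLICY
                                 * WORKING_DAYS_PER_YEAR
                                 * VALUE_PER_USER_MINUTE)

    P_INCIDENT_IF_IGNORED    = 0.01       # yearly chance the shortcut hurts this user
    COST_TO_USER_OF_INCIDENT = 60.0       # loss borne by the user, not the organization

    annual_expected_loss = P_INCIDENT_IF_IGNORED * COST_TO_USER_OF_INCIDENT

    # 1250.0 units spent to avoid an expected 0.6 units of personal loss:
    print(annual_cost_of_compliance, annual_expected_loss)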

Research: ecological validity
- User behavior with respect to security is hard to study
- Experimental settings are not like real situations
- Subjects often:
  - Have little actually at stake
  - Expect experimenters will protect them
  - Do what seems socially acceptable
  - Do what they think the experimenters want

Research: deception and ethics
- Have to be very careful about the ethics of experiments with human subjects
  - Enforced by institutional review systems
- When is it acceptable to deceive subjects?
  - Many security problems naturally include deception

Outline
- Usability and security
- Announcements intermission
- Usable security example areas

Note to early readers
- This is the section of the slides most likely to change in the final version
- If class has already happened, make sure you have the latest slides for announcements

Outline
- Usability and security
- Announcements intermission
- Usable security example areas

Email encryption
- Technology became available with PGP in the early 90s
- Classic depressing study: "Why Johnny can't encrypt: a usability evaluation of PGP 5.0" (USENIX Security 1999)
- Still an open "challenge problem"
- Also some other non-UI difficulties: adoption, govt. policy

Phishing
- Attacker sends email appearing to come from an institution you trust
- Links to a web site where you type your password, etc.
- Spear phishing: individually targeted, can be much more effective

Phishing defenses
- Educate users to pay attention to:
  - Spelling → attackers copy from real emails
  - URL → homograph attacks
  - SSL "lock" icon → fake lock icon, or SSL-hosted attack
- Extended validation (green bar) certificates
- Phishing URL blacklists
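The "URL → homograph attacks" point is easy to demonstrate: internationalized domain names let visually identical strings name different hosts. A minimal Python sketch, with domain names made up for illustration (they are not from the slides):

    # Two domain names that can render identically: one all-ASCII, one with a
    # Cyrillic letter substituted for a Latin one (hypothetical examples).
    import unicodedata

    real = "example.com"           # Latin letters only
    fake = "ex\u0430mple.com"      # third letter is U+0430, CYRILLIC SMALL LETTER A

    print(real == fake)            # False, even though many fonts draw them the same

    for ch in sorted(set(fake) - set(real)):
        print(hex(ord(ch)), unicodedata.name(ch))

    # Browsers fall back to the Punycode ("xn--") form for suspicious names,
    # which is one of the few reliable visual cues:
    print(fake.encode("idna"))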

SSL warnings: prevalence
- Browsers will warn on SSL certificate problems (see the code sketch below)
- In the wild, most are false positives
  - foo.com vs. www.foo.com
  - Recently expired
  - Technical problems with validation
  - Self-signed certificates (HA2)
- Classic warning-fatigue danger

Older SSL warning
- [screenshot of an older browser SSL warning]

SSL warnings: effectiveness
- Early warnings fared very poorly in lab settings
- Recent browsers have a new generation of designs:
  - Harder to click through mindlessly
  - Persistent storage of exceptions
- Recent telemetry study: they work pretty well

Modern Firefox warning (1)-(3)
- [screenshots of the current Firefox warning flow]

Spam-advertised purchases
- "Replica" Rolex watches, herbal Viagra, etc.
- This business is clearly unscrupulous; if I pay, will I get anything at all?
- Empirical answer: yes, almost always
  - Not a scam, a black market
  - Importance of credit-card bank relationships

Advance fee fraud
- "Why do Nigerian Scammers say they are from Nigeria?" (Herley, WEIS 2012)
- Short answer: false positives
  - Sending spam is cheap
  - But, luring victims is expensive
  - Scammer wants to minimize victims who respond but ultimately don't pay
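Herley's false-positive argument can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the paper or the slides; all the numbers are invented, and it only shows the shape of the trade-off: replies are cheap to solicit but expensive to work, so a pitch that filters out everyone except likely payers can beat a more convincing one.

    # Toy model of Herley's argument (all numbers invented for illustration).
    EMAILS_SENT       = 1_000_000
    COST_PER_EMAIL    = 0.0001    # sending spam is cheap
    COST_PER_REPLY    = 20.0      # each respondent costs real back-and-forth effort
    REVENUE_PER_PAYER = 2_000.0

    def profit(reply_rate, pay_given_reply):
        replies = EMAILS_SENT * reply_rate
        payers = replies * pay_given_reply
        return (payers * REVENUE_PER_PAYER
                - replies * COST_PER_REPLY
                - EMAILS_SENT * COST_PER_EMAIL)

    # Plausible-sounding pitch: many replies, almost none of whom ever pay.
    print(profit(reply_rate=0.01, pay_given_reply=0.005))   # loses money
    # Obviously outlandish pitch: few replies, but most are committed victims.
    print(profit(reply_rate=0.001, pay_given_reply=0.2))    # profitable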
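Returning to the SSL-warning slides above: the certificate problems listed there (name mismatch, expiry, self-signed) are exactly what standard TLS validation rejects. A minimal Python sketch, using the intentionally broken test hosts that badssl.com publishes; the helper name try_tls is made up here:

    import socket
    import ssl

    def try_tls(host, port=443):
        # create_default_context() checks the chain of trust, the validity
        # dates, and that the certificate matches the requested hostname.
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    print(host, "OK:", tls.version())
        except ssl.SSLCertVerificationError as e:
            print(host, "certificate problem:", e.verify_message)

    try_tls("wrong.host.badssl.com")   # name mismatch, like foo.com vs. www.foo.com
    try_tls("expired.badssl.com")      # certificate past its validity period
    try_tls("self-signed.badssl.com")  # no trusted chain (cf. the HA2 certificates)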

Trusted UI
- Tricky to ask users to make trust decisions based on UI appearance
  - Lock icon in browser, etc.
- Attacking code can draw lookalike indicators
  - Lock favicon
  - Picture-in-picture attack

Smartphone app permissions
- Smartphone OSes have more fine-grained per-application permissions
  - Access to GPS, microphone
  - Access to address book
  - Make calls
- Phone also has more tempting targets
- Users install more apps from small providers

Permissions manifest
- Android approach: present a list of requested permissions at install time
- Can be a hard question to answer hypothetically
  - Users may have a hard time understanding the implications
- User choices seem to put low value on privacy

Time-of-use checks
- iOS approach: for a narrower set of permissions, ask on each use
- Proper context makes decisions clearer
- But, have to avoid asking about common things
- iOS app store is also more closely curated

Trusted UI for privileged actions
- Trusted UI works better when asking permission (e.g., Oakland '12)
- Say, a "take picture" button in a phone app
  - Requested by app
  - Drawn and interpreted by OS
- OS well positioned to be sure the click is real
- Little value to attacker in drawing a fake button
