

  1. Human-Computer Interaction 12. Evaluating User Interface (3) Dr. Sunyoung Kim, School of Communication & Information, Rutgers University

  2. Recap: Why do evaluation?
     If we build a product, service, an interface, etc., how do we know:
     • Whether it’s any good?
     • Whether the interface (between a system and user) meets requirements and criteria?
     • Whether the users are able to complete all important tasks?
     → Test usability

  3. Recap: What is usability?
     “The effectiveness, efficiency and satisfaction with which a specified set of users can achieve a specified set of tasks in a particular environment.” (ISO)
     5 E’s:
     • Effective: Can users reach their goals? Find what they are looking for? Do what they want to do?
     • Efficient: How fast can users pursue their goals? (measure: number of steps)
     • Engaging: Will users use it again? Recommend it to others? (measures: number of revisits, satisfaction)
     • Error tolerant: Can users avoid errors and recover from them? (measures: number of errors, recovering from errors)
     • Easy to learn (measure: amount of effort to learn)

  4. Recap: When to evaluate?
     • Throughout the design process, from the first descriptions, sketches, etc. of user needs through to the final product
     • Design proceeds through iterative cycles of “design – test – redesign”
     • Evaluation is a key ingredient for a successful design
     [Diagram: paper sketches → wireframing → interactive prototyping → coding, with user testing after each stage]

  5. Recap: Cognitive Walkthrough
     A usability evaluation method in which one or more evaluators work through a series of tasks and ask a set of questions from the perspective of the user. The focus of the cognitive walkthrough is on understanding the system’s learnability for new or infrequent users.
     • To see whether or not a new user can easily carry out tasks within a given system
     • A task-specific approach to usability

  6. Recap: Three Questions to Be Asked
     The cognitive walkthrough is structured around 3 questions that you ask of every step (or action) in the task. You ask these questions before, during, and after each step (or action) of the task. If you find a problem, you make a note and then move on to the next step of the task (a small bookkeeping sketch follows this slide).
     1. Visibility: Is the control for the action visible to the user?
     2. Affordance: Is there a strong link between the control and the action? (Will the user notice that the correct action is available?)
     3. Feedback: Is feedback appropriate? (Will the user properly interpret the system response?)
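The walkthrough’s bookkeeping can be made concrete with a small sketch. The following is illustrative only (the task steps and field names are invented, not from the lecture): it records the three answers for each step of a task and then collects the steps flagged as problems.

    # Sketch: recording cognitive-walkthrough answers per task step.
    # Task steps and field names are illustrative, not a standard format.
    steps = ["open upload dialog", "select photo", "confirm upload"]

    notes = []
    for step in steps:
        notes.append({
            "step": step,
            "visibility": True,   # Is the control for the action visible?
            "affordance": True,   # Will the user notice the correct action is available?
            "feedback": False,    # Will the user properly interpret the system response?
        })

    # Any step failing one of the three questions is a usability problem to note.
    problems = [n for n in notes
                if not (n["visibility"] and n["affordance"] and n["feedback"])]
    for p in problems:
        print("Problem at step:", p["step"])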

  7. Recap: Heuristic Evaluation
     A principle or “rule of thumb” that can be used to identify usability problems in interaction design: a researcher walks through a product, compares it against the heuristics, and makes their own assessment as to whether or not the product follows these rules of thumb (the “heuristics”).
     • To see whether or not a given system has any usability flaws
     • A more holistic usability inspection
     • Developed by Jakob Nielsen (1994)
     • Can be performed on a working UI or on sketches

  8.
     • Good ergonomics
     • Personalization
     • Privacy and social conventions

  9. Recap: Heuristic Evaluation
     Advantages
     • It’s fast and cheap to conduct a heuristic evaluation, especially if you use only a single evaluator.
     • It provides good insight into possible usability problems that might damage the user experience.
     Problems
     • A single evaluator may miss issues that are not readily apparent to them. The use of more than one evaluator is recommended.
     • Heuristic analysis is subjective. It does not “prove” anything, and thus findings may be open to debate.
     • Experienced evaluators are hard to come by, which means you may need to use less skilled evaluators, whose findings may not be as valuable.

  10. Today’s agenda
     • Evaluating User Interface
       – Evaluation with users

  11. How to evaluate?
     • Asking experts
       – Experts’ opinions, inspections, walkthroughs
       – How do experts think the users will perform on a system?
     • Asking users
       – User opinions: how do users think they will perform on a system?
     1. Testing users’ performance: user testing
       – Evaluate with prototypes, portions of the UI, or the whole UI
       – How do users perform on a system, based on their performance in a test?
     2. Modeling users’ task performance
       – Simulation of user performance
       – How will users perform on a system, based on a simulation?

  12. Usability testing (user testing)
     • User testing checks whether a developed system is usable (measured on selected criteria) by the intended user population for their tasks
     • Goals & questions focus on how well users perform tasks with the product
     • Sometimes compares multiple products or prototypes
     • Focus is often on (see the metrics sketch after this slide):
       – Time to complete a task
       – Number & type of errors
     • Data collected: video & logged user interaction
     • User satisfaction questionnaires (surveys) provide data about user opinions
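To illustrate the two metrics above, here is a minimal sketch of computing time-on-task and error counts from an event log. The log format (dicts with participant, task, event, timestamp fields) is an assumption for illustration, not a standard produced by any particular tool.

    # Minimal sketch: time-on-task and error counts from a hypothetical event log.
    # The log format and field names are assumptions for illustration.
    from collections import defaultdict

    events = [
        {"participant": "P1", "task": "upload_photo", "event": "task_start", "timestamp": 0.0},
        {"participant": "P1", "task": "upload_photo", "event": "error",      "timestamp": 12.4},
        {"participant": "P1", "task": "upload_photo", "event": "task_end",   "timestamp": 48.9},
    ]

    times = {}                  # (participant, task) -> completion time in seconds
    errors = defaultdict(int)   # (participant, task) -> error count
    starts = {}

    for e in events:
        key = (e["participant"], e["task"])
        if e["event"] == "task_start":
            starts[key] = e["timestamp"]
        elif e["event"] == "error":
            errors[key] += 1
        elif e["event"] == "task_end":
            times[key] = e["timestamp"] - starts[key]

    for key, t in times.items():
        print(key, f"time-on-task: {t:.1f}s, errors: {errors[key]}")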

  13. Usability testing: steps
     1. Define the objectives
     2. List the tasks that will be performed
     3. Decide methodologies
     4. Conduct a pilot test
     5. Choose your users
     6. Create a timetable and task descriptions
     7. Choose the location
     8. Collect data
     9. Analyze the data

  14. 1. Define the objectives
     • What questions do you want to answer with the usability test?
     • What hypothesis do you want to test with the usability test?

  15. 2. List the tasks that will be performed
     • Examples: register an account, sign into your account, upload a photo, accept a friend request
     • Create an evaluation script
     • Forms to use when asking for permission to record
       – Informed consent: minimal risk, information, comprehension, voluntariness, participants’ rights, nondisclosure, confidentiality, waivers, legalese, expectations
     • Ethics: we have a responsibility
       – To make participation voluntary with informed consent, avoiding pressure to participate
       – To let participants know they can stop at any time
       – To stress that we are testing the system, not them
       – To make data as anonymous as possible

  16. 3. Decide methodologies

  17. 4. Conduct a pilot test
     • To make sure everything is ready prior to conducting a usability test
     • The pilot test allows you to:
       – Test the equipment
       – Get practice for the facilitator and note-takers
       – Get a good sense of whether your questions and scenarios are clear to the participant
       – Make any last-minute adjustments

  18. 5. Choose your users
     • Who is a real user?
       – Users who reflect the different skills, domain knowledge, and system experience of the target population
     • Determine whether users will work alone or in pairs/teams
     • Number of participants (see the worked example after this slide)
     • Offering incentives
       – Thank-you letter, pay for out-of-pocket expenses, samples, gifts
     • Recruiting screeners and pre-test questionnaires
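A common rule of thumb for the number of participants, not covered on this slide, is Nielsen and Landauer’s problem-discovery model: the proportion of usability problems found with n users is roughly 1 - (1 - L)^n, where L is the average probability that a single user exposes a given problem (often cited as about 0.31, though it varies by product and task). A quick sketch:

    # Sketch of Nielsen & Landauer's problem-discovery model:
    # proportion of problems found = 1 - (1 - L)**n.
    # L = 0.31 is a commonly cited average detection rate, not a constant;
    # real values vary by product, task, and user population.
    L = 0.31
    for n in (1, 3, 5, 10, 15):
        found = 1 - (1 - L) ** n
        print(f"{n:2d} users -> ~{found:.0%} of problems found")

With L = 0.31 this reproduces the familiar claim that about five users uncover roughly 85% of the problems, which is why small test panels are often considered sufficient.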

  19. 6. Create a timetable and task descriptions
     • Length of a session
       – Decide the duration of the evaluation session: 30–90 minutes
     • Create an evaluation timetable
       – Sessions, evaluation, reporting
     • Prepare task descriptions
       – The tasks the participant will perform while interacting with the prototype during the evaluation

  20. 7. Choose the location
     • Field studies: the user’s own environment, the place where the actual system will be used
     • Controlled studies: a setting other than the user’s environment (e.g., a usability lab)

  21. 8. Collect data
     • Timing and logging actions (a minimal logging sketch follows this slide)
       – Automatic logging of keystrokes and mouse clicks
     • Usability testing tools and logging software for usability evaluations (e.g., http://www.usefulusability.com/24-usability-testing-tools/)
     • Video and audio recording
     • Eye-tracking equipment
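As an illustration of timestamped action logging, here is a minimal sketch. The event names and CSV layout are assumptions for illustration; in practice a dedicated logging tool like those linked above would capture keystrokes and clicks automatically.

    # Minimal sketch: timestamped action logging for a usability session.
    # Event names and the CSV layout are illustrative assumptions.
    import csv
    import time

    class SessionLogger:
        def __init__(self, path, participant):
            self.path = path
            self.participant = participant
            self.t0 = time.monotonic()  # session start time

        def log(self, event, detail=""):
            elapsed = time.monotonic() - self.t0  # seconds since session start
            with open(self.path, "a", newline="") as f:
                csv.writer(f).writerow(
                    [self.participant, f"{elapsed:.3f}", event, detail]
                )

    logger = SessionLogger("session_P1.csv", "P1")
    logger.log("task_start", "upload_photo")
    logger.log("click", "button:upload")
    logger.log("task_end", "upload_photo")

A log in this shape feeds directly into the time-on-task and error-count analysis sketched earlier.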

  22.–25. Testing environment: usability lab (photos of usability lab setups)

  26. Testing environment: mobile usability testing (photo)

  27. 9. Analyze the data
     • Collate the data
     • Summarize the data
       – Extract key comments from the collected data
     • Quantitative data
       – Tabulations, charts
       – Descriptive statistics: mean, median
       – Inferential statistics: tests of statistical significance (a worked sketch follows this slide)
     • Qualitative data
       – Group comments or observed problems; establish a coding scheme
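To make the quantitative side concrete, here is a minimal sketch comparing task-completion times for two prototype versions with descriptive statistics and an independent-samples t-test. The numbers are placeholders invented for illustration, not real study data.

    # Minimal sketch: descriptive and inferential statistics on
    # task-completion times (seconds). The values are placeholders.
    from statistics import mean, median
    from scipy import stats  # pip install scipy

    version_a = [48.9, 52.3, 61.0, 44.7, 58.2]
    version_b = [39.5, 41.2, 50.8, 37.9, 45.6]

    print(f"A: mean {mean(version_a):.1f}, median {median(version_a):.1f}")
    print(f"B: mean {mean(version_b):.1f}, median {median(version_b):.1f}")

    # Independent-samples t-test for a difference in mean completion time
    t, p = stats.ttest_ind(version_a, version_b)
    print(f"t = {t:.2f}, p = {p:.3f}")

With small samples like a typical usability test, report the descriptive statistics regardless; significance tests only become meaningful with enough participants per condition.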

  28. 9. Analyze the data (continued)
     Review the data to identify usability problems. Usability defects could:
     • Irritate or confuse the user
     • Make a system hard to install, learn, or use
     • Cause mental overload for the user
     • Cause poor user performance (e.g., slow task completion)
     • Violate design standards or guidelines
     • Reduce trust in the system or its credibility
     • Tend to cause repeated errors
     • Make the system hard to market

  29. The differences between mobile and desktop
     • Mobile is mobile: the context constantly changes (location, connectivity, the way users hold or handle the device)
     • Users prefer short, simple interactions on mobile
     • A wealth of new personal data is generated: GPS, pictures, friends and family, communication data between colleagues, friends, etc.
     • Mobile creates real privacy concerns: devices contain far more personal data than a desktop typically would
     • Everyone has a different device with slightly differing capabilities
     • Mobile also encompasses tablets
     • Mobile offers different forms of input
     • Mobile users have profoundly different needs than desktop users

  30. Design process

  31. New sketching/prototyping tool
     • https://aiexperiments.withgoogle.com/autodraw
     • http://www.autodraw.com

  32. Individual assignment
