How Humans Work
Semester 2, 2009
Human-Machine Interaction
  1. Important Concepts
• Norman's model of human task performance
• affordances
• constraints
• natural mapping
• visibility
• cultural expectations
• causality
• transfer effects
• mental models

Notes
These concepts are mainly due to Donald Norman. At a certain level, most of them are just common sense. But giving them names helps you focus on them, and gives you a vocabulary for talking about HCI.

  2. Norman's Model of Task Performance

Figure 1: Donald Norman's Model of Human Task Performance (a cycle from goals through intention, specification, and execution, and back through perception, interpretation, and evaluation, spanning mental and physical activity).

See Figure 1.

Notes
This model describes how people perform tasks. It basically says that, in performing a task, a human goes through these seven stages:
• Establishing the goal to be achieved
• Forming the intention to perform some action that will achieve the goal
• Specifying the action
• Actually executing the action
• Perceiving the system state
• Interpreting the state as perceived
• Evaluating the system state with respect to the goal and intentions
(Hence, it is also sometimes referred to as the "Seven Stages of Action".)

For example, suppose I'm editing a file in Emacs (as I am now), and I decide I

want to save the file I'm working on. "Saving the file" is my goal. The state of the world at this point, at least the relevant part, the Emacs window, is shown in Figure 2. (Emacs actually calls this a frame, reserving the term window for the viewports onto file buffers that a frame may contain. While at one level this is just a peculiarity of Emacs terminology, at another level it makes the point that different programs and interface toolkits may use terms like "window" and "frame" with different meanings. So beware.)

Then I form the intention to do something to save the file. Next I have to decide on (specify) what action to perform. Suppose I decide to click with the mouse on the colored floppy-disk icon in the Emacs toolbar. (How I make this decision we'll consider later.) Then I actually have to go ahead and perform that action. That action has an effect on the world: the pattern of colors and shapes in the Emacs window changes. See Figure 3. I have to perceive those changes (at the lowest level of perception), and then interpret them: the floppy-disk icon has changed from colored to "greyed-out", the ** at the left of the status bar has changed to --, and the message Wrote /Users/ljk/371/New/lec/body-humanwork-in.tex has appeared in the minibuffer at the bottom. Lastly, I have to evaluate whether (any of) these indicate that my file has in fact been saved—that is, that the action I took did indeed lead to the achievement of my goal.

My decision about what action to take will be based on a combination of what actions I can see that the interface offers and what I know about the interface (my mental model—see later). In general, for an unfamiliar interface I'll have to depend mainly on what visible options the interface presents to me; for a familiar interface I can make more use of my specific knowledge of that interface. For Emacs I know that the key sequence "Ctrl-X Ctrl-S" also saves.
(And that will be more convenient, since I won't have to remove my hand from the keyboard—but this is a somewhat separate issue.)

Note that this process can be multi-level: a top-level goal can spawn subgoals. For example, my goal might be to fit a note I'm writing onto one printed page (it's slightly too long). I could achieve that top-level goal by achieving any one of three subgoals: reword the text of the note, or change the font size, or change the page layout (e.g., narrower margins). In AI problem-solving terms, these subgoals are connected under an OR node; that is, achieving any one of the subgoals is sufficient to achieve the top-level goal. If I'm operating sequentially, then I can just try the subgoals one after another until one succeeds. In other situations, the subgoals might be connected under an AND node; that is, all the subgoals have to be achieved in order to achieve the top-level goal.
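The AND/OR decomposition just described can be sketched as a tiny goal-tree evaluator. This is an illustrative sketch only; the tuple encoding and the function names are mine, not Norman's:

```python
def achieved(node, try_goal):
    """Evaluate an AND/OR goal tree.

    A node is either a leaf goal (a string) or a tuple
    ("AND" | "OR", [subgoal nodes]).  try_goal(leaf) attempts a
    primitive goal and returns True if it succeeds.
    """
    if isinstance(node, str):               # leaf: just try it
        return try_goal(node)
    op, subgoals = node
    if op == "OR":                          # any one subgoal suffices;
        return any(achieved(g, try_goal)    # tried sequentially, stopping
                   for g in subgoals)       # at the first success
    else:                                   # "AND": every subgoal required
        return all(achieved(g, try_goal) for g in subgoals)

# The fit-the-note-on-one-page example: an OR over three subgoals.
fit_on_one_page = ("OR", ["reword the text",
                          "change the font size",
                          "change the page layout"])
```

Because `any()` short-circuits, the subgoals are tried one after another until one succeeds, matching the sequential strategy described above.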

  4. Figure 2: Screengrab of Emacs window before toolbar save.

This can happen at various stages. For example, if I'm not using a WYSIWYG ("What you see is what you get") system, then checking that my actions succeeded in achieving my goal (fitting the note on one page) may require setting up a subgoal of previewing or printing the note. Or in the file-saving example, if I'm not sure what the floppy-disk icon does, then I might set up the subgoal of finding out what it does, the action for which might be to move the mouse cursor over it to read the pop-up tool-tip. (Tool-tips probably originated with the early Macs as "balloon help".)

Gulf of Execution
• How do users know what to do to achieve their goal?
  – What actions are available?
  – Which one does what I want?
• E.g., how do I save the document I'm working on?
• The smaller the gulf the better.

Gulf of Evaluation

  5. Figure 3: Screengrab of Emacs window after toolbar save. What's changed since Figure 2? What does it mean for the achievement of my goal?

• How do users know (evaluate) whether the action they chose had the intended effect?
  – Easily visible and interpretable feedback on effect of actions
• E.g., I clicked on the "save" menu item. How do I know that my document actually got saved?
• The smaller the gulf the better.

Notes
The Gulf of Execution and Gulf of Evaluation are two aspects of users' interaction with an interface. The Gulf of Execution is about the difficulty users have in deciding what to do, which action to try. The Gulf of Evaluation is about the difficulty users have in deciding whether the action they took did in fact have the intended effect. Note that these two gulfs are not processes; they refer to the gap (in each case) between what the user wants and what the interface provides.

On the execution side, an interface that makes visible what actions it provides and makes it easy for users to choose the correct action has a small Gulf of Execution—and that's good. An interface for which it's difficult for users to choose the actions to achieve their goals has a large Gulf of Execution—and

  6. that’s bad. On the evaluation side, an interface that gives users good feedback so that they can easily know whether the action they chose had the intended effect has a small Gulf of Evaluation—and that’s good. If users don’t get good feedback about what effect their action had, then that interface has a large Gulf of Evaluation—and that’s bad. Affordances • Visible properties of objects that suggest what can be done with them— signals of use • e.g., a suitable handle affords grasping • e.g., a suitable door-plate affords pushing • Norman: When simple things need an instruction manual, even a one- word manual, then design has failed. Notes As Norman used the term, and as it is commonly used today in HCI, an affordance is a visible property of some object that suggests how it can be used. For example, I want to pick up a suitcase. I look at it and see that it has a handle that looks suitable for grasping and picking the suitcase. The handle is an affordance for picking up the suitcase. For example, I come up to a door, which I want to open. The door has a push-plate on it, which suggests that I open this door by pushing. Figure 4 shows an example of what Norman was thinking about in the above quote. Here the door has a two-word instruction manual: “PUSH” on one side, “PULL” on the other. According to Norman this is a bad design. For a well-designed door, it should be obvious how to operate it, without the need for any additional written signs. While we mostly think of affordances in terms of “visibility” to sight, they could conceivably apply to other sense modalities. If I encountered the door in the dark, and reached out and felt that it had a push-plate, then that would still suggest that I could open the door by pushing, even though I couldn’t actually see it. The term affordance does have a history. Norman took it from J.J. Gib- son, who used it to refer to those things in an environment that could be used by an organism (whether visible or not). 
Norman changed the meaning to require visibility, which is how it’s generally used in HCI. As well as being used to refer to individual signals of use, the term affordance is often also 6

used for the abstract quality: you might say that an interface overall has "good affordance".

Figure 4: A door with a two-word instruction manual.

Constraints
• Constraints on the use of objects (Norman's Lego set construction example):
  – Physical: blocks fit together in only certain ways
  – Semantic: the windscreen goes in front of the rider
  – Cultural: a red light is a tail light
  – Logical: the last part has to go in the last available position
• There is some overlap between the concepts of affordance and the various kinds of constraints
• Sometimes it's a matter of interpretation
• But usually the distinction is reasonably clear
• Cultural factors often escape notice
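Pulling the earlier ideas together, Norman's seven stages can be read as an execute/evaluate loop: choosing and executing an action spans the Gulf of Execution, while perceiving and evaluating the result spans the Gulf of Evaluation. Below is a minimal sketch; the function names and the toy "save a file" system are mine, purely for illustration:

```python
def perform_task(goal_met, choose_action, execute, perceive, max_tries=10):
    """Loop through Norman's stages until evaluation says the goal is met.

    choose_action covers intention + specification (Gulf of Execution side);
    perceive + goal_met cover perception, interpretation, and evaluation
    (Gulf of Evaluation side).
    """
    for _ in range(max_tries):
        state = perceive()                 # perception + interpretation
        if goal_met(state):                # evaluation against the goal
            return True
        execute(choose_action(state))      # intention, specification, execution
    return False

# Toy system: "saving the file" just clears a modified flag.
buffer = {"modified": True}
saved = perform_task(
    goal_met=lambda s: not s["modified"],  # goal: the file is saved
    choose_action=lambda s: "save",        # e.g. click the toolbar icon
    execute=lambda a: buffer.update(modified=False) if a == "save" else None,
    perceive=lambda: dict(buffer),         # read back the visible state
)
```

A small gulf on either side corresponds to `choose_action` being easy to write (the right action is visible) and `goal_met` being easy to write (the feedback is interpretable).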
