Mental Models CMSC 691R - Human-Robot Interaction March 7th, 2019 Luke Richards
What is a Mental Model? - What do you see? - What are some descriptive words you would use to describe this object?
What is a Mental Model? - “Mental models generally describe a person's representation of what the external world might be or how it might work” - Hwang 2005 - Internal representation - Humans’ expectations of the world around them - In HRI, humans project this mental model onto the robot’s capabilities
What is a Mental Model? - What do you see? - What are some descriptive words you would use to describe this object?
Which would you ask to make you dinner?
Types of Mental Models - Human’s Mental Model - Robot’s Mental Model - Shared Grounds (the overlap between the two)
Why Consider Mental Models? - Usability - Efficiency of Interaction - Capability understanding - Information Transfer - Advantages of Existing Mental Models in Design - Familiar symbols - Physical morphology that relates to the job - Emotional/cognitive support through nostalgic influences
Changing Someone’s Mental Model - Hard or easy? Hard - The proposed solution must offer significant value - e.g., redefining what a phone is (iPhone) - Clashing mental models between designer and user - Double-loop learning
Robot’s “Mental” Model - How? - Static programming - Dynamic: machine learning (see the sketch below) - For? - Environmental understanding - Emotional understanding - Various other tasks of comprehending the world around it
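Not from either assigned paper: a minimal plain-Python sketch contrasting the two “how” options above, using a made-up graspability task. All names, thresholds, and data here are hypothetical.

```python
# Hypothetical sketch: two ways a robot could acquire a "mental" model
# of whether an object is graspable.

# Static programming: the designer hard-codes the rule up front.
def graspable_static(width_cm: float, weight_kg: float) -> bool:
    return width_cm < 8.0 and weight_kg < 1.5

# Dynamic (machine learning): the rule is estimated from labeled data.
# A one-feature threshold fit by minimizing training error stands in
# for a real learner here.
def fit_threshold(samples):
    """samples: list of (width_cm, graspable_label) pairs."""
    best_t, best_err = None, float("inf")
    for t, _ in samples:
        err = sum((w < t) != label for w, label in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

data = [(4.0, True), (6.5, True), (9.0, False), (12.0, False)]
print(graspable_static(5.0, 0.4))   # True: fits the hand-coded rule
print(fit_threshold(data))          # 9.0: cutoff learned from the data
```

Either way, the resulting internal model is the robot’s side of the shared ground from the earlier slide; the difference is whether the designer or the data fixes it.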
Paper #1: ‘The role of mental model and shared grounds in human-robot interaction’ - Jung-Hoon Hwang, KangWoo Lee, and Dong-Soo Kwon, 2005 - Overview of Mental Models in HRI - Multimodal Icon Language Interface - User Study
Multimodal Icon Language Interface
Experiment - Six subjects with no experience operating a robot; only one had any programming experience - Phases: - Pre-interaction - Interaction - Post-interaction
Results - Measured the change in users’ perceived difficulty of operating the robot - Interaction can lead to assumptions about the robot that aren’t true - e.g., believing the robot could move a table
Paper #1 Discussion - What did we like? - What did we dislike? - Thoughts on technical system? - Discussion: - Did the work achieve what it set out to do? - Did they measure enough using the metric(s)? - Would this concept be more effective with technologies from 2019 as opposed to 2005?
Paper #2: ‘Situated Language Understanding with Human-like and Visualization-Based Transparency’ - Leah Perlmutter, Eric Kernfeld, and Maya Cakmak, 2016 - LUCIT System - Grounded Language System - Transparency - Video
Situated Language Understanding - Used a Bayesian network to model the environment, language, and gesture (a rough sketch of the idea follows) - Action context model - Parameterization model - Gesture model - Language model
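The paper’s LUCIT implementation is not reproduced here; the following is a minimal plain-Python sketch, under assumed object names, bearings, and probabilities, of the underlying idea: fuse language and gesture evidence over candidate objects with Bayes’ rule, treating the two channels as conditionally independent given the intended object. The ‘dust’/‘does’ confusion mirrors a participant quote later in these slides.

```python
import math

# All objects, bearings, and probabilities below are invented for
# illustration; this is not the authors' LUCIT code.

# Candidate objects with their bearing (radians) from the robot.
OBJECTS = {"cup": -0.6, "duster": 0.2, "book": 0.9}

# Language model: P(recognized word | intended object), including an
# ASR confusion between 'dust' and 'does'.
P_WORD = {
    "duster": {"dust": 0.60, "does": 0.30, "cup": 0.05, "book": 0.05},
    "cup":    {"cup": 0.80, "dust": 0.05, "does": 0.05, "book": 0.10},
    "book":   {"book": 0.80, "dust": 0.05, "does": 0.05, "cup": 0.10},
}

def gesture_likelihood(pointing_angle, obj, sigma=0.3):
    """Gesture model: Gaussian score for how well the observed pointing
    angle matches the object's true bearing."""
    d = pointing_angle - OBJECTS[obj]
    return math.exp(-0.5 * (d / sigma) ** 2)

def posterior(word, pointing_angle):
    """Posterior over objects given one recognized word and one pointing
    gesture, with a uniform action-context prior."""
    scores = {
        obj: (1.0 / len(OBJECTS))                  # prior
        * P_WORD[obj].get(word, 0.01)              # language evidence
        * gesture_likelihood(pointing_angle, obj)  # gesture evidence
        for obj in OBJECTS
    }
    z = sum(scores.values())
    return {obj: s / z for obj, s in scores.items()}

# The robot heard 'does' (a mis-recognition of 'dust') while the user
# pointed roughly toward the duster; the duster still wins.
print(posterior("does", pointing_angle=0.25))
```

The point of the sketch: gesture evidence can rescue a mis-recognized word, and transparency is what lets the user see that this fusion is happening.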
Types of Interaction - Human-like transparency - Speech - Pointing - Gaze
Types of Media Used - Screen - Virtual Reality (VR)
Hypotheses ● H1: Adding visualization-based transparency after natural transparency will positively impact communication. ● H2: By improving the user’s mental model, visualization-based transparency will improve communication even after it is removed. ● H3: The medium in which visualizations are provided will not impact communication.
User Study - Mostly fluid interactions - Conditions with and without pointing - Changing the transparency medium - Removing the medium in Phase 2, after users had seen it
Qualitative Results ● “... I learned the way that the system sees my pointing. I found it was accurate on the right side of the room but it was difficult [..] on the left side of the room..” ● “Based on my observation from this round, I see that [the robot] has a much richer perception of the environment around her, and is probably much less controlled by a human during the experiment than I had initially thought.” ● “... understood in this round what went wrong when [it] did not understand me: for instance, mis-translating ‘dust’ as ‘does’ and thereby not recognizing the action word in the command.”
Results - [quantitative result plots from the paper; figures not reproduced here]
Paper #2 Discussion - What did we like? - What did we dislike? - Thoughts on technical system? - Discussion: - Did the work achieve what it set out to do? - Did they measure enough using the metric(s)? - When should a transparency method be present? - How does this work compare to Paper 1? 2005 to 2016? - Programming language design and HRI/HCI?
References ● https://fs.blog/mental-models/ ● https://healingartsalliance.org/member-articles/its-a-party-bring-the-fiber-peggy-smith-wellness/attachment/apple/ ● Jung-Hoon Hwang, KangWoo Lee, and Dong-Soo Kwon, 2005, ‘The role of mental model and shared grounds in human-robot interaction’ ● Leah Perlmutter, Eric Kernfeld, and Maya Cakmak, 2016, ‘Situated Language Understanding with Human-like and Visualization-Based Transparency’ ● http://peperfresh-mobilesecrets.blogspot.com/2009/01/what-is-pda-and-what-its-uses.html ● https://www.interaction-design.org/literature/article/a-very-useful-work-of-fiction-mental-models-in-design ● https://robots.nu/en/robot/Aibo ● https://robots.nu/es/robot/NAO6-Humanoid-robot ● https://spectrum.ieee.org/automaton/robotics/robotics-hardware/mit-soft-robotic-fish-explores-reefs-in-fiji ● https://en.wikipedia.org/wiki/Furby ● https://en.wikipedia.org/wiki/Mental_model#Mental_models_of_dynamics_systems:_mental_models_in_system_dynamics ● https://www.toptal.com/machine-learning/machine-learning-theory-an-introductory-primer