Stereovision and augmented reality for closed-loop control of grasping in hand prostheses
Markovic et al. (Germany), Journal of Neural Engineering, 2014
Presented by Kory Mathewson at the BLINC Journal Club, July 24, 2015
[Diagram: closed-loop control. Motor information (feed-forward) flows from low-dimensional inputs through high-level planning to low-level execution of complex tasks; sensory information (feedback) closes the loop and supports learning.]
Salient points: user focus on the task, information bandwidth, user burden.
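To make the diagram concrete, here is a minimal, hypothetical sketch (not the paper's code) of one closed-loop step: a low-dimensional feed-forward command drives low-level actuation, and a feedback signal is returned to the user so errors can be corrected. All names and values are illustrative.

```python
# Hypothetical closed-loop control step: a scalar feed-forward command is
# mapped to low-level actuation, and a feedback signal is sent back to the user.

def control_step(command, state):
    """Map a scalar feed-forward command (0..1) to a low-level aperture update."""
    closing_velocity = command * state["max_velocity"]   # high- to low-level mapping
    state["aperture"] = max(0.0, state["aperture"] - closing_velocity * state["dt"])
    return state

def feedback(state):
    """Sensory information returned to the user (here, just the aperture)."""
    return {"aperture_cm": round(state["aperture"], 2)}

state = {"aperture": 10.0, "max_velocity": 5.0, "dt": 0.1}  # illustrative values (cm, cm/s, s)
for _ in range(5):
    state = control_step(command=0.8, state=state)
    print(feedback(state))
```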
Motor information (feed-forward) can come from EEG, ECoG, foot movements, the tongue, EOG, implantable neural electrodes, or myoelectric sensors (EMG); here, multichannel surface electromyography is used.
Can we enrich the artificial controller with extra information to allow autonomous decision making?
Proposed answer: add perception through sensor fusion. Stereovision is used to automatically shape the grip pattern (grasp type and aperture), so that operational responsibility (cognitive load) is shared between the system and the user; see the sketch below.
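A minimal, hypothetical sketch of this shared (semi-automatic) control idea: stereovision-derived object geometry yields a proposed grasp type and aperture, and the user retains the final say through an override command. The grasp names, thresholds, and geometry interface are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical shared-control sketch: the system proposes a grasp from
# stereovision-estimated object geometry; the user may override it with a
# myoelectric command. All thresholds and names are illustrative.

GRASPS = ("lateral", "pinch", "tripod", "palmar")  # four grip patterns

def propose_grasp(width_cm, height_cm):
    """Rule-based preselection of grasp type and aperture from object size."""
    if width_cm < 2.0:
        return "pinch", width_cm + 1.0
    if width_cm < 5.0 and height_cm >= 3.0:
        return "tripod", width_cm + 1.5
    if height_cm < 3.0:
        return "lateral", width_cm + 1.0
    return "palmar", width_cm + 2.0

def shared_control(width_cm, height_cm, user_override=None):
    """The system preselects; the user can correct it (shared responsibility)."""
    grasp, aperture_cm = propose_grasp(width_cm, height_cm)
    if user_override in GRASPS:                 # user correction, e.g. via EMG
        grasp = user_override
    return {"grasp": grasp, "aperture_cm": aperture_cm}

print(shared_control(4.0, 10.0))                          # automatic proposal
print(shared_control(4.0, 10.0, user_override="palmar"))  # user overrides
```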
Sensory information (feedback) can be delivered through direct mechanical stimulation (vibrotactile, haptic), electrocutaneous stimulation, vibration motors, hybrid stimulation, invasive approaches, or augmented reality (AR). Here, AR provides artificial proprioception by projecting the prosthesis state into the user's field of view.
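Below is an illustrative stand-in (my own, not the paper's) for AR feedback as artificial proprioception: the prosthesis state is rendered as an overlay in the user's field of view. A text bar substitutes for the actual graphical rendering in AR glasses.

```python
# Illustrative stand-in for AR feedback: render prosthesis state (grasp type
# and aperture) as a simple overlay; a text bar replaces the AR graphics.

def render_overlay(grasp, aperture_cm, max_aperture_cm=10.0):
    """Return a text 'overlay' showing grasp type and relative hand aperture."""
    filled = int(round(10 * aperture_cm / max_aperture_cm))
    bar = "#" * filled + "-" * (10 - filled)
    return f"[{grasp:>7}] aperture {bar} {aperture_cm:.1f} cm"

print(render_overlay("tripod", 5.5))
print(render_overlay("pinch", 2.0))
```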
Study design:
1) Compare fully automatic vs. semi-automatic control.
2) Evaluate the user's ability to share control.
3) Measure the feasibility of utilizing AR feedback.
Four conditions: Auto-AR, Semi-AR, Semi-AR-RE, Semi-Vis-RE.
13 subjects × 6 series × 20 objects = 1560 trials.
Results:
1) Semi-automatic control performed better.
2) Users were able to share control quickly and effectively.
3) Users were able to use AR feedback to correct mistakes made by the automatic controller.
Discussion:
"It is likely that trained subjects will learn to rely more on feed-forward control and use feedback only when necessary." Agree?
Only four grip patterns were used; how would this generalize?
Does this reduce the burden on the user? How can we measure user burden?
How else could AR / stereovision be utilized as a feedback mechanism?
Is this the optimal integration of the manual and automatic control loops?