Neural network models of the mirror neuron system
Igor Farkaš
Centre for Cognitive Science, DAI FMPI, Comenius University
Grounded cognition course, 2016

Schematic view of the MNS1 model (Oztop & Arbib, 2002)
(figure: posterior and frontal cortex of the macaque brain, with canonical and mirror neuron populations)
● role of PFC in action selection not included

Neural network model (MNS1) (Oztop & Arbib, 2002)
● 2-layer perceptron for trajectory recognition (see the code sketch below)
● error BP learning used
● artificially preprocessed input: 7-dim feature vector extracted, input trajectory converted to a spatial pattern (210-dim) to be classified
● neural correlates of MN congruence identified at the hidden layer

MNS2 model (Bonaiuto, Rosta, Arbib, 2007)
● audio-visual mirror neurons added (characteristic sounds of actions)
● recurrent architecture (with BPTT training)
● ability of MNs to respond to a recently visible but currently hidden object – enabled by working memory and dynamic remapping
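A minimal NumPy sketch of the MNS1-style classifier referenced above: a 2-layer perceptron trained by error back-propagation that maps the 210-dim spatial pattern onto action (grasp) classes. The hidden-layer size, the three-class output, the softmax readout and the reading of the 210 inputs as 7 features × 30 time samples are illustrative assumptions, not details taken from Oztop & Arbib (2002).

import numpy as np

# 210-dim input pattern (assumed: 7 hand-object features x 30 time samples);
# hidden and output sizes are illustrative assumptions.
rng = np.random.default_rng(1)
n_in, n_hid, n_out = 210, 20, 3
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))

def forward(x):
    h = np.tanh(W1 @ x)                      # hidden layer (where MN congruence is analysed)
    z = W2 @ h
    p = np.exp(z - z.max()); p /= p.sum()    # softmax over grasp classes
    return h, p

def bp_step(x, target, lr=0.1):
    """One error back-propagation step (cross-entropy loss)."""
    global W1, W2
    h, p = forward(x)
    d_out = p - target                       # output-layer error
    d_hid = (W2.T @ d_out) * (1 - h ** 2)    # error back-propagated through tanh
    W2 -= lr * np.outer(d_out, h)
    W1 -= lr * np.outer(d_hid, x)

x = rng.normal(size=n_in)                    # one preprocessed trajectory pattern
bp_step(x, np.array([1.0, 0.0, 0.0]))        # e.g. target = first grasp class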
RNNPB model (Tani, Ito, Sugita, 2004)
● setup: robot arm (joint angles) and user hand positions
● Parametric Bias (PB) nodes (~ bifurcation parameter in the dynamical-systems sense) ~ mirror neurons
● goal of learning = self-organized mapping b/w PB nodes and behavioral spatio-temporal patterns
● NN: I/O = 12, hidden = 40, context = 30, PB = 4; back-propagation through time (Rumelhart et al., 1986)
● PB learning inspired by Miikkulainen (1999)

RNNPB features
● dynamical-systems approach is appealing – continuous state and action spaces
● distributed representations lend themselves to generalization (cf. Devlaminck et al., 2009)
● PB units connect execution and observation modes (a minimal code sketch follows the internal-models slide below)
● motor loop to be interpreted as premotor (rather than motor) activity
● but: artificial training scheme used – mapping b/w user's and robot's movements; proprioceptive information exploited → difficult to account for invariance in observing a motor act
● framework extended by a link to language (command understanding) (Tani & Sugita, 2005) and to "authentic" selves (Tani, 2009)

Forward and inverse models / Internal models in motor control
(figure: feedback control – (visual target, act) → next visual state; feedforward control – (current vis. state, target) → act)
● forward model
  ● predicts the next state of a dynamic system given the current state and current action
  ● causal relationship; unique (easy to learn)
● inverse model
  ● predicts the action required to move the system from the current state to a desired future state
  ● anti-causal relationship; (mostly) not unique (difficult to learn)
● forward model is crucial for biological systems (due to the visual-loop delay)
● the brain most probably exploits internal models (Kawato, 1999)
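A minimal NumPy sketch of the RNNPB idea referenced above: a recurrent net with the slide's sizes (I/O = 12, hidden = 40, context = 30, PB = 4), where recognition of an observed sequence adapts only the 4 PB values to reduce prediction error while all weights stay fixed. The tanh units, the context wiring, the weight names and the finite-difference PB update (standing in for full BPTT) are my assumptions, not Tani et al.'s implementation.

import numpy as np

I, H, C, P = 12, 40, 30, 4                  # input/output, hidden, context, PB units
rng = np.random.default_rng(0)
W_in  = rng.normal(0, 0.1, (H, I + C + P))  # input + context + PB -> hidden
W_out = rng.normal(0, 0.1, (I, H))          # hidden -> predicted next input
W_ctx = rng.normal(0, 0.1, (C, H))          # hidden -> next context state

def step(x, ctx, pb):
    """One time step: predict the next 12-dim sensory-motor vector."""
    h = np.tanh(W_in @ np.concatenate([x, ctx, pb]))
    return np.tanh(W_out @ h), np.tanh(W_ctx @ h)

def seq_error(seq, pb):
    """Prediction error over a whole sequence for a given PB vector."""
    ctx, err = np.zeros(C), 0.0
    for x, x_next in zip(seq[:-1], seq[1:]):
        y, ctx = step(x, ctx, pb)
        err += np.sum((y - x_next) ** 2)
    return err

def recognise(seq, pb, lr=0.05, eps=1e-3):
    """Observation mode: adapt only the PB values, weights stay fixed."""
    grad = np.array([(seq_error(seq, pb + eps * np.eye(P)[i]) - seq_error(seq, pb)) / eps
                     for i in range(P)])
    return pb - lr * grad                   # one PB update; iterate in practice

seq = [rng.normal(size=I) for _ in range(10)]   # a dummy observed sequence
pb = recognise(seq, np.zeros(P))

The point of the scheme is that the same PB vector that generates a movement in execution mode is recovered in observation mode, which is the sense in which the PB units play a mirror-neuron-like role.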
Link b/w MNs and forward models (Oztop, Wolpert, Kawato, 2005)
● visual feedback control mechanism:
  1. visually-guided reach
  2. action observation ~ STS–PF–F5 ~ inverse model
  3. execution of imitation action ~ F5–PF–STS ~ forward model
● cerebellum might also be involved, though – mathematical model (Miall, 2003)

Mental state inference via visual feedback / MSI features (Oztop, Wolpert, Kawato, 2005)
● 3 action types to be distinguished (prediction made before the end of the sequence)
● mathematical model used
● feature extraction in parietal cortex (control variables)
● object-centered frame of reference used – no need for any coordinate transformation?
● mental state search (sketched in code below):
  ● for a discrete (mental) state space – exhaustive
  ● for a continuous (mental) state space – stochastic gradient descent
● premotor cortex ~ forward model (MN)
● analogy to mental imagery
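A toy sketch of the continuous mental-state search: the observer simulates its own forward model under a hypothesised goal and adjusts that goal by stochastic gradient descent on the mismatch with the observed trajectory. The proportional-controller dynamics, the 2-dim goal space and the noisy finite-difference gradient estimate are assumptions for illustration; only the search scheme itself follows the slide.

import numpy as np

rng = np.random.default_rng(2)

def forward_model(goal, T=20):
    """Predicted hand trajectory when reaching toward `goal` (toy dynamics)."""
    x, traj = np.zeros(2), []
    for _ in range(T):
        x = x + 0.2 * (goal - x)                 # simple proportional controller
        traj.append(x.copy())
    return np.array(traj)

def mismatch(goal, observed):
    """Error between the simulated and the observed trajectory."""
    return np.sum((forward_model(goal, len(observed)) - observed) ** 2)

def infer_goal(observed, steps=300, lr=0.01, noise=0.05):
    """Stochastic gradient descent in a continuous mental-state (goal) space.
    For a discrete mental-state space one would instead evaluate mismatch()
    exhaustively over all candidate states."""
    goal = rng.normal(size=2)                    # initial guess of the actor's goal
    for _ in range(steps):
        probe = rng.normal(scale=noise, size=2)  # random perturbation
        delta = mismatch(goal + probe, observed) - mismatch(goal, observed)
        goal -= lr * delta * probe / noise ** 2  # noisy gradient step
    return goal

true_goal = np.array([1.0, -0.5])
observed = forward_model(true_goal)              # what the observer sees
print(infer_goal(observed))                      # should approach true_goal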
MNS2 model extension / Features of the MNS2 extended model (Bonaiuto & Arbib, 2010)
● new role for MNs suggested: monitoring the execution of one's own actions (as distinct from recognizing observed actions)
● high-level approach: action schemas modeled (Arbib, 1981) – preconditions – action – effects
● concept of reinforcement learning exploited: TD learning used (full observability assumed)
● optimal action chosen by WTA, i.e. the one with max(priority), where priority = desirability × executability (see the code sketch at the end of this section)
● accounts for rapid reorganization of the motor program after lesion in cats (Alstermark et al., 1981)

Sketch of our MNS model (Rebrová, Pecháč, Farkaš, ICDL 2013)
● view-dependent data used: 4 visual perspectives, 1 motor; visual inputs from the right camera
● 16 DoF in iCub's right arm
● bidirectional communication b/w visual and motor systems assumed (via the BAL algorithm)
● self-organizing maps (SOMs; Kohonen, 1990) used as representations

Open issues in MNS modeling
● issue of reference frames (RFr): egocentric, allocentric (observed agent- or object-centered), absolute
● in primates multiple RFr are used: LIP (retinotopic), VIP (also head-centered), F5 (hand-centered)
● for generation of a movement vector, hand and target must be in a common RFr
● does using an object-centered RFr alleviate the problem?
● selective attention can operate on an allocentric RFr (Frischen et al., 2009)
● is the need for RFr transformation task-dependent?
● if necessary, how to achieve positional and rotational invariance?
● where do MNs come from? adaptation (Arbib, ...) vs. associative hypotheses (Heyes)
● how to model the acquisition of mirror skills?
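A toy sketch of the priority-based action selection in the extended MNS2 model: desirability is learned by TD, executability is supplied separately (e.g. signalled by the self-monitoring mirror system), and a winner-take-all choice picks the schema with maximal priority = desirability × executability. The tabular TD(0) update, the action names and the numeric values are assumptions for illustration, not details of Bonaiuto & Arbib (2010).

actions = ["reach_grasp", "paw_swipe", "bite"]          # hypothetical action schemas
desirability = {a: 0.0 for a in actions}                # learned by TD
executability = {"reach_grasp": 1.0, "paw_swipe": 1.0, "bite": 0.3}

def select_action():
    """Winner-take-all over priority = desirability x executability."""
    priority = {a: desirability[a] * executability[a] for a in actions}
    return max(priority, key=priority.get)

def td_update(action, reward, value_next, gamma=0.9, alpha=0.1):
    """TD(0) update of the chosen action's desirability (full observability assumed)."""
    td_error = reward + gamma * value_next - desirability[action]
    desirability[action] += alpha * td_error

# learning phase: reaching-and-grasping gets rewarded, so it becomes most desirable
for _ in range(50):
    td_update("reach_grasp", reward=1.0, value_next=0.0)
    td_update("paw_swipe", reward=0.2, value_next=0.0)
print(select_action())                                   # -> "reach_grasp"

# after a lesion, the monitoring MNs report that grasping no longer succeeds;
# its executability drops and WTA switches to another schema without relearning
executability["reach_grasp"] = 0.05
print(select_action())                                   # -> "paw_swipe"

Keeping desirability and executability as separate factors is what allows the WTA choice to switch immediately when executability changes, which is how the model accounts for the rapid post-lesion reorganization reported by Alstermark et al. (1981).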