  1. CATEGORICAL VS. EPISODIC MEMORY FOR PITCH ACCENTS IN ENGLISH by Amelia E. Kimball, Jennifer Cole, Gary Dell & Stefanie Shattuck-Hufnagel. Presented by Ben Posner for the seminar Exemplar Theory taught by Prof. Dr. Bernd Möbius

  2. Categorical Memory vs Episodic Memory

  3. Categorical Features • Stemming from Formal Phonology • A set of abstract representations • Governed by rules and constraints • Models variable, error-laden input and output • Remembering means encoding and storing an abstract representation • Phonological abstraction

  4. What is Stored in Memory • Each phonetic instantiation of a sound unit is mapped to an abstract category • Only the category of a unit is committed to memory • Different sounds of the same category are indistinguishable in memory

  5. Predictions? • Two distinct sounds mapped to the same category will be reported as the same • Only category is retrievable

  6. Episodic Memory • Exemplar Models • Encode and remember subcategorical phonetic detail

  7. What is Stored in Memory • Episodic memories with all details • All perceived acoustic information (to some extent) • Within episodes even 'irrelevant' information is stored • Irrelevant to language understanding

  8. Prediction • Within category differences will be remembered • Even subtle differences can be perceived and later recalled

  9. Evidence • Phonological Abstraction • Exemplar Models

  10. Evidence • Phonological Abstraction • Exemplar Models

  11. Phonological Abstraction 1: Goldinger, 2007 • Perception task • Categorical perception of phonemes

  12. Goldinger, 2007 Method • Two sounds that may or may not vary in voice onset time • Participants often unable to detect differences when both sounds are within category • Short-lag memory task instead of perception task (remembering the first sound for a brief amount of time) • Perception task vs. short-lag memory task

  13. Goldinger, 2007 Results • Participants quite bad at the task • Evidence that only category is encoded and retrievable

  14. 2: Stress deafness Dupoux, Knaus, Orzechowska, Weise (2008) • Speakers of languages with fixed lexical stress • Participants do not hear a difference in unfamiliar words with differing stress patterns • In EEG studies differences are perceived but can't be recalled • Evidence that listeners do not remember irrelevant acoustic detail even though it is perceived

  15. Evidence • Phonological Abstraction • Exemplar Models

  16. Exemplar Models (1) McMurray et al., 2002 • Eye-Fixation Study • Visual World Paradigm

  17. Visual World Paradigm (1) McMurray et al., 2002 • What • Relies on Cooper (1974), Tanenhaus et al. (1995) • Gaze toward objects in the real world or a visual representation is measured during speech production/perception • How • Eye tracking in a controlled visual workspace • Why • Natural, unobtrusive • Can be used with people who cannot read/write • Can measure online processing

  18. McMurray et al., 2002 • Input signal: Beach vs Peach • Responses: Categorical • But: Eye fixations driven by subcategorical variation • Evidence that listeners are sensitive to within-category differences

  19. "At perceptual levels, acoustic information is encoded continuously, independent of phonological information" Toscano et al., 2010

  20. Exemplar Models 2: Goldinger, 1996 & Pufahl et al., 2014 • Word recognition memory • Up to one week between tests • Task: Identifying whether a word was heard before • Easier with identical background noise (even unrelated noise, e.g. barking dogs) • Evidence for episodic memories of speech that include acoustic detail of within-category variation and linguistically irrelevant variation

  21. Contradictory Evidence • For categorical Perception • Short lag memory/Voice onset differences • Stress deafness • For subcategorical Phonetic Details • Recognition Memory Tasks • Priming Tasks • Do listeners encode subcategorical detail for speech? • → This study

  22. Intonational Pitch Accent • What is that? • Accenting words by shifting the pitch rather than via other stress cues (length, loudness, etc.) • In English pitch is one part of stress patterns • In Japanese pitch alone marks accent and distinguishes word meaning • Sake ↓↗ (酒) = alcoholic beverage vs. Sake ↑↘︎ (鮭) = salmon

  23. Why Pitch Accents? 3 Reasons

  24. Why Pitch Accents? 3 Reasons • Pitch accents don't mark lexical contrasts • Lexical contrasts are categorical • They mark information status distinctions related to focus and accessibility • Focus and accessibility are gradient

  25. Why Pitch Accents? 3 Reasons • Might not be categorically perceived • Has been looked into but there is no strong evidence

  26. Why Pitch Accents? 3 Reasons • Pitch accent correlates with intensity, duration and/or F0 • But: No evidence about which of these properties listeners attend to in perceiving and interpreting pitch accents

  27. 2 Types of Variation • Phonological variation (presence vs. absence of accent) • Variation in phonetic cues to pitch accent (duration, F0 peak values) • If pitch accent detail is encoded and remembered, listeners will be sensitive to variation both in accent status and in phonetic cues • If not, only accent status will matter

  28. Study • 6 experiments • 2 sets of 3 experiments • Listeners hear two speech samples, separated either by silence or by delay and interference • Listeners have to judge whether both samples were identical • Different samples can differ in accent categorically (status) or with a subcategorical change in accent cues

  29. Stimuli • American English • Mostly voiced segments • "Beavers love building" • Twelve nouns from six sentences (e.g. beavers and building from the sentence above) • Each sentence recorded with four accent patterns for its two target words: 1&2, 1&¬2, ¬1&2, ¬1&¬2 • Accented words were resynthesised to create a bigger difference (still within the original category)

  30. Stimuli • Pitch manipulated in Praat • Pitch peak shifted 25 Hz up or down • Word duration lengthened or shortened by up to 10% (using PSOLA in Praat) • These differences are detected at about the same rate as presence/absence of pitch accent
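The slide names Praat and PSOLA but not the exact script. As an illustration only, a comparable manipulation can be sketched in Python with the praat-parselmouth library; the file names and the choice to apply both changes to one token are assumptions here, while the 25 Hz shift and the 10% duration factor come from the slide.

```python
# Sketch: shift a word's pitch peak by 25 Hz and stretch its duration by 10%
# via PSOLA resynthesis, using praat-parselmouth. The input/output file names
# are hypothetical placeholders; this is not the authors' script.
import parselmouth
from parselmouth.praat import call

sound = parselmouth.Sound("beavers_accented.wav")

# Build a Manipulation object (time step, pitch floor, pitch ceiling).
manipulation = call(sound, "To Manipulation", 0.01, 75, 600)

# Pitch manipulation: shift all pitch points up by 25 Hz.
pitch_tier = call(manipulation, "Extract pitch tier")
call(pitch_tier, "Shift frequencies", sound.xmin, sound.xmax, 25, "Hertz")
call([pitch_tier, manipulation], "Replace pitch tier")

# Duration manipulation: lengthen the whole token by 10% (relative factor 1.10).
duration_tier = call("Create DurationTier", "dur", sound.xmin, sound.xmax)
call(duration_tier, "Add point", sound.xmin, 1.10)
call(duration_tier, "Add point", sound.xmax, 1.10)
call([duration_tier, manipulation], "Replace duration tier")

# PSOLA (overlap-add) resynthesis with both manipulations applied.
resynthesized = call(manipulation, "Get resynthesis (overlap-add)")
resynthesized.save("beavers_accented_high_long.wav", "WAV")
```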

  31. Participants • 193 total participants • Native English speakers from the USA • Age 19 - 59 (mean 31, standard deviation 8.4) • Excluded participants who did not finish or were bilingual • 30 participants per experiment

  32. Procedure • Amazon Mechanical Turk • 6 different Experiments

  33. Experiments 1, 2 and 3: AX Tasks • Two words with one second of silence between them • Participants asked whether the recordings were "the exact same recording or different recordings" • Recordings were either the same (1/2 of trials) or differed in one of three ways
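For illustration only, a trial list for such an AX task might be assembled as in the sketch below; the stimulus file names are hypothetical, while the 50/50 split of "same" and "different" trials and the 1-second inter-stimulus silence follow the slide.

```python
# Sketch of assembling AX trials: half identical pairs, half pairs that differ
# in one manipulated dimension. File names are hypothetical placeholders.
import random

ISI_SECONDS = 1.0  # silence between the two recordings in a trial

# Hypothetical stimulus pairs: (baseline recording, differing counterpart).
stimulus_pairs = [
    ("beavers_accented.wav", "beavers_unaccented.wav"),
    ("building_accented.wav", "building_unaccented.wav"),
    # ... remaining target words
]

trials = []
for base, changed in stimulus_pairs:
    trials.append({"A": base, "X": base, "answer": "same"})          # identical pair
    trials.append({"A": base, "X": changed, "answer": "different"})  # differing pair

random.shuffle(trials)  # randomize presentation order
for t in trials:
    print(f"play {t['A']}, wait {ISI_SECONDS}s, play {t['X']} -> expect '{t['answer']}'")
```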

  34. Experiment 1 • Variation in accent status • Naturally produced accented recording vs. naturally produced unaccented recording of the same word

  35. Experiment 2 • Shortened and lengthened version of the same recording • Artificially shortened version against artificially lengthened version • Both resynthesised • No distinction between a natural vs. a resynthesised sample

  36. Experiment 3 • Lowered pitch peak and raised pitch peak • Artificially lowered pitch peak against artificially raised pitch peak • Both resynthesised • No distinction between a natural vs. a resynthesised sample

  37. Experiments 4, 5 & 6 • Same as 1, 2 & 3, but with delay and interference instead of silence • Four different words (exposure phase) • Followed by a tone • Then one word from the exposure phase (test token) • Question: was the test token exactly the same as the exposure version? • More difficult • Interference from other words, time delay, and increased working memory load
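Again purely as an illustration, the structure of one delayed/interference trial could be sketched as follows; file names are hypothetical, while the four-word exposure phase, the tone, and the single test token follow the slide.

```python
# Sketch of one delayed/interference trial: four exposure words, a tone, then
# a test token that is either identical to one exposure word or a manipulated
# version of it. File names are hypothetical placeholders.
import random

exposure_words = [
    "beavers_accented.wav",
    "building_unaccented.wav",
    "ladders_accented.wav",
    "weaving_unaccented.wav",
]

# Pick which exposure word the test token probes, and whether it is presented
# unchanged ("same") or as its manipulated counterpart ("different").
probed = random.choice(exposure_words)
is_same = random.random() < 0.5
test_token = probed if is_same else probed.replace(".wav", "_manipulated.wav")

trial_sequence = exposure_words + ["tone.wav", test_token]
print("presentation order:", trial_sequence)
print("correct answer:", "same" if is_same else "different")
```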

  38. Results • For the AX tasks: • Well above chance for all three contrasts • Experiment 1 = 77% • Experiment 2 = 85% • Experiment 3 = 75% • Comparable performance between the status difference and the duration/pitch changes • All three differences are about equally easy to discriminate
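As a side note on what "well above chance" means for a same/different judgement with a 50% baseline: a one-sided binomial test is one simple way to check a single accuracy figure. The trial count below is a made-up placeholder; the paper's own statistics use mixed-effects models (see slide 42).

```python
# Sketch: test whether an observed accuracy exceeds the 50% chance level of a
# two-alternative "same / different" judgement. The trial count is hypothetical.
from scipy.stats import binomtest

n_trials = 120                       # hypothetical number of AX trials
accuracy = 0.77                      # e.g. Experiment 1 group accuracy
n_correct = round(accuracy * n_trials)

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.2g}")
```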

  39. Results • With delay and interference • Still above chance • Comparable for Experiments 1 and 4 (status) • Worse for Experiments 5 and 6 than for 2 and 3 (duration and pitch)

  40. Results

                            Accent   Duration   Pitch
      AX (Exp 1, 2, 3)       77 %     85 %      75 %
      Delay (Exp 4, 5, 6)    83 %     67 %      54 %

      (Duration and pitch discrimination are worse with delay; accent status is not.)

  41. What does that mean? In Summary • Accent difference (status) is remembered after a time lag and in the presence of interference • Duration and pitch differences are detectable above chance, but less accurately remembered

  42. What does that mean? • Group effects hold when analysed with a mixed-effects logit model with random slopes and intercepts to account for individual variability • The SD of individual scores in the AX pitch task is significantly higher than the SD of scores in the AX duration task • → more variation from listener to listener in the pitch task than in the duration or accent task • This holds despite excellent discrimination of pure-tone pitch differences of the same magnitude in post-tests
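To make the SD comparison concrete, here is a small sketch of comparing between-listener spread in two tasks. The per-listener score vectors are simulated placeholders, and Levene's test is only one illustrative way to compare spread, not the statistic reported in the paper.

```python
# Sketch: compare between-listener variability (SD of individual accuracies)
# in an AX pitch task vs. an AX duration task. All numbers are simulated
# placeholders; this does not reproduce the paper's analysis.
import numpy as np
from scipy.stats import levene

# Hypothetical per-listener proportion-correct scores (30 listeners per task).
rng = np.random.default_rng(0)
pitch_scores = rng.normal(loc=0.75, scale=0.12, size=30).clip(0, 1)
duration_scores = rng.normal(loc=0.85, scale=0.05, size=30).clip(0, 1)

print("SD, AX pitch:   ", np.std(pitch_scores, ddof=1))
print("SD, AX duration:", np.std(duration_scores, ddof=1))

# One common way to ask whether two groups differ in spread.
print(levene(pitch_scores, duration_scores))
```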
