A modular theory of pronouns (and binding)
Simon Charlow (Rutgers)
NYPLW, September 25, 2017

Overview: a brief on the power of abstraction and modularity in semantic theorizing, with a focus on pronouns and the grammatical mechanisms


  1. For assignment-dependence The idea — almost embarrassing in its simplicity — is to just abstract out and modularize the core features of the standard account. In lieu of treating everything as trivially dependent on an assignment, invoke a function ρ which turns any x into a constant function from assignments into x:

    ρ := λx.λg. x  ::  a → g → a

Instead of taking on ⟦·⟧ wholesale, we'll help ourselves to a function ⊛ which allows us to perform assignment-friendly function application on demand:

    ⊛ := λm.λn.λg. m g (n g)  ::  (g → a → b) → (g → a) → g → b
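Since ρ and ⊛ are just the K and S combinators over an assignment parameter, they can be sketched directly in code. A minimal runnable sketch (Python stands in for the lambda calculus; modeling assignments as dicts from indices to entities, and the names rho/app/she0/spoke, are illustrative assumptions, not part of the talk):

```python
# rho (K): turn any value into a constant function on assignments.
def rho(x):
    return lambda g: x

# app (S): assignment-friendly function application, written m (*) n on the slide.
def app(m, n):
    return lambda g: m(g)(n(g))

# she_0 looks up index 0 in the assignment; 'spoke' is an ordinary e -> t predicate.
she0 = lambda g: g[0]
spoke = lambda x: x == "mary"          # toy extension: only Mary spoke

# [she_0 spoke] = rho(spoke) (*) she0 :: g -> t
sentence = app(rho(spoke), she0)
```

Evaluating `sentence` at an assignment that maps index 0 to Mary yields a truth value, just as in the derivation below.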

  2. Sample derivations For 'she₀ spoke':

    spoke  ::  e → t
    ρ spoke = λg. spoke  ::  g → e → t
    ⊛ (ρ spoke) = λn.λg. spoke (n g)  ::  (g → e) → g → t
    she₀ = λg. g₀  ::  g → e
    [she₀ spoke] = λg. spoke g₀  ::  g → t


  6. Sample derivations For 'John saw her₀':

    saw  ::  e → e → t
    ρ saw = λg. saw  ::  g → e → e → t
    her₀ = λg. g₀  ::  g → e
    ρ saw ⊛ her₀ = λg. saw g₀  ::  g → e → t
    John = j :: e, so ρ j = λg. j  ::  g → e
    [John saw her₀] = λg. saw g₀ j  ::  g → t

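The transitive derivation can be replayed the same way; a sketch under the same illustrative assumptions (dict assignments, toy extensions, and hypothetical names rho/app):

```python
def rho(x):
    return lambda g: x

def app(m, n):
    return lambda g: m(g)(n(g))

# saw takes its object first, then its subject, matching 'saw g0 j' above.
saw = lambda obj: lambda subj: (subj, obj) == ("john", "mary")
her0 = lambda g: g[0]
john = rho("john")                     # a non-pronoun, trivially lifted with rho

# [John [saw her_0]] = (rho(saw) (*) her0) (*) john :: g -> t
sentence = app(app(rho(saw), her0), john)
```

Note that `rho` is invoked only where composition demands it: once for the verb, once for the referential subject, never for the pronoun.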

  13. Basically It looks like you're trying to do semantics. Would you like help?

    [ ] Give me a ρ
    [x] Give me a ⊛
    [ ] Don't show me this tip again

  19. Conceptual issues dissolved First, ρ allows stuff that's not really assignment-dependent to stay that way in the lexicon. Second, because the grammar doesn't insist on composing meanings with ⟦·⟧, abstraction can be defined directly (e.g., Sternefeld 1998, 2001, Kobele 2010):

    Λᵢ := λf.λg.λx. f g^{i↦x}  ::  (g → a) → g → b → a
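The categorematic abstraction operator can be sketched in the same toy encoding; the `update` helper (computing the modified assignment g[i := x]) is a hypothetical name for illustration:

```python
# The modified assignment g[i := x], leaving the input assignment untouched.
def update(g, i, x):
    h = dict(g)
    h[i] = x
    return h

# Lambda_i := \f. \g. \x. f(g[i := x]) -- abstraction, defined directly.
def lam(i, f):
    return lambda g: lambda x: f(update(g, i, x))

# Binding a trace: [t_0 left] = \g. left(g[0]); Lambda_0 re-abstracts over it.
left = lambda x: x == "bill"
t0_left = lambda g: left(g[0])
vp = lam(0, t0_left)                   # :: g -> e -> t, no residual dependence on g[0]
```

After `lam(0, ...)`, the result ignores whatever the incoming assignment says about index 0, exactly as abstraction should.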

  20. Λᵢ := λf.λg.λx. f g^{i↦x} For 'Bill left' via subject raising:

    [t₀ left] = λg. left g₀  ::  g → t
    Λ₀ [t₀ left] = λg.λx. left x  ::  g → e → t
    Bill = b :: e
    [Bill Λ₀ [t₀ left]] = λg. left b  ::  g → t


  25. A familiar construct When we abstract out ρ and ⊛ in this way, we’re in the presence of something known to computer scientists and functional programmers as an applicative functor (McBride & Paterson 2008, Kiselyov 2015). You might also recognize ρ and ⊛ as the K and S combinators from Combinatory Logic (Curry & Feys 1958).

  26. Applicative functors An applicative functor is a type constructor F with two functions:

    ρ :: a → F a
    ⊛ :: F (a → b) → F a → F b

satisfying a few laws:

    Identity:      ρ (λx. x) ⊛ v = v
    Homomorphism:  ρ f ⊛ ρ x = ρ (f x)
    Interchange:   u ⊛ ρ x = ρ (λf. f x) ⊛ u
    Composition:   ρ (∘) ⊛ u ⊛ v ⊛ w = u ⊛ (v ⊛ w)

Basically, these laws say that ⊛ should be a kind of fancy functional application, and ρ should be a trivial way to make something fancy.
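The four laws can at least be spot-checked at sample points for the assignment applicative; a sanity check under the same toy encoding, not a proof:

```python
def rho(x):
    return lambda g: x

def app(m, n):
    return lambda g: m(g)(n(g))

g = {0: 1, 1: 2, 2: 3}                 # a sample assignment

u = lambda g: lambda x: x + g[0]       # some assignment-dependent meanings
v = lambda g: lambda x: x * g[1]
w = lambda g: g[2]

compose = lambda f: lambda h: lambda x: f(h(x))

# Identity: rho(id) (*) v == v
assert app(rho(lambda x: x), w)(g) == w(g)
# Homomorphism: rho(f) (*) rho(x) == rho(f x)
assert app(rho(lambda x: x * 2), rho(5))(g) == rho((lambda x: x * 2)(5))(g)
# Interchange: u (*) rho(x) == rho(\f. f x) (*) u
assert app(u, rho(5))(g) == app(rho(lambda f: f(5)), u)(g)
# Composition: rho(compose) (*) u (*) v (*) w == u (*) (v (*) w)
assert app(app(app(rho(compose), u), v), w)(g) == app(u, app(v, w))(g)
```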

  27. Generality Another example of an applicative functor, for sets:

    ρ x := {x}        m ⊛ n := {f x | f ∈ m, x ∈ n}

(See Charlow 2014, 2017 for more on this.) The technique is super general, and can be fruitfully applied (inter alia) to dynamics, presupposition, supplementation, (association with) focus, and scope:

    ρ x := λk. k x    m ⊛ n := λk. m (λf. n (λx. k (f x)))

(See Shan & Barker 2006, Barker & Shan 2008 for more on this.)
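Both of these applicatives can be sketched in the same style (sets as Python sets, continuations as functions on continuations; the `_set`/`_k` names are illustrative):

```python
# Set applicative: rho x = {x}; m (*) n = {f(x) | f in m, x in n}.
def rho_set(x):
    return {x}

def app_set(m, n):
    return {f(x) for f in m for x in n}

# Continuation (scope) applicative: rho x = \k. k x; app threads the continuation.
def rho_k(x):
    return lambda k: k(x)

def app_k(m, n):
    return lambda k: m(lambda f: n(lambda x: k(f(x))))
```

Feeding a scopal meaning the trivial continuation (the identity function) recovers an ordinary value.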

  28. Applicative functors compose Given applicatives F and G, the composite F (G ·) is itself an applicative:

    ρ_FG := ρ_F ∘ ρ_G  ::  a → F (G a)
    m ⊛_FG n := ρ_F (⊛_G) ⊛_F m ⊛_F n  ::  F (G (a → b)) → F (G a) → F (G b)

That is: lift the inner ⊛_G into F with ρ_F (yielding something of type F (G (a → b) → G a → G b)), then apply it to m and to n with ⊛_F. Whenever you have two applicative functors, you're guaranteed to have two more!
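The composition recipe (lift the inner ⊛ with ρ, then apply twice) can be sketched concretely, here with the assignment applicative outside and the set applicative inside; all names are illustrative:

```python
def rho_f(x):                          # outer F: assignment-dependence
    return lambda g: x

def app_f(m, n):
    return lambda g: m(g)(n(g))

def rho_g(x):                          # inner G: sets
    return {x}

def app_g(m, n):
    return {f(x) for f in m for x in n}

def rho_fg(x):
    # rho_FG = rho_F . rho_G
    return rho_f(rho_g(x))

def app_fg(m, n):
    # m (*)_FG n = rho_F(app_G) (*)_F m (*)_F n
    curried = lambda mm: lambda nn: app_g(mm, nn)
    return app_f(app_f(rho_f(curried), m), n)
```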

  29. Getting higher-order

  30. What we have The applicative-functor approach to assignment sensitivity immediately dissolves the theoretical baggage associated with the standard account:

    - ρ allows us to keep the lexicon maximally simple.
    - ⊛ liberates us from ⟦·⟧, allowing a categorematic treatment of abstraction.

All in all, this seems like a nice grammatical interface for pronouns and binding. Extra resources are invoked only when they're required for composition.

  31. What we don't have However, it seems ρ and ⊛ are no help for reconstruction or paychecks (time permitting, I'll question this point at the end, but let's run with it for now). Intuitively, both phenomena are higher-order: the referent anaphorically retrieved by the paycheck pronoun or the topicalized expression's trace is an 'intension', rather than an 'extension' (cf. Sternefeld 1998, 2001, Hardt 1999, Kennedy 2014).

    1. Johnᵢ deposited [hisᵢ paycheck]ⱼ, but Billₖ spent itⱼ.
    2. [Hisᵢ mom]ⱼ, every boyᵢ likes tⱼ.

  32. Anaphora to intensions What would it mean for a pronoun (or trace) to be anaphoric to an intension? Perhaps: the value returned at an assignment (the anaphorically retrieved meaning) is still sensitive to an assignment (i.e., intensional):

    g → g → e

Going whole hog, pronouns have a generalized, recursive type:

    pro ::= g → e | g → pro       (i.e., g → e, g → g → e, g → g → g → e, ...)

But, importantly, a unitary lexical semantics: ⟦she₀⟧ := λg. g₀.

  35. µ for higher-order pronouns Higher-order pronoun meanings require a higher-order combinator:

    µ := λm.λg. m g g  ::  (g → g → a) → g → a

(Aka the W combinator from Combinatory Logic.) µ takes an expression m that's anaphoric to an intension, and obtains an extension by evaluating the anaphorically retrieved intension m g once more against g. In other words, it turns a higher-order pronoun meaning into a garden-variety one:

    µ (λg. g₀) = λg. g₀ g       ((g → g → e) into (g → e))
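µ is a one-liner; a sketch in the same toy encoding, where the assignment itself stores an intension at index 1 (the paycheck-style intension below is a hypothetical illustration, and Python's heterogeneous dicts conveniently sidestep the typing questions raised later):

```python
# mu (W): feed the assignment in twice, turning g -> g -> a into g -> a.
def mu(m):
    return lambda g: m(g)(g)

# A higher-order pronoun meaning: it_1 = \g. g[1], where g[1] is itself an
# assignment-dependent value ('intension') stored in the assignment.
it1 = lambda g: g[1]
lowered = mu(it1)                      # = \g. g[1](g) :: g -> e

# An assignment storing the (hypothetical) intension of 'his_0 paycheck' at 1:
g = {0: "john"}
g[1] = lambda h: "paycheck-of-" + h[0]
```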

  36. Recalling our binding derivation For 'Bill left' via subject raising:

    [t₀ left] = λg. left g₀  ::  g → t
    Λ₀ [t₀ left] = λg.λx. left x  ::  g → e → t
    [Bill Λ₀ [t₀ left]] = λg. left b  ::  g → t

  37. For 'Bill Λ₀ [t₀ spent it₁]', with a higher-order it₁:

    it₁ = λg. g₁  ::  g → g → e
    µ it₁ = λg. g₁ g  ::  g → e
    ρ spent ⊛ µ it₁ = λg. spent (g₁ g)  ::  g → e → t
    [t₀ spent it₁] = λg. spent (g₁ g) g₀  ::  g → t
    Λ₀ [t₀ spent it₁] = λg.λx. spent (g₁ g^{0↦x}) x  ::  g → e → t
    [Bill Λ₀ [t₀ spent it₁]] = λg. spent (g₁ g^{0↦b}) b  ::  g → t


  42. Taking stock The derived meaning is λg. spent (g₁ g^{0↦b}) b. If the incoming assignment assigns 1 to λg. paycheck g₀ (the intension of his₀ paycheck), we're home free. Aside from the type assigned to it₁ and the invocation of µ, this derivation is exactly the same as a normal case of pronominal binding. The secret sauce is generalizing the types of pronouns (but not their lexical semantics!), and invoking µ for higher-typed pronoun instantiations.

  43. Reconstruction works the same We can pull off a similar trick for reconstruction: treat the trace as higher-order, making it anaphoric to the intension of the topicalized expression.

    1. [Hisᵢ mom]ⱼ, every boyᵢ likes tⱼ.

Use µ to make sure everything fits together, and we're done.

  44. For '[his₀ mom]₁ [every boy Λ₀ [t₀ likes t₁]]', with a higher-order t₁:

    t₁ = λg. g₁  ::  g → g → e, so µ t₁ = λg. g₁ g  ::  g → e
    [t₀ likes t₁] = λg. likes (g₁ g) g₀  ::  g → t
    [every boy Λ₀ [t₀ likes t₁]] = λg. eb (λx. likes (g₁ g^{0↦x}) x)  ::  g → t
    Λ₁ of that = λg.λn. eb (λx. likes (n g^{0↦x}) x)  ::  g → (g → e) → t
    his₀ mom = λg. mom g₀  ::  g → e, lifted with ρ to λh.λg. mom g₀  ::  g → g → e
    Result: λg. eb (λx. likes (mom x) x)  ::  g → t


  48. Another familiar construct Our grammatical interface for pronouns and binding has three pieces: ρ, ⊛, and µ. ρ and ⊛ form an applicative functor. Does this suite of three combinators also correspond to something interesting? Yes, it's a monad (Moggi 1989, Wadler 1992, 1995, Shan 2002, Giorgolo & Asudeh 2012, Charlow 2014, 2017, ...).

  50. Equivalence of definitions The usual presentation of monads is in terms of two functions, η and ≫=:

    η   :: a → T a
    ≫= :: T a → (a → T b) → T b

(satisfying certain laws, just like applicative functors). For the present case, the monad of interest is known as the Environment or Reader monad. Its η is just the same as ρ. Its ≫= is:

    ≫= := λm.λf.λg. f (m g) g  ::  (g → a) → (a → g → b) → g → b
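A sketch of η and ≫= in the same toy encoding, together with a spot-check (my observation, not from the slides) that ⊛ is derivable from them in the standard monadic way:

```python
# eta: same as rho, the trivial injection into assignment-dependence.
def eta(x):
    return lambda g: x

# bind: >>= := \m. \f. \g. f(m(g))(g)
def bind(m, f):
    return lambda g: f(m(g))(g)

# The standard derivation of the applicative's app from the monad:
# m (*) n = m >>= \f. n >>= \x. eta(f x)
def app_from_bind(m, n):
    return bind(m, lambda f: bind(n, lambda x: eta(f(x))))
```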

  51. Monads don't compose There is an extremely sad fact about monads: unlike applicatives, they do not freely compose! If you have two monads, there is no guarantee you will have a third, and no general recipe for composing monads to yield new ones. So applicatives are easy to work with in isolation. You can be confident that they will play nicely with other applicative things in your grammar. Monads, not so much.

    The moral is this: if you have got an Applicative functor, that is good; if you have also got a Monad, that is even better! And the dual of the moral is this: if you need a Monad, that is fine; if you need only an Applicative functor, that is even better! (McBride & Paterson 2008: 8)

  54. Variable-free semantics

  55. Pronouns as identity maps Jacobson proposes we stop thinking of pronouns as assignment-relative and index-oriented. Instead, she suggests we model pronouns as identity functions:

    she := λx. x  ::  e → e

How should these compose with things like transitive verbs, which are looking for an individual, not a function from individuals to individuals? Of course, this is exactly the same problem that comes up when you introduce assignment-dependent meanings! And hence it admits the exact same solution.

  57. The parallel derivations, side by side:

    Assignment-friendly: she₀ = λg. g₀ :: g → e;  ρ left = λg. left :: g → e → t;
    ρ left ⊛ she₀ = λg. left g₀ :: g → t.

    Variable-free: she = λx. x :: e → e;  ρ left = λx. left :: e → e → t;
    ρ left ⊛ she = λx. left x :: e → t.

In an important sense, then, the compositional apparatus underwriting variable-free composition is equivalent to that underwriting assignment-friendly composition!
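The point is easy to see in code: swap the assignment parameter for a plain entity and the very same combinators compose Jacobson-style meanings (names and toy extensions are again illustrative):

```python
# Same rho/app as before, but the 'environment' is a plain entity, not an assignment.
def rho(x):
    return lambda e: x

def app(m, n):
    return lambda e: m(e)(n(e))

she = lambda x: x                      # Jacobson's pronoun: the identity map
left = lambda x: x == "mary"           # toy e -> t extension

# [she left] = rho(left) (*) she :: e -> t, still awaiting a referent
she_left = app(rho(left), she)
```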

  58. [Scan of Heim & Kratzer (1998: 92), §5.2.2 'Relative Clauses, Variables, Variable Binding': a trace cannot denote a fixed individual, so it denotes an individual only relative to an assignment (e.g. ⟦t⟧^Texas = Texas), and the denotations of phrases containing traces must be assignment-relative as well. Their preliminary definition: an assignment is simply an individual.]

    Heim & Kratzer (1998: 92): ⟦t⟧ = λx. x

  59. Small caveat This is not exactly how Jacobson's system works. For one, it's overlaid on a categorial syntax. For another, Jacobson's key combinator is Z:

    Z := λf.λm.λx. f (m x) x

Does this remind you of anything? It's (a slightly shuffled version of) the ≫= operation from the Reader monad!

    ≫= := λm.λf.λg. f (m g) g
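That correspondence can be spot-checked directly; the `mom` function below is a hypothetical 'his mom'-type meaning for illustration:

```python
# Jacobson's Z := \f. \m. \x. f(m(x))(x)
def z(f):
    return lambda m: lambda x: f(m(x))(x)

# Reader bind: >>= := \m. \f. \g. f(m(g))(g)
def bind(m, f):
    return lambda g: f(m(g))(g)

likes = lambda y: lambda x: (x, y) == ("john", "mom-of-john")
mom = lambda x: "mom-of-" + x          # hypothetical 'his mom' meaning

# 'likes his mom' via Z, and via bind with the arguments shuffled:
via_z = z(likes)(mom)
via_bind = bind(mom, likes)
```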

  61. Multiple pronouns There is an important difference between assignments and individuals as reference-fixing devices. Assignments are data structures that can in principle value every free pronoun you need. But an individual can only value co-valued pronouns!

    1. She saw her.

So a variable-free treatment of cases like these will inevitably give you something like the following (I will spare you the details):

    λx.λy. saw y x

  62. Assignments, and "variables", on demand Witness the curry/uncurry isomorphisms:

    |a → b → c| = (|c|^|b|)^|a| = |c|^(|a|·|b|) = |(a × b) → c|

    curry f := λx.λy. f (x, y)        uncurry f := λ(x, y). f x y

In other words, by (iteratively) uncurrying a variable-free proposition, you end up with a dependence on a sequence (tuple) of things. Essentially, an assignment.

    uncurry (λx.λy. saw y x) = λ(x, y). saw y x = λp. saw p₁ p₀

Obversely, by iteratively currying a sequence(/tuple)-dependent proposition, you end up with a higher-order function. Essentially, a variable-free meaning.

    curry (λp. saw p₁ p₀) = curry (λ(x, y). saw y x) = λx.λy. saw y x
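The round trip can be checked concretely (`curry`/`uncurry` here are written out for pairs; names and the toy extension are illustrative):

```python
def curry(f):
    return lambda x: lambda y: f((x, y))

def uncurry(f):
    return lambda p: f(p[0])(p[1])

saw = lambda y: lambda x: (x, y) == ("mary", "john")   # toy: Mary saw John

var_free = lambda x: lambda y: saw(y)(x)   # \x. \y. saw y x
seq_style = uncurry(var_free)              # \(x, y). saw y x, i.e. \p. saw p1 p0
round_trip = curry(seq_style)              # back to the variable-free meaning
```

The pair argument of `seq_style` is playing exactly the role of a two-slot assignment.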

  63. Variable-free semantics? So variable-free semantics (can) have the same combinatorics as the variable-full semantics. This is no great surprise: they're both about compositionally dealing with "incomplete" meanings. Moreover, under the curry/uncurry isomorphisms, a variable-free proposition is equivalent to (something like) an assignment-dependent proposition. Let's call the whole thing off?

  64. Back to applicatives

  65. A bit o' type theory What is the type of an assignment function? Standardly, g ::= N → e. But we want assignment functions to harbor values of all sorts of types, for binding reconstruction and paychecks, cross-categorial topicalization, and scope reconstruction. Muskens (1995) cautions against trying to pack too much into our assignments:

    [AX1]  ∀g, n, x_α : ∃h : g[n]h ∧ h_n = x       (for all α ∈ Θ)

If, say, g → e ∈ Θ, this ends up paradoxical! AX1 requires there to be as many assignments as there are functions from assignments to individuals: |g| ≥ |g → e|.

  70. A hierarchy of assignments? We might try parametrizing assignments by the types of things they harbor:

    g_a ::= N → a

(An a-assignment is a function from indices into inhabitants of a.) This is no longer paradoxical: we have a hierarchy of assignments, much like we have a hierarchy of types.
