Signal description: Process or Gibbs? I. General introduction

Contributors: S. Berghout (Leiden), A. van Enter (Groningen), S. Gallo (São Carlos), G. Maillard (Aix-Marseille), E. Verbitskiy (Leiden)


Gibbs approach: Quasilocal measures

A specification $\gamma$ is quasilocal if for every $\epsilon > 0$ there exist $n, m \ge 0$ such that

  $$\bigl|\, \gamma\bigl(\omega_0 \mid \omega_{-n}^{m}\,\sigma_{[-n,m]^c}\bigr) - \gamma\bigl(\omega_0 \mid \omega_{\{0\}^c}\bigr) \,\bigr| < \epsilon \qquad (2)$$

for every $\sigma, \omega$.
- (2) is equivalent to the continuity of $\gamma(\omega_0 \mid \cdot\,)$ in the product topology
- Gibbs specifications are, in addition, strongly non-null
A probability measure $\mu$ is a quasilocal (Gibbs) measure if it is consistent with some quasilocal (Gibbs) specification.
The signal $\mu$ is then thought of as non-causal, or as having anticipation.
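
A worked special case, not on the original slide but standard: a specification of finite range $R$, i.e. one in which $\gamma(\omega_0 \mid \omega_{\{0\}^c})$ depends only on $\omega_{-R}^{-1}$ and $\omega_{1}^{R}$, is automatically quasilocal, because for $n, m \ge R$ the two conditionings in (2) agree on every coordinate that matters:

  $$n, m \ge R \;\Longrightarrow\; \gamma\bigl(\omega_0 \mid \omega_{-n}^{m}\,\sigma_{[-n,m]^c}\bigr) = \gamma\bigl(\omega_0 \mid \omega_{\{0\}^c}\bigr) \quad \text{for all } \sigma, \omega,$$

so the left-hand side of (2) vanishes. Quasilocality relaxes this: the influence of the configuration outside $[-n,m]$ must become uniformly small, though not necessarily zero.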

Comparison: Questions, questions

Are signals best described as processes or as Gibbs measures? Both setups give complementary information:
- Processes: ergodicity, coupling, renewal, perfect simulation
- Fields: Gibbs theory
Are the two setups mathematically equivalent? Is every regular $g$-measure Gibbs, and vice versa?
Which is more efficient, one-sided or two-sided conditioning? Efficiency vs. interpretation?

History: Prehistory

- Onicescu and Mihoc (1935): chains with complete connections
  - Existence of limit measures in non-null cases
  - Led to random systems with complete connections (book by Iosifescu and Grigorescu, Cambridge, 1990)
- Doeblin and Fortet (1937):
  - Taxonomy: chains of type A or B, depending on continuity and non-nullness
  - Existence of invariant measures
  - Suggested uniqueness of invariant measures (coupling!); completed by Iosifescu (1992)
- Harris (1955): chains of infinite order
  - Framework of D-ary expansions
  - Weaker uniqueness condition
  - Cut-and-paste coupling

History: More recent history

- Keane (1972): $g$-measures ($g$-functions), existence and uniqueness
- Ledrappier (1974): variational principle
- Walters (1975): relation with transfer-operator theory
- Lalley (1986): list processes, regeneration, uniqueness
- Berbee (1987): uniqueness
- Kalikow (1990):
  - random Markov processes
  - uniform martingales
- Berger, Bramson, Bressaud, Comets, Dooley, Fernández, Ferrari, Galves, Grigorescu, Hoffman, Hulse, Iosifescu, Johansson, Lacroix, Maillard, Öberg, Pollicott, Quas, Stenflo, Sidoravicius, Theodorescu, ...

Differences with Markov: Invariance

- Invariant measures live on the space of trajectories, not just on the alphabet $A$:

  $$\mu(x_0) = \sum_{y} g(x_0 \mid y)\,\mu(y) \quad\longrightarrow\quad \mu(x_0) = \int g\bigl(x_0 \mid x_{-\infty}^{-1}\bigr)\,\mu\bigl(dx_{-\infty}^{-1}\bigr)$$

- Conditioning is on measure-zero events: $\bigl\{X_{-\infty}^{-1} = x_{-\infty}^{-1}\bigr\}$
- Importance of "$\mu$-almost surely"
- Properties must be essential, i.e. survive measure-zero changes
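
Not on the original slides: a minimal Python sketch contrasting the two invariance equations. The Markov case is a finite eigenvector problem; for an infinite-order $g$ the integral over pasts can only be approximated, here by crude Monte Carlo over long histories generated by the chain itself. The transition matrix P, the toy $g$-function and the lengths K and N are illustrative assumptions, not objects from the talk.

import numpy as np

# Markov case: the invariance equation mu(x0) = sum_y g(x0 | y) mu(y) is a
# finite eigenvector problem for the transition matrix.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])              # P[y, x0] = g(x0 | y); rows sum to 1

eigvals, eigvecs = np.linalg.eig(P.T)
mu = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
mu = mu / mu.sum()                      # invariant law on the alphabet A = {0, 1}
print("Markov invariant measure:", mu)

# Infinite-order case: g(x0 | .) depends on the whole past, so the invariance
# equation integrates over trajectories.  Crude Monte Carlo illustration:
# generate long pasts by running the (fast-mixing) toy chain, then average
# g(1 | past) to approximate mu(x0 = 1).
rng = np.random.default_rng(0)

def g(x0, past):
    """Toy g-function: geometrically weighted frequency of 1's in the past."""
    w = 0.5 ** np.arange(1, len(past) + 1)          # weight of x_{-k} is 2^{-k}
    p1 = 0.1 + 0.8 * float(np.dot(w, past[::-1])) / float(w.sum())
    return p1 if x0 == 1 else 1.0 - p1

def sample_past(length):
    """Run the chain forward for `length` steps; return the symbols, newest last."""
    past = [int(rng.integers(0, 2))]
    for _ in range(length - 1):
        past.append(int(rng.random() < g(1, past)))
    return past

K, N = 200, 2000        # truncation length and Monte Carlo size (assumptions)
est = np.mean([g(1, sample_past(K)) for _ in range(N)])
print("estimated mu(x0 = 1) for the infinite-order chain:", est)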

Differences with Markov: Phase diagrams

There may be several invariant measures:
- Not due to lack of ergodicity (transitions are non-null)
- Different histories can lead to different invariant measures
- Analogous to statistical mechanics: many invariant measures = first-order phase transitions
The issues are, then, similar to those of statistical mechanics:
- How many invariant measures? (= phase diagrams)
- Properties of the measures? (mixing, extremality, ergodicity)
- Uniqueness criteria
- Simulation?

Formal definitions: Transition probabilities

Basic structure:
- Space $A^{\mathbb Z}$ with the product $\sigma$-algebra $\mathcal F$ (and the product topology)
- For $\Lambda \subset \mathbb Z$, $\mathcal F_\Lambda = \{\text{events depending on } \omega_\Lambda\} \subset \mathcal F$

Definition
(i) A family of transition probabilities is a measurable function

  $$g\bigl(\,\cdot \mid \cdot\,\bigr) : A \times A^{\{\dots,-2,-1\}} \longrightarrow [0,1]
  \qquad\text{such that}\qquad \sum_{x_0 \in A} g\bigl(x_0 \mid x_{-\infty}^{-1}\bigr) = 1$$

(ii) $\mu$ is a process consistent with $g(\,\cdot \mid \cdot\,)$ if

  $$\mu(\{x_0\}) = \int g\bigl(x_0 \mid y_{-\infty}^{-1}\bigr)\,\mu(dy)$$
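
Not from the slide: a hedged Python sketch of the objects just defined, with the infinite past represented by a long finite truncation. It builds one concrete, entirely made-up family $g(x_0 \mid x_{-\infty}^{-1})$ with exponentially decaying dependence on the past and checks the normalization condition; the alphabet, the decay rate and the truncation length are assumptions for illustration.

import numpy as np

A = (0, 1, 2)                     # finite alphabet (assumption for illustration)
rng = np.random.default_rng(1)

def g(x0, past):
    """Toy transition probabilities g(x0 | x_{-1} x_{-2} ...).

    `past` lists the history to the left of the origin, most recent symbol
    first.  Dependence on x_{-k} decays like 2^{-k}, so this g is continuous.
    """
    weights = 2.0 ** -np.arange(1, len(past) + 1)
    score = np.array([sum(w for w, s in zip(weights, past) if s == a) for a in A])
    probs = (0.2 + score) / (0.2 * len(A) + weights.sum())   # strictly positive
    return float(probs[A.index(x0)])

# Normalization check: sum_{x0 in A} g(x0 | past) = 1 for every past.
for _ in range(5):
    past = [int(s) for s in rng.choice(A, size=50)]
    total = sum(g(x0, past) for x0 in A)
    assert abs(total - 1.0) < 1e-9, total
print("normalization over the alphabet: OK")

# Consistency, mu({x0}) = integral of g(x0 | y_{-infty}^{-1}) mu(dy), is an
# equation in the unknown measure mu; the following slides are about
# constructing and counting its solutions.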

General results (no hypotheses on $g$)

Let
- $\mathcal G(g) = \{\mu \text{ consistent with } g\}$
- $\mathcal F_{-\infty} := \bigcap_{k \in \mathbb Z} \mathcal F_{(-\infty,k]}$ (tail $\sigma$-algebra)

Theorem
(a) $\mathcal G(g)$ is a convex set
(b) $\mu$ is extreme in $\mathcal G(g)$ iff $\mu$ is trivial on $\mathcal F_{-\infty}$ (that is, $\mu(A) = 0$ or $1$ for every $A \in \mathcal F_{-\infty}$)
(c) $\mu$ is extreme in $\mathcal G(g)$ iff $\displaystyle\lim_{n \to \infty}\, \sup_{B \in \mathcal F_{(-\infty,-n]}} \bigl|\mu(A \cap B) - \mu(A)\,\mu(B)\bigr| = 0$ for every $A \in \mathcal F$
(d) Each $\mu \in \mathcal G(g)$ is determined by its restriction to $\mathcal F_{-\infty}$
(e) If $\mu \ne \nu$ are extreme in $\mathcal G(g)$, then $\mu$ and $\nu$ are mutually singular on $\mathcal F_{-\infty}$

General results: Construction through limits

Let $P_{[m,n]}$ be the "window transition probabilities"

  $$g_{[m,n]}\bigl(x_m^n \mid x_{-\infty}^{m-1}\bigr) := g\bigl(x_n \mid x_{-\infty}^{n-1}\bigr)\, g\bigl(x_{n-1} \mid x_{-\infty}^{n-2}\bigr) \cdots g\bigl(x_m \mid x_{-\infty}^{m-1}\bigr)$$

Theorem
If $\mu$ is extreme in $\mathcal G(g)$, then for $\mu$-almost all $y \in A^{\mathbb Z}$,

  $$g_{[-\ell,\ell]}\bigl(x_m^n \mid y_{-\infty}^{-\ell-1}\bigr) \;\xrightarrow[\ell \to \infty]{}\; \mu\bigl(\{x_m^n\}\bigr) \qquad \text{for all } x_m^n \in A^{[m,n]}$$

(no hypotheses on $g$)
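
Not from the slide: a small Python sketch of the window probabilities, chaining the one-symbol $g$ from left to right through the window and checking that the result is again normalized over the window. The toy one-step $g$ and the chosen past are assumptions for illustration only.

import itertools

A = (0, 1)

def g(x0, past):
    """Toy one-symbol transition probability (depends only on x_{-1} here)."""
    p1 = 0.7 if past and past[0] == 1 else 0.3     # past[0] is x_{-1}
    return p1 if x0 == 1 else 1.0 - p1

def g_window(block, past):
    """g_[m,n](x_m^n | x_{-infty}^{m-1}) = product of g(x_j | x_{-infty}^{j-1}).

    `block` is (x_m, ..., x_n); `past` is the history below m, most recent first.
    """
    prob, history = 1.0, list(past)
    for x in block:                    # sweep the window from left to right
        prob *= g(x, history)
        history.insert(0, x)           # x becomes the most recent symbol
    return prob

past = [1, 0, 1, 1]                    # a frozen past, most recent symbol first
total = sum(g_window(block, past) for block in itertools.product(A, repeat=3))
print("sum over all length-3 windows:", total)     # equals 1 up to rounding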

General results: Regular g-measures

Definition
A measure $\mu$ on $A^{\mathbb Z}$ is regular (continuous) if it is consistent with regular (continuous) transition probabilities.

Theorem (Palmer, Parry and Walters, 1977)
$\mu$ is a regular $g$-measure if and only if the sequence $\mu\bigl(\omega_0 \mid \omega_{-n}^{-1}\bigr)$ converges uniformly in $\omega$ as $n \to \infty$.

Theorem
If $g$ is regular (continuous), then every limit $\lim_j g_{[-\ell_j,\,\ell_j]}\bigl(\,\cdot \mid y_{-\infty}^{-\ell_j-1}\bigr)$ defines a $g$-measure.

Uniqueness: Continuity rates

Uniqueness conditions combine continuity and non-nullness hypotheses.
- The continuity rate of $g$:

  $$\mathrm{var}_k(g) := \sup_{x,y}\, \Bigl|\, g\bigl(x_0 \mid x_{-\infty}^{-1}\bigr) - g\bigl(x_0 \mid x_{-k}^{-1}\,y_{-\infty}^{-k-1}\bigr) \Bigr|$$

- The log-continuity rate of $g$:

  $$\mathrm{var}_k(\log g) := \sup_{x,y}\, \log \frac{g\bigl(x_0 \mid x_{-\infty}^{-1}\bigr)}{g\bigl(x_0 \mid x_{-k}^{-1}\,y_{-\infty}^{-k-1}\bigr)}$$

- The $\Delta$-rate of $g$:

  $$\Delta_k(g) := \inf_{x,y}\, \sum_{x_0} g\bigl(x_0 \mid x_{-\infty}^{-1}\bigr) \wedge g\bigl(x_0 \mid x_{-k}^{-1}\,y_{-\infty}^{-k-1}\bigr)$$
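
Not on the slides: a Python sketch that estimates the three rates numerically for a toy $g$ with geometric memory. Since the suprema and infima run over all infinite pasts, the code only samples pairs of truncated pasts agreeing on their last $k$ symbols, so it returns Monte Carlo bounds rather than exact values; the toy $g$, the truncation depth and the sample size are assumptions.

import numpy as np

rng = np.random.default_rng(2)
A = (0, 1)
DEPTH = 60                 # truncation depth standing in for an infinite past

def g(x0, past):
    """Toy g with geometric memory, bounded away from 0 and 1."""
    w = 2.0 ** -np.arange(1, len(past) + 1)
    p1 = 0.1 + 0.8 * float(np.dot(w, past)) / float(w.sum())
    return p1 if x0 == 1 else 1.0 - p1

def rates(k, samples=500):
    """Monte Carlo lower bounds for var_k(g) and var_k(log g), and an upper
    bound for Delta_k(g): sample pairs of pasts agreeing on the last k symbols."""
    var_g, var_log, delta = 0.0, 0.0, 1.0
    for _ in range(samples):
        x = rng.integers(0, 2, DEPTH)            # x[0] is x_{-1}, x[1] is x_{-2}, ...
        y = x.copy()
        y[k:] = rng.integers(0, 2, DEPTH - k)    # agree only on x_{-1}, ..., x_{-k}
        gx = np.array([g(a, x) for a in A])
        gy = np.array([g(a, y) for a in A])
        var_g = max(var_g, float(np.max(np.abs(gx - gy))))
        var_log = max(var_log, float(np.max(np.abs(np.log(gx) - np.log(gy)))))
        delta = min(delta, float(np.sum(np.minimum(gx, gy))))
    return var_g, var_log, delta

for k in (1, 2, 4, 8):
    print(k, rates(k))   # for this g, var_k and var_k(log g) shrink like 2^{-k},
                         # while Delta_k(g) approaches 1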

Uniqueness: Non-nullness hypotheses

- $g$ is weakly non-null if

  $$\sum_{x_0} \,\inf_{y}\, g\bigl(x_0 \mid y_{-\infty}^{-1}\bigr) > 0$$

- $g$ is (strongly) non-null if

  $$\inf_{x_0,\,y}\, g\bigl(x_0 \mid y_{-\infty}^{-1}\bigr) > 0$$

[Doeblin-Fortet:
- Chain of type A: $g$ continuous and weakly non-null
- Chain of type B: $g$ log-continuous and non-null]

Criteria: Uniqueness criteria (selected)

- Doeblin-Fortet (1937; completed by Iosifescu, 1992): $g$ non-null and

  $$\sum_{k} \mathrm{var}_k(g) < \infty$$

- Harris (1955): $g$ weakly non-null and

  $$\sum_{n \ge 1}\; \prod_{k=1}^{n} \Bigl(1 - \tfrac{|E|}{2}\,\mathrm{var}_k(g)\Bigr) = +\infty$$

- Berbee (1987): $g$ non-null and

  $$\sum_{n \ge 1}\; \exp\Bigl(-\sum_{k=1}^{n} \mathrm{var}_k(\log g)\Bigr) = +\infty$$

Criteria: Uniqueness criteria (cont.)

- Stenflo (2003): $g$ non-null and

  $$\sum_{n \ge 1}\; \prod_{k=1}^{n} \Delta_k(g) = +\infty$$

- Johansson and Öberg (2002): $g$ non-null and

  $$\sum_{k \ge 1} \mathrm{var}_k^2(\log g) < +\infty$$

Criteria: Comments

Leaving non-nullness aside, the criteria are not fully comparable. A rough comparison in terms of admissible continuity rates:
- Doeblin-Fortet: $\mathrm{var}_k \sim 1/k^{1+\delta}$
- Harris, Stenflo: $\mathrm{var}_k \sim 1/k$
- Johansson-Öberg: $\mathrm{var}_k \sim 1/k^{1/2+\delta}$
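
Not from the slides: a hedged Python sketch that makes the rough comparison concrete by plugging power-law rates $\mathrm{var}_k = 0.1\,k^{-\alpha}$ into the Doeblin-Fortet, Harris, Berbee and Johansson-Öberg tests (Stenflo's criterion involves $\Delta_k$ rather than $\mathrm{var}_k$ and is left out), ignoring the non-nullness hypotheses and using finite partial sums and products as a crude proxy. The amplitude 0.1, the factor $|E|/2 = 1$ (binary alphabet) and the truncation are assumptions.

import numpy as np

N = 200_000                    # truncation for partial sums/products (assumption)
k = np.arange(1, N + 1)

def report(alpha, amplitude=0.1):
    var = amplitude * k ** (-alpha)       # stands in for var_k(g) and var_k(log g)
    doeblin_fortet = var.sum()                      # needs sum var_k < infinity
    harris = np.cumprod(1.0 - var).sum()            # with |E|/2 = 1; needs divergence
    berbee = np.exp(-np.cumsum(var)).sum()          # needs divergence
    johansson_oberg = (var ** 2).sum()              # needs sum var_k^2 < infinity
    print(f"alpha = {alpha}:  sum var_k = {doeblin_fortet:9.2f}   "
          f"Harris series = {harris:11.1f}   Berbee series = {berbee:11.1f}   "
          f"sum var_k^2 = {johansson_oberg:7.4f}")

for alpha in (1.5, 1.0, 0.6):
    report(alpha)

# Reading (mathematical facts; the finite truncation is only indicative):
#  - alpha = 1.5: sum var_k converges, so Doeblin-Fortet applies (as do the others);
#  - alpha = 1.0: sum var_k diverges (logarithmically, hence slowly), but the
#    Harris/Berbee-type series still diverge, so those criteria still apply;
#  - alpha = 0.6: only square-summability survives (Johansson-Oberg).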

Criteria: A criterion of a different species

Let

  $$\mathrm{osc}_j(g) := \sup_{x = y \text{ off } j}\, \Bigl|\, g\bigl(x_0 \mid x_{-\infty}^{-1}\bigr) - g\bigl(x_0 \mid y_{-\infty}^{-1}\bigr) \Bigr|$$

Then (Fernández-Maillard, 2005) there is a unique consistent chain if

  $$\sum_{j<0} \mathrm{osc}_j(g) < 1$$

- One-sided version of the Dobrushin condition in statistical mechanics
- This criterion is not comparable with the preceding ones
- In particular, no non-nullness requirement!
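
Not on the slides: a Python sketch of the one-sided Dobrushin sum for a toy $g$ with geometric memory. It estimates $\mathrm{osc}_j(g)$ by flipping only the symbol at distance $j$ in a sampled, truncated past (so the suprema are approximated from below) and then sums over $j$; the toy $g$, the truncation depth and the sampling scheme are assumptions.

import numpy as np

rng = np.random.default_rng(3)
A = (0, 1)
DEPTH = 40                      # truncated past x_{-1}, ..., x_{-DEPTH}

def g(x0, past):
    """Toy g with geometric memory, values in (0, 1); past[0] is x_{-1}."""
    w = 2.0 ** -np.arange(1, len(past) + 1)
    p1 = 0.25 + 0.5 * float(np.dot(w, past)) / float(w.sum())
    return p1 if x0 == 1 else 1.0 - p1

def osc(j, samples=400):
    """Estimate osc_j(g): sup over pasts equal off site -j of |g(x0|x) - g(x0|y)|."""
    best = 0.0
    for _ in range(samples):
        x = rng.integers(0, 2, DEPTH)
        y = x.copy()
        y[j - 1] = 1 - y[j - 1]          # flip only x_{-j}
        best = max(best, max(abs(g(a, x) - g(a, y)) for a in A))
    return best

total = sum(osc(j) for j in range(1, DEPTH + 1))
print("estimated sum_j osc_j(g):", total)   # about 0.5 < 1 for this toy g, so the
                                            # one-sided Dobrushin criterion gives uniqueness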

Non-uniqueness: Examples of non-uniqueness

- First example: Bramson and Kalikow (1993), with $\mathrm{var}_k(g) \ge C/\log|k|$
- Berger, Hoffman and Sidoravicius: the Johansson-Öberg criterion is sharp. For every $\epsilon > 0$ there exists $g$ with

  $$\sum_{k<0} \mathrm{var}_k^{2+\epsilon}(g) < \infty \quad\text{and}\quad |\mathcal G(g)| > 1$$

- Hulse (2006): the one-sided Dobrushin criterion is sharp. For every $\epsilon > 0$ there exists $g$ with

  $$\sum_{k<0} \mathrm{osc}_k(g) = 1 + \epsilon \quad\text{and}\quad |\mathcal G(g)| > 1$$

Gibbs measures: Historic highlights

Prehistory:
- Boltzmann, Maxwell (kinetic theory): probability weights
- Gibbs: geometry of phase diagrams
History:
- Dobrushin (1968), Lanford and Ruelle (1969): conditional expectations
- Preston (1973): specifications
- Kozlov (1974), Sullivan (1973): quasilocality and Gibbsianness

Statistical mechanics motivation: Equilibrium

Issue: given the microscopic behavior in finite regions, determine the macroscopic behavior.
Basic tenets:
(i) Equilibrium = probability measure
(ii) Finite regions = finite parts of an infinite system
(iii) Exterior of a finite region = frozen external condition
(iv) Macroscopic behavior = limit of infinite regions

Statistical mechanics motivation: Equilibrium = probability kernels

Set-up: product space $\Omega = A^{L}$; the system in $\Lambda \Subset L$ is described by a probability kernel $\gamma_\Lambda(\,\cdot \mid \cdot\,)$, where $\gamma_\Lambda(f \mid \omega)$ is the equilibrium value of $f$ when the configuration outside $\Lambda$ is $\omega$.
Equilibrium in $\Lambda$ means equilibrium in every $\Lambda' \subset \Lambda$: the equilibrium value of $f$ in $\Lambda$ equals its expectation in $\Lambda'$, with $\Lambda \setminus \Lambda'$ distributed according to the $\Lambda$-equilibrium,

  $$\gamma_\Lambda(f \mid \omega) = \gamma_\Lambda\bigl(\gamma_{\Lambda'}(f \mid \cdot\,) \,\big|\, \omega\bigr) \qquad (\Lambda' \subset \Lambda \Subset L)$$

Statistical mechanics motivation: Specifications

Definition
A specification is a family $\gamma = \{\gamma_\Lambda : \Lambda \Subset L\}$ of probability kernels $\gamma_\Lambda : \mathcal F \times \Omega \to [0,1]$ such that
(i) External dependence: $\gamma_\Lambda(f \mid \cdot\,)$ is $\mathcal F_{\Lambda^c}$-measurable
(ii) Frozen external conditions: each $\gamma_\Lambda$ is proper, i.e. $\gamma_\Lambda(h f \mid \omega) = h(\omega)\,\gamma_\Lambda(f \mid \omega)$ whenever $h$ depends only on $\omega_{\Lambda^c}$
(iii) Equilibrium in finite regions: the family $\gamma$ is consistent, $\gamma_\Delta\,\gamma_\Lambda = \gamma_\Delta$ whenever $\Delta \supset \Lambda$
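
Not part of the talk: a hedged Python sketch of a specification on a finite piece of $\mathbb Z$, namely nearest-neighbour Ising-type kernels $\gamma_\Lambda$ with frozen exterior spins, together with a numerical check of the consistency condition $\gamma_\Delta\,\gamma_\Lambda = \gamma_\Delta$ for $\Lambda \subset \Delta$. Working on six sites, the coupling value, the particular regions and the boundary condition are assumptions made so that the kernels can be enumerated exactly.

import itertools
import numpy as np

SITES = list(range(6))        # finite piece of Z (assumption)
BETA = 0.7                    # inverse temperature (assumption)

def energy(config):
    """Nearest-neighbour Ising energy on SITES with free ends."""
    return -sum(config[i] * config[i + 1] for i in range(len(SITES) - 1))

def gamma(region, omega):
    """Kernel gamma_Lambda(. | omega): law of the spin pattern on `region`,
    with the spins outside `region` frozen to the values of omega."""
    patterns = list(itertools.product((-1, 1), repeat=len(region)))
    weights = []
    for pat in patterns:
        config = list(omega)
        for site, spin in zip(region, pat):
            config[site] = spin
        weights.append(np.exp(-BETA * energy(config)))
    weights = np.array(weights)
    return patterns, weights / weights.sum()

def distribution_on(Delta, omega, kernel_chain):
    """Law on Delta after applying the kernels in kernel_chain, starting from omega."""
    law = {tuple(omega): 1.0}                       # full configuration -> probability
    for region in kernel_chain:
        new_law = {}
        for config, p in law.items():
            patterns, probs = gamma(region, config)
            for pat, q in zip(patterns, probs):
                new_config = list(config)
                for site, spin in zip(region, pat):
                    new_config[site] = spin
                key = tuple(new_config)
                new_law[key] = new_law.get(key, 0.0) + p * q
        law = new_law
    marg = {}                                       # marginal on Delta
    for config, p in law.items():
        key = tuple(config[i] for i in Delta)
        marg[key] = marg.get(key, 0.0) + p
    return marg

omega = [1, -1, 1, 1, -1, 1]          # frozen external condition (assumption)
Delta, Lam = [1, 2, 3, 4], [2, 3]     # Lambda subset of Delta

lhs = distribution_on(Delta, omega, [Delta, Lam])   # gamma_Delta gamma_Lambda
rhs = distribution_on(Delta, omega, [Delta])        # gamma_Delta
err = max(abs(lhs.get(cfg, 0.0) - p) for cfg, p in rhs.items())
print("max |gamma_Delta gamma_Lambda - gamma_Delta| on Delta:", err)
# expected to vanish up to rounding error, since Gibbsian kernels are consistent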

Statistical mechanics motivation: Consistency

Definition
A probability measure $\mu$ on $\Omega$ is consistent with $\gamma$ if $\mu\,\gamma_\Lambda = \mu$ for each $\Lambda \Subset L$ (the DLR equations: equilibrium in infinite regions).
Remarks:
- Several consistent measures = first-order phase transition
- A specification is analogous to a system of regular conditional probabilities
- Difference: there is no a priori measure, so the conditions are required for all $\omega$ rather than almost surely
- Statistical mechanics goes from conditional probabilities to measures

General results (no hypotheses on $\gamma$)

Let
- $\mathcal G(\gamma) = \{\mu \text{ consistent with } \gamma\}$
- $\mathcal F_\infty := \bigcap_{\Lambda \Subset L} \mathcal F_{\Lambda^c}$ ($\sigma$-algebra at infinity)

Theorem
(a) $\mathcal G(\gamma)$ is a convex set
(b) $\mu$ is extreme in $\mathcal G(\gamma)$ iff $\mu$ is trivial on $\mathcal F_\infty$ (that is, $\mu(A) = 0$ or $1$ for every $A \in \mathcal F_\infty$)
(c) $\mu$ is extreme in $\mathcal G(\gamma)$ iff $\displaystyle\lim_{\Lambda \uparrow L}\, \sup_{B \in \mathcal F_{\Lambda^c}} \bigl|\mu(A \cap B) - \mu(A)\,\mu(B)\bigr| = 0$ for every $A \in \mathcal F$
(d) Each $\mu \in \mathcal G(\gamma)$ is determined by its restriction to $\mathcal F_\infty$
(e) If $\mu \ne \nu$ are extreme in $\mathcal G(\gamma)$, then $\mu$ and $\nu$ are mutually singular on $\mathcal F_\infty$
