Universally Typical Sets for Ergodic Sources of Multidimensional Data - PowerPoint PPT Presentation





SLIDE 1

Universally Typical Sets for Ergodic Sources of Multidimensional Data

Tyll Krüger, Guido Montúfar, Ruedi Seiler and Rainer Siegmund-Schultze

http://arxiv.org/abs/1105.0393

SLIDE 2

Universal Lossless Encoding Algorithms

  • data modeled by a stationary/ergodic random process
  • lossless: the algorithm ensures exact reconstruction
  • main idea (Shannon): encode a typical but small set
  • universal: the algorithm does not use specific properties of the random process
  • key step: construction of universally typical sets
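The typical-set idea can be made concrete in the simplest setting. A minimal sketch (a toy illustration, not the construction from the talk) for an i.i.d. Bernoulli(p) source: collect the n-strings whose per-symbol log-probability is within eps of the binary entropy h2(p); indexing this set needs about n·h2(p) bits instead of n. All names below are illustrative.

```python
import math
from itertools import product

def h2(p):
    """Binary entropy of a Bernoulli(p) source, in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def typical_set(n, p, eps):
    """All binary n-strings x with |-(1/n) log2 mu(x) - h2(p)| <= eps."""
    typical = []
    for x in product((0, 1), repeat=n):
        k = sum(x)
        logp = k * math.log2(p) + (n - k) * math.log2(1 - p)
        if abs(-logp / n - h2(p)) <= eps:
            typical.append(x)
    return typical

n, p, eps = 12, 0.2, 0.12
T = typical_set(n, p, eps)
# indexing T needs log2 |T| bits, i.e. fewer than n bits (about n * h2(p))
bits_per_symbol = math.log2(len(T)) / n
```

Atypical strings would be escaped and sent verbatim; since they carry vanishing probability as n grows, the average rate approaches h2(p).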
SLIDE 7

Entropy Typical Set

Sequences (x_1^n) with −(1/n) log µ(x_1^n) ∼ h(µ) have the Asymptotic Equipartition Property:

  • all (x_1^n) have roughly the same probability e^{−n h(µ)}
  • small size e^{n h(µ)}, but still
  • nearly full measure
  • output sequences with higher or smaller probability than e^{−n h(µ)} will rarely be observed
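The ∼ relation above can be checked numerically under an assumed i.i.d. Bernoulli(p) model, where h(µ) is just the binary entropy (in nats here); the parameters are illustrative.

```python
import math
import random

random.seed(0)
p, n = 0.3, 20000
# entropy rate h(mu) of the Bernoulli(p) source, in nats
h = -p * math.log(p) - (1 - p) * math.log(1 - p)

x = [1 if random.random() < p else 0 for _ in range(n)]
k = sum(x)
# -(1/n) log mu(x_1^n) for the product measure mu = Bernoulli(p)^n
empirical = -(k * math.log(p) + (n - k) * math.log(1 - p)) / n
```

For large n the sampled value concentrates around h(µ), which is exactly the AEP statement.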

SLIDE 11

Shannon–McMillan–Breiman

Z-ergodic processes:

  • −(1/n) log µ(x_1^n) → h(µ)
  • in probability (Shannon)
  • pointwise almost surely (McMillan, Breiman)
  • amenable groups, Z^d (Kieffer, Ornstein and Weiss)
SLIDE 14

Notation

d-dimensional:

  • Λ_n := {(i_1, . . . , i_d) ∈ Z_+^d : 0 ≤ i_j ≤ n − 1, j ∈ {1, . . . , d}}
  • Σ_n := A^{Λ_n}, Σ := A^{Z^d}, A a finite alphabet
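For tiny parameters the box Λ_n can be enumerated directly; note |Λ_n| = n^d and hence |Σ_n| = |A|^(n^d). The names below are illustrative.

```python
from itertools import product

def Lambda(n, d):
    """The discrete box Λ_n = {(i_1, ..., i_d) in Z_+^d : 0 <= i_j <= n - 1}."""
    return list(product(range(n), repeat=d))

n, d = 3, 2
A = ('a', 'b')                      # a finite alphabet
box = Lambda(n, d)                  # |Λ_n| = n^d sites
num_configs = len(A) ** len(box)    # |Σ_n| = |A|^(n^d)
```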
SLIDE 16

Result

Theorem (Universally typical sets)

For any given h_0 with 0 < h_0 ≤ log |A| one can construct a sequence of subsets {T_n(h_0) ⊂ Σ_n}_n such that for all µ ∈ P_erg with h(µ) < h_0 the following holds:

  1. lim_{n→∞} µ_n(T_n(h_0)) = 1,
  2. lim_{n→∞} (log |T_n(h_0)|) / n^d = h_0,
  3. the construction is optimal.
SLIDE 19

Remarks about Proof:

  • for each x ∈ Σ consider the empirical measures {µ̃_x^{k,n}}_{k≤n} on A^{Λ_k}
  • T_n(h_0) := Π_n {x ∈ Σ : (1/k^d) H(µ̃_x^{k,n}) ≤ h_0}
  • with k^d ≤ (1/((1 + ε) log |A|)) n^d, ε > 0
  • lim sup_n (log |T_n(h_0)|) / n^d ≤ h_0

Explain the empirical measure µ̃_x^{k,n} by a drawing (blackboard).
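A hedged sketch of the membership test behind T_n(h_0), in d = 2 and with non-overlapping k-blocks as a simplification (the talk's µ̃_x^{k,n} may use a different block convention); `in_T_n` and the sample arrays are illustrative names.

```python
import math
import random
from collections import Counter

def empirical_block_measure(x, k):
    """Empirical distribution of the non-overlapping k x k blocks of the
    n x n array x -- a simplified stand-in for µ̃_x^{k,n}."""
    n = len(x)
    counts = Counter()
    for i in range(0, n - k + 1, k):
        for j in range(0, n - k + 1, k):
            block = tuple(tuple(x[i + a][j + b] for b in range(k))
                          for a in range(k))
            counts[block] += 1
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def in_T_n(x, k, h0):
    """Accept x iff its per-site empirical k-block entropy
    (1/k^2) H(µ̃_x^{k,n}), in bits, is at most h0."""
    mu = empirical_block_measure(x, k)
    H = -sum(q * math.log2(q) for q in mu.values())
    return H / (k * k) <= h0

random.seed(0)
n = 32
flat = [[0] * n for _ in range(n)]                                    # entropy ~ 0
noisy = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]  # ~ 1 bit/site
```

Note the test never looks at µ itself, only at frequencies computed from x; this is what makes the construction universal.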
SLIDE 23

Theorem (Empirical-Entropy Theorem)

Let µ ∈ P_erg. For any sequence {k_n} satisfying k_n → ∞ as n → ∞ and k_n^d = o(n^d) we have

lim_{n→∞} (1/k_n^d) H(µ̃_x^{k_n,n}) = h(µ), µ-almost surely.
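The theorem can be sanity-checked numerically in the simplest case d = 1 with an i.i.d. fair-bit source, where h(µ) = log 2 nats; `block_entropy_rate` is an illustrative estimator using non-overlapping k-blocks, not the paper's exact construction.

```python
import math
import random
from collections import Counter

random.seed(1)
n = 200000
x = [random.randint(0, 1) for _ in range(n)]  # i.i.d. fair bits: h(mu) = log 2

def block_entropy_rate(x, k):
    """(1/k) H(µ̃_x^{k,n}) from non-overlapping k-blocks, in nats."""
    counts = Counter(tuple(x[i:i + k]) for i in range(0, len(x) - k + 1, k))
    total = sum(counts.values())
    H = -sum((c / total) * math.log(c / total) for c in counts.values())
    return H / k

estimates = [block_entropy_rate(x, k) for k in (1, 4, 8)]
```

The condition k_n^d = o(n^d) matters: if k grows too fast relative to n, the roughly n/k observed blocks can no longer populate the |A|^k cells and the estimate degenerates.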

SLIDE 24

Main references

  • Paul C. Shields: The Ergodic Theory of Discrete Sample Paths, Graduate Studies in Mathematics, Vol. 13, AMS.
  • Article to appear in Kybernetika.
SLIDE 25

Background Material

Lemma (Packing Lemma)

Consider, for any fixed 0 < δ ≤ 1, integers k and m related through k ≥ d·m/δ. Let C ⊂ Σ_m and x ∈ Σ with the property that the overlapping-block empirical measure satisfies µ̃_x^{m,k}(C) ≥ 1 − δ. Then there exists p ∈ Λ_m such that:

a) µ̃_x^{p,m,k}(C) ≥ 1 − 2δ, and also
b) (k/m)^d µ̃_x^{p,m,k}(C) ≥ (1 − 4δ)(k/m + 2)^d.
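Part a) can be illustrated in d = 1: every overlapping m-window belongs to exactly one of the m offset classes, so the best offset achieves at least the windows' average hit rate. The pattern, the corruption rate, and the set C below are all made up for the demo.

```python
import random

def overlapping_freq(x, C, m):
    """Fraction of all overlapping m-windows of x that lie in C."""
    wins = [tuple(x[i:i + m]) for i in range(len(x) - m + 1)]
    return sum(w in C for w in wins) / len(wins)

def best_offset_freq(x, C, m):
    """Best over offsets p of the fraction of non-overlapping m-blocks in C."""
    best = 0.0
    for p in range(m):
        blocks = [tuple(x[i:i + m]) for i in range(p, len(x) - m + 1, m)]
        if blocks:
            best = max(best, sum(b in C for b in blocks) / len(blocks))
    return best

random.seed(2)
m = 4
# toy x: mostly a periodic 'typical' pattern, occasionally corrupted
x = []
for _ in range(500):
    x += [1, 0, 1, 0] if random.random() < 0.9 else [1, 1, 1, 1]
C = {(1, 0, 1, 0), (0, 1, 0, 1)}  # the pattern and its shift

delta = 1 - overlapping_freq(x, C, m)
best = best_offset_freq(x, C, m)   # lemma a): some p reaches >= 1 - 2*delta
```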
SLIDE 26

Theorem

Given any µ ∈ P_erg and any α ∈ (0, 1/2) we have the following:

  • For all k larger than some k_0 = k_0(α) there is a set T_k(α) ⊂ Σ_k satisfying (log |T_k(α)|) / k^d ≤ h(µ) + α, and such that for µ-a.e. x the following holds: µ̃_x^{k,n}(T_k(α)) > 1 − α, for all n and k such that k/n < ε for some ε = ε(α) > 0 and n larger than some n_0(x).
  • (optimality)
SLIDE 27

Definition (Entropy-typical sets)

Let δ < 1/2. For some µ with entropy rate h(µ) the entropy-typical sets are defined as:

C_m(δ) := { x ∈ Σ_m : 2^{−m^d (h(µ)+δ)} ≤ µ_m({x}) ≤ 2^{−m^d (h(µ)−δ)} }.  (1)

We will use these sets as basic sets for the typical-sampling sets defined below.
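For d = 1 (so m^d = m) and an assumed Bernoulli(p) source, C_m(δ) can be enumerated directly from definition (1); the parameter values are illustrative.

```python
import math
from itertools import product

p, m, delta = 0.3, 10, 0.15
h = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # entropy rate, bits

def mu(x):
    """Probability of the string x under the Bernoulli(p) product measure."""
    k = sum(x)
    return p ** k * (1 - p) ** (m - k)

# C_m(delta): strings with 2^{-m(h+delta)} <= mu(x) <= 2^{-m(h-delta)}
C = [x for x in product((0, 1), repeat=m)
     if 2 ** (-m * (h + delta)) <= mu(x) <= 2 ** (-m * (h - delta))]
measure = sum(mu(x) for x in C)
```

Already at m = 10 the set carries most of the measure while containing only a small fraction of the 2^m strings; both tendencies sharpen as m grows.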

SLIDE 28

Definition (Typical-sampling sets)

Consider some µ and some δ < 1/2. For k ≥ m, we define a typical-sampling set T_k(δ, m) as the set of elements in Σ_k that have a regular m-block partition such that the words belonging to the µ-entropy-typical set C_m = C_m(δ) contribute at least a (1 − δ)-fraction of the (slightly modified) number of partition elements; more precisely, they occupy at least a (1 − δ)-fraction of all sites in Λ_k:

T_k(δ, m) := { x ∈ Σ_k : Σ_{r ∈ m·Z^d : (Λ_m + r + p) ∩ Λ_k ≠ ∅} 1[C_m](σ_{r+p} x) ≥ (1 − δ) (k/m)^d for some p }
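A toy d = 1 membership test for T_k(δ, m); the helper name and the use of ⌊k/m⌋ as the block count are simplifications of the slide's site-counting condition, and C is passed in explicitly rather than built from µ.

```python
def in_typical_sampling_set(x, C, m, delta):
    """Does some offset p give a regular m-block partition of x in which
    at least a (1 - delta)-fraction of the ~k/m blocks lie in C?"""
    k = len(x)
    for p in range(m):
        blocks = [tuple(x[i:i + m]) for i in range(p, k - m + 1, m)]
        if blocks and sum(b in C for b in blocks) >= (1 - delta) * (k // m):
            return True
    return False

C = {(1, 0, 1, 0)}
periodic = [1, 0] * 20   # every aligned 4-block is in C
constant = [1] * 40      # no 4-block is in C at any offset
```

Quantifying over the offset p is what makes the definition shift-friendly: a sequence is accepted as soon as one alignment of the m-grid captures enough typical words.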