Sampling discretization of integral norms. Lecture 3
Vladimir Temlyakov
Chemnitz; September 2019

Sampling discretization with absolute error

Let $W \subset L_q(\Omega, \mu)$, $1 \le q < \infty$, be a class of functions continuous on $\Omega$. We are interested in estimating the following optimal errors of discretization of the $L_q$ norm of functions from $W$:
$$
er_m(W, L_q) := \inf_{\xi^1, \dots, \xi^m} \sup_{f \in W} \Bigl| \|f\|_q^q - \frac{1}{m} \sum_{j=1}^{m} |f(\xi^j)|^q \Bigr|,
$$
$$
er_m^{o}(W, L_q) := \inf_{\xi^1, \dots, \xi^m;\, \lambda_1, \dots, \lambda_m} \sup_{f \in W} \Bigl| \|f\|_q^q - \sum_{j=1}^{m} \lambda_j |f(\xi^j)|^q \Bigr|.
$$
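To make the quantity inside the absolute value concrete, here is a minimal numerical sketch (an illustration added here, not taken from the lecture): for a single real trigonometric polynomial $f$ on $[0, 2\pi]$ with the normalized Lebesgue measure, one fixed random point set $\xi^1, \dots, \xi^m$, and $q = 2$, it compares the exact norm $\|f\|_2^2$ (via Parseval) with the discrete average $\frac{1}{m}\sum_j |f(\xi^j)|^2$. The particular polynomial, the random points, and all parameters are illustrative assumptions.

```python
import numpy as np

# A minimal numerical sketch (illustration only): evaluate the quantity inside
# the absolute value above for one fixed f, one fixed point set, and q = 2.
rng = np.random.default_rng(0)

K = 8                                   # highest frequency of the test polynomial (assumed)
a = rng.standard_normal(K + 1)          # cosine coefficients a_0, ..., a_K
b = rng.standard_normal(K)              # sine coefficients b_1, ..., b_K
k = np.arange(1, K + 1)

def f(x):
    # real trigonometric polynomial f(x) = a_0 + sum_k (a_k cos kx + b_k sin kx)
    return a[0] + np.cos(np.outer(x, k)) @ a[1:] + np.sin(np.outer(x, k)) @ b

# Exact squared L_2 norm w.r.t. the normalized measure dx/(2*pi), via Parseval.
exact_norm_sq = a[0] ** 2 + 0.5 * (np.sum(a[1:] ** 2) + np.sum(b ** 2))

m = 200
xi = rng.uniform(0.0, 2.0 * np.pi, size=m)   # one particular (random) point set
discrete_avg = np.mean(f(xi) ** 2)           # (1/m) * sum_j |f(xi^j)|^2

print("discretization error:", abs(exact_norm_sq - discrete_avg))
```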

General theorem

Theorem (T1; VT, 2018). Assume that a class of real functions $W$ is such that for all $f \in W$ we have $\|f\|_\infty \le M$ with some constant $M$. Also assume that the entropy numbers of $W$ in the uniform norm $L_\infty$ satisfy the condition
$$
\varepsilon_n(W, L_\infty) \le C n^{-r}, \qquad r \in (0, 1/2).
$$
Then
$$
er_m(W) := er_m(W, L_2) \le K m^{-r}.
$$

Comments

Theorem T1 is a rather general theorem, which connects the behavior of the absolute errors of discretization with the rate of decay of the entropy numbers. The theorem is derived from known results in supervised learning theory. It is well understood in learning theory that the entropy numbers of the class of priors (regression functions) are the right characteristic for studying the regression problem.

We impose the restriction $r < 1/2$ in Theorem T1 because the probabilistic technique from supervised learning theory has a natural limitation to $r \le 1/2$. It would be interesting to understand whether Theorem T1 holds for $r \ge 1/2$. It would also be interesting to obtain an analog of Theorem T1 for discretization in the $L_q$ norm, $1 \le q < \infty$.

Smoothness classes

For classes of smooth functions we obtained error bounds without any restriction on the smoothness $r$. We proved the following bounds for the class $W^r_2$ of functions of $d$ variables with mixed derivative bounded in $L_2$.

Theorem (T2; VT, 2018). Let $r > 1/2$ and let $\mu$ be the Lebesgue measure on $[0, 2\pi]^d$. Then
$$
er_m^{o}(W^r_2, L_2) \asymp m^{-r} (\log m)^{(d-1)/2}.
$$

Marcinkiewicz problem

Let $\Omega$ be a compact subset of $\mathbb{R}^d$ with a probability measure $\mu$. We say that a linear subspace $X_N$ of $L_q(\Omega)$, $1 \le q < \infty$, admits the Marcinkiewicz-type discretization theorem with parameters $m$ and $q$ if there exist a set $\{\xi^\nu \in \Omega : \nu = 1, \dots, m\}$ and two positive constants $C_j(d, q)$, $j = 1, 2$, such that for any $f \in X_N$ we have
$$
C_1(d, q) \|f\|_q^q \le \frac{1}{m} \sum_{\nu=1}^{m} |f(\xi^\nu)|^q \le C_2(d, q) \|f\|_q^q. \qquad (1)
$$
In the case $q = \infty$ we define $L_\infty$ as the space of functions continuous on $\Omega$ and ask for
$$
C_1(d) \|f\|_\infty \le \max_{1 \le \nu \le m} |f(\xi^\nu)| \le \|f\|_\infty. \qquad (2)
$$
We will also use a brief way to express the above property: the $M(m, q)$ theorem holds for a subspace $X_N$, or $X_N \in \mathcal{M}(m, q)$.
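For orientation, the classical univariate example behind (1) can be checked numerically: for the subspace $\mathcal{T}_n$ of trigonometric polynomials of degree at most $n$ on $[0, 2\pi]$ with the normalized Lebesgue measure and $q = 2$, the $m = 2n + 1$ equispaced nodes $\xi^\nu = 2\pi\nu/m$ give (1) with $C_1 = C_2 = 1$ (discrete orthogonality). The sketch below is an added illustration; the degree, the random test polynomials, and the loop are assumptions, not part of the slides.

```python
import numpy as np

# Numerical check of the classical exact case of (1): T_n, q = 2, m = 2n + 1
# equispaced nodes, equal weights 1/m (parameters below are assumed for the demo).
rng = np.random.default_rng(1)

n = 5                                   # degree of the trigonometric polynomials
m = 2 * n + 1                           # number of equispaced nodes
xi = 2.0 * np.pi * np.arange(m) / m     # nodes xi^nu = 2*pi*nu/m
k = np.arange(1, n + 1)

for _ in range(3):                      # a few random elements of T_n
    a = rng.standard_normal(n + 1)      # cosine coefficients
    b = rng.standard_normal(n)          # sine coefficients
    fx = a[0] + np.cos(np.outer(xi, k)) @ a[1:] + np.sin(np.outer(xi, k)) @ b

    norm_sq = a[0] ** 2 + 0.5 * (np.sum(a[1:] ** 2) + np.sum(b ** 2))  # Parseval
    discrete = np.mean(fx ** 2)         # (1/m) * sum_nu |f(xi^nu)|^2

    print(norm_sq, discrete)            # the two values agree up to rounding error
```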

Marcinkiewicz problem with weights

We say that a linear subspace $X_N$ of $L_q(\Omega)$, $1 \le q < \infty$, admits the weighted Marcinkiewicz-type discretization theorem with parameters $m$ and $q$ if there exist a set of knots $\{\xi^\nu \in \Omega\}$, a set of weights $\{\lambda_\nu\}$, $\nu = 1, \dots, m$, and two positive constants $C_j(d, q)$, $j = 1, 2$, such that for any $f \in X_N$ we have
$$
C_1(d, q) \|f\|_q^q \le \sum_{\nu=1}^{m} \lambda_\nu |f(\xi^\nu)|^q \le C_2(d, q) \|f\|_q^q. \qquad (3)
$$
Then we also say that the $M^w(m, q)$ theorem holds for a subspace $X_N$, or $X_N \in \mathcal{M}^w(m, q)$. Obviously, $X_N \in \mathcal{M}(m, q)$ implies $X_N \in \mathcal{M}^w(m, q)$.
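As an illustration of how nontrivial weights can arise (a sketch of my own, not a construction from the lecture): for $X_N = \mathcal{T}_n$ and $q = 2$, exactness in (3) with $C_1 = C_2 = 1$ amounts to a linear system in the weights $\lambda_\nu$, which one can try to solve for a given scattered knot set. The random knots, the parameters, and the least-squares approach are assumptions for illustration, and the resulting weights need not be nonnegative.

```python
import numpy as np

# Sketch: for X_N = T_n, q = 2, and a given scattered knot set, look for weights
# lambda_nu that make (3) exact (C_1 = C_2 = 1). All parameters are assumptions.
rng = np.random.default_rng(3)

n = 3                                   # degree; dimension N = 2n + 1
N = 2 * n + 1
m = 60                                  # number of scattered knots
xi = rng.uniform(0.0, 2.0 * np.pi, size=m)

# Orthonormal trig system w.r.t. dx/(2*pi), evaluated at the knots.
k = np.arange(1, n + 1)
U = np.hstack([np.ones((m, 1)),
               np.sqrt(2.0) * np.cos(np.outer(xi, k)),
               np.sqrt(2.0) * np.sin(np.outer(xi, k))])

# Exactness in (3) means sum_nu lambda_nu u_i(xi^nu) u_j(xi^nu) = delta_ij for all i, j:
# a linear system in the weights, solved here in the least-squares sense.
rows, rhs = [], []
for i in range(N):
    for j in range(i, N):
        rows.append(U[:, i] * U[:, j])
        rhs.append(1.0 if i == j else 0.0)
lam, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

# If the weights are exact, this matrix is the identity (weights may be negative).
G_w = U.T @ (lam[:, None] * U)
print("max deviation from identity:", np.max(np.abs(G_w - np.eye(N))))
```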

Marcinkiewicz problem with ε

We write $X_N \in \mathcal{M}(m, q, \varepsilon)$ if (1) holds with $C_1(d, q) = 1 - \varepsilon$ and $C_2(d, q) = 1 + \varepsilon$. Respectively, we write $X_N \in \mathcal{M}^w(m, q, \varepsilon)$ if (3) holds with $C_1(d, q) = 1 - \varepsilon$ and $C_2(d, q) = 1 + \varepsilon$.
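For $q = 2$ the smallest $\varepsilon$ for which a given point set realizes $X_N \in \mathcal{M}(m, 2, \varepsilon)$ can be read off the extreme eigenvalues of $G = \frac{1}{m} U^{T} U$, where $U_{\nu j} = u_j(\xi^\nu)$ for an orthonormal basis $u_1, \dots, u_N$ of $X_N$: indeed $\varepsilon = \max(1 - \lambda_{\min}(G),\ \lambda_{\max}(G) - 1)$. The sketch below is an added illustration with assumed parameters (trigonometric subspace, random points), not a computation from the lecture.

```python
import numpy as np

# Sketch: smallest eps with X_N in M(m, 2, eps) for a given point set, computed
# from the extreme eigenvalues of G = (1/m) U^T U (illustrative parameters).
rng = np.random.default_rng(2)

n = 5                                        # X_N = T_n, dimension N = 2n + 1
m = 200                                      # number of sample points
xi = rng.uniform(0.0, 2.0 * np.pi, size=m)   # the candidate point set (here: random)

# Orthonormal (w.r.t. dx/(2*pi)) trig system at the points: columns are
# 1, sqrt(2) cos(kx), sqrt(2) sin(kx), k = 1, ..., n.
k = np.arange(1, n + 1)
U = np.hstack([np.ones((m, 1)),
               np.sqrt(2.0) * np.cos(np.outer(xi, k)),
               np.sqrt(2.0) * np.sin(np.outer(xi, k))])

# For f = sum_j c_j u_j one has (1/m) sum_nu |f(xi^nu)|^2 = c^T G c and ||f||_2^2 = ||c||^2.
G = (U.T @ U) / m
eigvals = np.linalg.eigvalsh(G)
eps = max(1.0 - eigvals.min(), eigvals.max() - 1.0)
print("smallest eps for this point set:", eps)
```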
