MLSS 06 - Canberra
Sparse Grids
Jochen Garcke


  1. Sparse Grids

     Jochen Garcke
     Centre for Mathematics and its Applications
     Mathematical Sciences Institute
     Australian National University
     14th February 2006

  2. Outline

     ◮ (Very) Short Course on Finite Elements
     ◮ Hierarchical Basis
     ◮ Sparse Grids
     ◮ Combination Technique
     ◮ Regression / Classification via Function Reconstruction
     ◮ Opticom
     ◮ Semi-supervised Learning
     ◮ Outlook: Dimension Adaptive Combination Technique
     ◮ Sparse Grids in Reinforcement Learning ?

  3. Partial Differential Equations

     ◮ Poisson equation (model problem): electric potential u for a given
       charge f,

           -\triangle u \; (= -\nabla^2 u) = f

     ◮ Navier-Stokes equations: describe the motion of fluid substances like
       liquids and gases,

           \frac{\partial u}{\partial t} + u \cdot \nabla u - \frac{1}{Re} \triangle u + \nabla p = f,
           \qquad \nabla \cdot u = 0

     ◮ Schrödinger equation (quantum chemistry): eigenvalue problem Hψ = λψ
       with

           H = -\sum_{i}^{N} \frac{\hbar^2}{2m} \Delta_i
               - \sum_{\alpha}^{K} \frac{\hbar^2}{2M_\alpha} \Delta_\alpha
               - \sum_{i,\alpha}^{N,K} \frac{e^2 Z_\alpha}{r_{\alpha i}}
               + \sum_{i<j}^{N} \frac{e^2}{r_{ij}}
               + \sum_{\alpha<\beta}^{K} \frac{Z_\alpha Z_\beta}{R_{\alpha\beta}}

  4. Galerkin / Variational Principle

     ◮ minimise J(v) = ½ a(v,v) − ⟨f,v⟩ over v ∈ V
     ◮ simplified, think of it as:

           Lu = f \quad \rightarrow \quad a(u,v) = \int Lu \cdot v = \langle f, v \rangle

     ◮ for the model problem −△u this gives a(u,v) = ∫ ∇u ∇v
     ◮ the minimum u of J is equivalent to finding the u ∈ V which satisfies

           a(u,v) = \langle f, v \rangle \quad \forall v \in V

     ◮ Lax-Milgram lemma: if V is a Hilbert space, f is bounded, and a is
       bounded (|a(u,v)| ≤ C ‖u‖_V ‖v‖_V) and V-elliptic
       (C_E ‖u‖²_V ≤ a(u,u) ∀u), then there exists a unique solution u
     ◮ u is the weak solution of the original partial differential equation

  5. Discretisation

     ◮ discretise: V_N ⊂ V, with V_N finite-dimensional
     ◮ find u_N ∈ V_N which satisfies

           a(u_N, v_N) = f(v_N) \quad \forall v_N \in V_N

     ◮ Céa lemma: if a is V-elliptic and u, u_N are the solutions in V, V_N
       respectively, then

           \| u - u_N \|_V \leq C \inf_{v_N \in V_N} \| u - v_N \|_V

  6. Example for V_N in One Dimension

     ◮ one-dimensional basis for level 3
       [figure: the seven hat functions φ_{3,1}, φ_{3,2}, …, φ_{3,7}]
     ◮ interpolation of a parabola
       [figure]

  7. One-dimensional Basis Functions

     ◮ one-dimensional basis functions φ_{l,j}(x) with support
       [x_{l,j} − h_l, x_{l,j} + h_l] ∩ [0,1] = [(j−1)h_l, (j+1)h_l] ∩ [0,1]
       are defined by

           \phi_{l,j}(x) = \begin{cases}
               1 - |x/h_l - j|, & x \in [(j-1)h_l, (j+1)h_l] \cap [0,1]; \\
               0, & \text{otherwise}.
           \end{cases}
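This definition is easy to translate into code. A minimal sketch in Python (the function and variable names are mine, not from the slides), which also reproduces the nodal interpolation of the parabola shown on the previous slide:

```python
import numpy as np

def phi(l, j, x):
    """Hat basis function phi_{l,j} with mesh size h_l = 2**-l.

    Equals 1 at its grid point x_{l,j} = j * h_l, falls linearly to 0 at
    the neighbouring grid points, and vanishes outside its support.
    """
    h = 2.0 ** (-l)
    return np.maximum(0.0, 1.0 - np.abs(x / h - j))

# phi_{l,j} is 1 at x_{l,j} and 0 at all other level-l grid points, so the
# nodal interpolant of f just uses the function values as coefficients.
l = 3
nodes = np.arange(2**l + 1) * 2.0 ** (-l)
f = lambda x: x * (1.0 - x)                  # the parabola from slide 6
x = np.linspace(0.0, 1.0, 1001)
interp = sum(f(xj) * phi(l, j, x) for j, xj in enumerate(nodes))
```

Because each basis function is 1 at its own grid point and 0 at every other grid point of the same level, no linear system has to be solved for interpolation.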

  8. Basis Functions in More Dimensions

     ◮ d-dimensional piecewise d-linear hat functions

           \phi_{l,j}(x) := \prod_{t=1}^{d} \phi_{l_t, j_t}(x_t)

     ◮ associated function space V_l of piecewise d-linear functions

           V_l := \operatorname{span} \{ \phi_{l,j} \mid j_t = 0, \ldots, 2^{l_t}, \; t = 1, \ldots, d \}
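The tensor-product construction can be sketched directly (assuming the 1D hat function of the previous slide; all names are mine):

```python
import numpy as np

def phi_1d(l, j, x):
    # one-dimensional hat function with mesh size h_l = 2**-l (slide 7)
    return np.maximum(0.0, 1.0 - np.abs(x * 2.0**l - j))

def phi_dlin(l, j, x):
    """d-linear hat function: the product of one 1D hat per dimension.

    l, j, x are length-d sequences: level multi-index, position
    multi-index, and evaluation point.
    """
    result = 1.0
    for l_t, j_t, x_t in zip(l, j, x):
        result *= phi_1d(l_t, j_t, x_t)
    return result

# bilinear hat on the anisotropic grid with levels (2, 1), centred at the
# grid point (1/4, 1/2); it is 1 there and 0 at all other grid points
value_at_centre = phi_dlin([2, 1], [1, 1], [0.25, 0.5])
```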

  9. Some Notation

     ◮ simplification: domain Ω̄ := [0,1]^d
     ◮ l = (l_1, …, l_d) ∈ ℕ^d denotes a multi-index
     ◮ define the mesh size h_l := (2^{−l_1}, …, 2^{−l_d})
     ◮ anisotropic grid Ω_l on Ω̄
       ◮ different, but equidistant, mesh sizes in each coordinate direction
     ◮ Ω_l consists of the points

           x_{l,j} := (x_{l_1, j_1}, \ldots, x_{l_d, j_d}),

       with x_{l_t,j_t} := j_t · h_{l_t} = j_t · 2^{−l_t} and
       j_t = 0, …, 2^{l_t}
       [figure: the grid Ω_{3,1}]
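A sketch of generating the anisotropic grid Ω_l as a list of points (illustrative code, not from the slides):

```python
import itertools
import numpy as np

def grid_points(l):
    """All points x_{l,j} of the anisotropic grid Omega_l on [0,1]^d.

    In direction t the coordinates are j_t * 2**-l_t
    for j_t = 0, ..., 2**l_t.
    """
    axes = [np.arange(2**l_t + 1) * 2.0 ** (-l_t) for l_t in l]
    return list(itertools.product(*axes))

# Omega_{3,1} from the figure: mesh size 1/8 in x (9 points)
# and 1/2 in y (3 points), 27 points in total
pts = grid_points([3, 1])
```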

  10. Triangulation Instead of Tensor Product

      [figures only]

  11. Approximation Properties

      ◮ D^\alpha u = \frac{\partial^{|\alpha|} u}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}
      ◮ Sobolev spaces H^s with norm

            \| u \|_{H^s}^2 = \sum_{|\alpha| \le s} \int (D^\alpha u)^2

      ◮ for a V-elliptic, V_N piecewise (bi)linear, and u ∈ H²:

            \| u - u_N \|_{H^1} \le C h \, | u |_{H^2}

      ◮ error in L²:

            \| u - u_N \|_{L^2} \le C h^2 \, | u |_{H^2}

      ◮ the above results are stated in two dimensions; similar results hold
        in higher dimensions
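A small numerical illustration of the O(h²) L² rate (my own example, not from the slides): it measures the error of the piecewise linear *interpolant* in 1D, which for smooth u converges at the same second-order rate, so halving h should shrink the error by roughly a factor of four.

```python
import numpy as np

def l2_interp_error(lvl, f, samples=4097):
    """Discrete L2 error of the piecewise linear interpolant on level lvl."""
    x = np.linspace(0.0, 1.0, samples)
    nodes = np.arange(2**lvl + 1) * 2.0 ** (-lvl)
    u_N = np.interp(x, nodes, f(nodes))      # piecewise linear interpolant
    return np.sqrt(np.mean((f(x) - u_N) ** 2))

f = lambda x: np.sin(np.pi * x)
errors = [l2_interp_error(lvl, f) for lvl in range(3, 7)]
# successive error ratios should approach 2**2 = 4
rates = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```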

  12. Interpolation with Hierarchical Basis

      nodal basis of V_3 (with V_1 ⊂ V_2 ⊂ V_3):
      [figure: φ_{3,1}, φ_{3,2}, φ_{3,3}, φ_{3,4}, φ_{3,5}, φ_{3,6}, φ_{3,7}]

      hierarchical basis of V_3 = W_3 ⊕ W_2 ⊕ V_1:
      [figure: φ_{1,1}; φ_{2,1}, φ_{2,3}; φ_{3,1}, φ_{3,3}, φ_{3,5}, φ_{3,7}]

  13. Hierarchical Difference Spaces

      ◮ l ∈ ℕ^d denotes the level, i.e. the discretisation resolution, of a
        grid Ω_l, a space V_l, or a function f_l
      ◮ j ∈ ℕ^d gives the position of a grid point x_{l,j} or the
        corresponding basis function φ_{l,j}(·)
      ◮ hierarchical difference space W_l via

            W_l := V_l \setminus \bigoplus_{t=1}^{d} V_{l - e_t}, \qquad (1)

        where e_t is the t-th unit vector
      ◮ in other words, W_l consists of all φ_{k,j} ∈ V_l which are not
        included in any of the spaces V_k smaller than V_l
      ◮ to complete the definition, we formally set V_l := 0 if l_t = −1 for
        at least one t ∈ {1, …, d}
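In one dimension the hierarchical coefficients (surpluses) of an interpolant can be computed level by level. A sketch under the simplifying assumption f(0) = f(1) = 0, so only the odd-indexed interior points new on each level carry a basis function, as in the pictures on slide 12 (names are mine):

```python
import numpy as np

def hat(l, j, x):
    # hierarchical hat function phi_{l,j} with mesh size h_l = 2**-l
    return np.maximum(0.0, 1.0 - np.abs(x * 2.0**l - j))

def surpluses(f, L):
    """Hierarchical coefficients of the level-L interpolant of f on [0,1].

    The surplus at x_{l,j} is f's value there minus the linear
    interpolation between the two neighbouring grid points, which belong
    to coarser levels.
    """
    coeffs = {}
    for l in range(1, L + 1):
        h = 2.0 ** (-l)
        for j in range(1, 2**l, 2):          # odd j: points new on level l
            x = j * h
            coeffs[(l, j)] = f(x) - 0.5 * (f(x - h) + f(x + h))
    return coeffs

f = lambda x: x * (1.0 - x)                  # the parabola from slide 6
c = surpluses(f, 3)
x = np.linspace(0.0, 1.0, 257)
u = sum(v * hat(l, j, x) for (l, j), v in c.items())  # hierarchical interpolant
```

Summing surpluses times hierarchical hats reproduces the nodal interpolant of slide 6 exactly at the grid points; for the parabola, every surplus on level l equals h_l².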
