


  1. Recent advances in compressed sensing techniques for the numerical approximation of PDEs. Simone Brugiapaglia, Simon Fraser University, Canada, simone_brugiapaglia@sfu.ca. Joint work with Ben Adcock (SFU), Stefano Micheletti (MOX), Fabio Nobile (EPFL), Simona Perotto (MOX), Clayton G. Webster (ORNL). QUIET 2017, SISSA, Trieste, Italy – July 20, 2017.

  2. Compressed sensing · CS for (parametric) PDEs · Inside the black box · Outside the black box · Conclusions

  3. Compressed Sensing (CS)
Pioneering papers: [Donoho, 2006; Candès, Romberg & Tao, 2006]. Main ingredients:
◮ sparsity / compressibility;
◮ random measurements (sensing);
◮ sparse recovery.
Sparsity: a signal f ∈ C^N is s-sparse w.r.t. a basis Ψ if f = Ψx with x ∈ Σ_s^N := {z ∈ C^N : ‖z‖_0 ≤ s}, where ‖x‖_0 := #{i : x_i ≠ 0} and s ≪ N.
Compressibility: fast decay of the best s-term approximation error σ_s(x)_p := inf_{z ∈ Σ_s^N} ‖x − z‖_p ≤ C s^{−α}, for some C, α > 0.
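To make the compressibility notion concrete, here is a minimal Python sketch (not part of the original slides) that computes σ_s(x)_p by keeping the s largest-magnitude entries of x; the helper name, the decay exponent, and the vector length are illustrative choices.

    import numpy as np

    def best_s_term_error(x, s, p=1):
        # sigma_s(x)_p: the infimum over s-sparse z is attained by keeping
        # the s largest-magnitude entries of x and measuring the tail in l^p
        idx = np.argsort(np.abs(x))[::-1]
        return np.linalg.norm(x[idx[s:]], p)

    # a compressible vector: entries decaying like i^(-2)
    x = 1.0 / np.arange(1, 101) ** 2
    print([best_s_term_error(x, s) for s in (5, 10, 20)])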

  4. Sensing
In order to acquire f, we perform m ∼ s · polylog(N) linear, nonadaptive, random measurements
y_i := ⟨f, ϕ_i⟩, for i = 1, ..., m.
If we consider the matrix Φ = [ϕ_1 | ··· | ϕ_m] ∈ C^{N×m}, we have
Ax = y, where A = Φ*Ψ ∈ C^{m×N} and y ∈ C^m.
This system is highly underdetermined.
[Figure: y = Φ*Ψx, with Φ* the sensing matrix, y the measurements vector, Ψ the sparsity basis, and x the unknown sparse signal.]
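A minimal numerical illustration of the sensing step (the choices Ψ = identity, Gaussian measurement vectors, and the constant in front of s·log N are assumptions for illustration, not prescribed by the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    N, s = 1000, 10
    m = 4 * s * int(np.log(N))                     # m ~ s * polylog(N), constant chosen arbitrarily

    # s-sparse coefficient vector x (sparsity basis Psi = identity for simplicity)
    x = np.zeros(N)
    support = rng.choice(N, size=s, replace=False)
    x[support] = rng.standard_normal(s)

    A = rng.standard_normal((m, N)) / np.sqrt(m)   # random sensing matrix A = Phi* Psi
    y = A @ x                                      # m nonadaptive linear measurements
    print(A.shape, "underdetermined:", m < N)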

  5. Sparse recovery
Thanks to the sparsity / compressibility of f, we can resort to sparse recovery techniques. We aim at approximating the solution to
(P_0)   min_{z ∈ C^N} ‖z‖_0   s.t. Az = y.
In general, (P_0) is an NP-hard problem... but there are computationally tractable strategies to approximate it! In particular, it is possible to employ
◮ greedy strategies, e.g. Orthogonal Matching Pursuit (OMP);
◮ convex relaxation, e.g. the quadratically-constrained basis pursuit (QCBP) program
min_{z ∈ C^N} ‖z‖_1   s.t. ‖Az − y‖_2 ≤ η,
referred to as basis pursuit (BP) when η = 0.
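A compact OMP sketch (assuming the standard greedy variant: one index per iteration, full least-squares refit on the current support; the function name is hypothetical). Applied to the (A, y, s) of the previous sketch, omp(A, y, s) recovers x up to rounding.

    import numpy as np

    def omp(A, y, s):
        # Orthogonal Matching Pursuit: greedily add the column most correlated
        # with the residual, then refit by least squares on the chosen support
        residual, support = y.copy(), []
        coef = np.zeros(0)
        for _ in range(s):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat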

  6. Restricted isometry property
Many important recovery results in CS are based on the Restricted Isometry Property (RIP).
Definition (RIP). A matrix A ∈ C^{m×N} satisfies the RIP(s, δ) with δ ∈ [0, 1) if
(1 − δ)‖z‖_2^2 ≤ ‖Az‖_2^2 ≤ (1 + δ)‖z‖_2^2   ∀ z ∈ Σ_s^N.
The RIP implies recovery results for:
◮ OMP [Zhang, 2011; Cohen, Dahmen, DeVore, 2015];
◮ QCBP [Candès, Romberg, Tao, 2006; Foucart, Rauhut, 2013].
Optimal recovery error estimates (without noise) for a decoder Δ look like [Cohen, Dahmen, DeVore, 2009]
‖x − Δ(Ax)‖_2 ≲ σ_s(x)_1 / √s   ∀ x ∈ C^N,
and hold with high probability.
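On toy sizes the RIP constant can be checked by brute force (a sketch only; this enumeration is exponential in s and is not how RIP constants are handled in practice): δ is the largest spectral deviation of A_S*A_S from the identity over all supports S of size s.

    import itertools
    import numpy as np

    def rip_constant(A, s):
        # smallest delta with (1 - delta)||z||^2 <= ||Az||^2 <= (1 + delta)||z||^2
        # for all s-sparse z, i.e. max over supports S of ||A_S* A_S - I||_2
        N = A.shape[1]
        delta = 0.0
        for S in itertools.combinations(range(N), s):
            G = A[:, S].conj().T @ A[:, S] - np.eye(s)
            delta = max(delta, np.linalg.norm(G, 2))
        return delta

    A = np.random.randn(20, 12) / np.sqrt(20)   # small Gaussian matrix, E[A* A] = I
    print(rip_constant(A, 3))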

  7. Compressed sensing · CS for (parametric) PDEs · Inside the black box · Outside the black box · Conclusions

  8. CS as a tool to solve PDEs
Parametric PDE setting:
◮ z ∈ D ⊆ R^d: parametric domain, d ≫ 1;
◮ L_z u_z = g: PDE;
◮ z ↦ u_z: solution map (the "black box");
◮ u_z ↦ Q(u_z): quantity of interest.
Can we take advantage of the CS paradigm in this setting?

  9. CS as a tool to solve PDEs
Parametric PDE setting:
◮ z ∈ D ⊆ R^d: parametric domain, d ≫ 1;
◮ L_z u_z = g: PDE;
◮ z ↦ u_z: solution map (the "black box");
◮ u_z ↦ Q(u_z): quantity of interest.
Can we take advantage of the CS paradigm in this setting? YES! At least in two ways, addressed in this talk:
1. inside the black box, to approximate z ↦ u_z;
2. outside the black box, to approximate z ↦ f(z) = Q(u_z).

  10. Compressed sensing · CS for (parametric) PDEs · Inside the black box · Outside the black box · Conclusions

  11. CS inside the black box
Consider the weak formulation of a PDE,
find u ∈ U : a(u, v) = F(v)   ∀ v ∈ V,
and its Petrov-Galerkin (PG) discretization [Aziz, Babuška, 1972].
Motivation to apply CS:
◮ reduce the computational cost associated with a classical PG discretization;
◮ situations with a limited budget of evaluations of F(·);
◮ deeper theoretical understanding of the PG method.
Case study: advection-diffusion-reaction (ADR) equation, with U = V = H^1_0(Ω), Ω = [0, 1]^d, and
a(u, v) = (η∇u, ∇v) + (b·∇u, v) + (ρu, v),   F(v) = (f, v).
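The bilinear form above comes from the standard integration-by-parts argument; spelled out (a routine step, not written on the slide):

    % Strong form: -div(eta grad u) + b . grad u + rho u = f in Omega, u = 0 on the boundary.
    % Multiply by a test function v in H^1_0(Omega) and integrate the diffusion term by parts:
    \int_\Omega \eta \nabla u \cdot \nabla v \, dx
      + \int_\Omega (b \cdot \nabla u)\, v \, dx
      + \int_\Omega \rho \, u \, v \, dx
      = \int_\Omega f \, v \, dx
    \quad \Longleftrightarrow \quad a(u, v) = F(v).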

  12. Related literature
Ancestors: PDE solvers based on ℓ1-minimization
1988  [J. Lavery, 1988; J. Lavery, 1989]  Inviscid Burgers' equation, conservation laws
2004  [J.-L. Guermond, 2004; J.-L. Guermond and B. Popov, 2009]  Hamilton-Jacobi, transport equation
CS techniques for PDEs
2010  [S. Jokar, V. Mehrmann, M. Pfetsch, and H. Yserentant, 2010]  Recursive mesh refinement based on CS (Poisson equation)
2015  [S. B., S. Micheletti, S. Perotto, 2015; S. B., F. Nobile, S. Micheletti, S. Perotto, 2017]  CORSING for ADR problems

  13. The Petrov-Galerkin method
Choose U^N ⊆ H^1_0(Ω) and V^M ⊆ H^1_0(Ω) with
U^N = span{ψ_1, ..., ψ_N}  (trials),   V^M = span{ϕ_1, ..., ϕ_M}  (tests).
Then we can discretize the weak problem as
Ax = y,   A_ij = a(ψ_j, ϕ_i),   y_i = F(ϕ_i),
with A ∈ C^{M×N}, y ∈ C^M.

  14. The Petrov-Galerkin method
Choose U^N ⊆ H^1_0(Ω) and V^M ⊆ H^1_0(Ω) with
U^N = span{ψ_1, ..., ψ_N}  (trials),   V^M = span{ϕ_1, ..., ϕ_M}  (tests).
Then we can discretize the weak problem as
Ax = y,   A_ij = a(ψ_j, ϕ_i),   y_i = F(ϕ_i),
with A ∈ C^{M×N}, y ∈ C^M.
We can establish the following analogy between the Petrov-Galerkin method and sampling:
◮ solution of a PDE ⟺ signal;
◮ tests (bilinear form) ⟺ measurements (inner product).
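As a concrete, purely illustrative instance, the sketch below assembles the full square PG system for the 1D Poisson problem −u'' = 1 on (0, 1), with hat trial functions on a uniform mesh and sine test functions. The sine normalization √2 sin(qπx)/(qπ) and all sizes are assumptions made for this sketch, not taken from the talk.

    import numpy as np

    # 1D Poisson: a(u, v) = (u', v') on (0, 1), homogeneous Dirichlet conditions
    N = M = 63
    h = 1.0 / (N + 1)
    xj = np.arange(1, N + 1) * h                  # interior mesh nodes (hat function peaks)
    q = np.arange(1, M + 1)                       # sine frequencies

    # closed form of a(psi_j, phi_q) for hat psi_j and phi_q = sqrt(2) sin(q pi x)/(q pi)
    Q, X = np.meshgrid(q, xj, indexing="ij")      # rows: tests, columns: trials
    A = 2.0 * np.sqrt(2.0) * np.sin(Q * np.pi * X) * (1.0 - np.cos(Q * np.pi * h)) / (h * Q * np.pi)

    # right-hand side F(phi_q) = (1, phi_q)
    y = np.sqrt(2.0) * (1.0 - np.cos(q * np.pi)) / (q * np.pi) ** 2

    u = np.linalg.solve(A, y)                     # the full square system of the next slide
    print(np.max(np.abs(u - 0.5 * xj * (1.0 - xj))))   # compare with the exact solution x(1-x)/2 at the nodes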

  15. Classical case: square matrices
When dealing with Petrov-Galerkin discretizations, one usually ends up with a big square matrix.
[Figure: a 7×7 example of the full system. Columns are indexed by the trial functions ψ_1, ..., ψ_7 and rows by the test functions ϕ_1, ..., ϕ_7; the entries are a(ψ_j, ϕ_i), the unknowns are u_1, ..., u_7, and the right-hand side is F(ϕ_1), ..., F(ϕ_7).]

  16. “Compressing” the discretization
We would like to use only m random tests instead of N, with m ≪ N...
[Figure: the same 7×7 system as in the previous slide, from which only a few rows (tests) will be kept.]

  17. Sparse recovery
...in order to obtain a reduced discretization.
[Figure: the reduced system obtained by keeping only the rows associated with the selected tests (here ϕ_2 and ϕ_5): a 2×7 matrix with entries a(ψ_j, ϕ_i), unknowns u_1, ..., u_7, and right-hand side F(ϕ_2), F(ϕ_5).]
The solution is then computed using sparse recovery techniques.

  18. CORSING (COmpRessed SolvING)
First, we define the local a-coherence [Krahmer, Ward, 2014; B., Nobile, Micheletti, Perotto, 2017]:
μ_q^N := sup_{j ∈ [N]} |a(ψ_j, ϕ_q)|^2,   ∀ q ∈ N.
CORSING algorithm:
1. Define a truncation level M and a number of measurements m;
2. Draw τ_1, ..., τ_m independently at random from [M] according to the probability p ∼ (μ_1^N, ..., μ_M^N) (up to rescaling);
3. Build A ∈ R^{m×N}, y ∈ R^m and D ∈ R^{m×m}, defined as
A_ij := a(ψ_j, ϕ_{τ_i}),   y_i := F(ϕ_{τ_i}),   D_ik := δ_ik / √(m p_{τ_i});
4. Use OMP to solve min_{z ∈ R^N} ‖D(Az − y)‖_2^2   s.t. ‖z‖_0 ≤ s.
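A minimal end-to-end sketch of these four steps, in the same illustrative 1D Poisson / hat-vs-sine setting as the earlier assembly sketch (all sizes, the sine normalization, and the plain home-made OMP are assumptions for illustration, not the talk's implementation):

    import numpy as np

    N, M, m, s = 1023, 1023, 200, 50
    h = 1.0 / (N + 1)
    xj = np.arange(1, N + 1) * h

    def a_row(q):
        # a(psi_j, phi_q) for j = 1..N (closed form for hat trials, sine tests)
        return 2.0 * np.sqrt(2.0) * np.sin(q * np.pi * xj) * (1.0 - np.cos(q * np.pi * h)) / (h * q * np.pi)

    # 1.-2. local a-coherence and random test selection
    mu = np.array([np.max(np.abs(a_row(q))) ** 2 for q in range(1, M + 1)])   # mu_q^N
    p = mu / mu.sum()                                                         # p ~ mu, rescaled
    rng = np.random.default_rng(0)
    tau = rng.choice(np.arange(1, M + 1), size=m, p=p)

    # 3. reduced system A in R^{m x N}, y in R^m, diagonal matrix D
    A = np.array([a_row(q) for q in tau])
    y = np.sqrt(2.0) * (1.0 - np.cos(tau * np.pi)) / (tau * np.pi) ** 2       # F(phi_q) with f = 1
    D = 1.0 / np.sqrt(m * p[tau - 1])

    # 4. OMP on the weighted system: min ||D(Az - y)||_2 subject to ||z||_0 <= s
    B, b = D[:, None] * A, D * y
    residual, support = b.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(B.T @ residual))))
        coef, *_ = np.linalg.lstsq(B[:, support], b, rcond=None)
        residual = b - B[:, support] @ coef
    u_hat = np.zeros(N)
    u_hat[support] = coef   # sparse CORSING approximation of the trial coefficients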

  19. Sparsity + Sensing: how to choose {ψ_j} and {ϕ_i}?
Heuristic criterion commonly used in CS: space vs. frequency.
◮ Hierarchical hat functions H_{l,k} [Smoliak, Dahmen, Griebel, Yserentant, Zienkiewicz, ...];
◮ Sine functions S_r.
[Plots: the hierarchical hat functions H_{0,0}, H_{1,0}, H_{1,1}, H_{2,0}, ..., H_{2,3} and the sine functions S_1, ..., S_5 on [0, 1].]
We name the corresponding strategies CORSING HS and SH.
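A small sketch of the two families (the exact scalings used in the talk are not recoverable from the text, so the normalizations below, as well as the function names, are assumptions):

    import numpy as np

    def hat(x):
        # reference hat on [0, 1], peak 1/2 at x = 1/2
        return np.maximum(0.0, 0.5 - np.abs(x - 0.5))

    def H(l, k, x):
        # hierarchical hat H_{l,k}, supported on [k 2^-l, (k+1) 2^-l]
        return hat(2.0 ** l * x - k) / 2.0 ** l

    def S(r, x):
        # sine function S_r
        return np.sqrt(2.0) * np.sin(r * np.pi * x) / (r * np.pi)

    x = np.linspace(0.0, 1.0, 1001)
    print(H(2, 1, x).max(), S(3, x).max())   # e.g. H_{2,1} and S_3 on [0, 1]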

  20. Homogeneous 1D Poisson problem, CORSING HS
N = 8191, s = 50, m = 1200. Test savings: TS := (N − m)/N · 100% ≈ 85%.
[Plots: exact vs. CORSING solution on [0, 1], with a zoom around x ≈ 0.4; × marks the hat functions selected by OMP; coefficients log_10 |û_{ℓ,k}| in level-based ordering.]
