Uncertainty Quantification Using Random Matrix Theory



  1. Uncertainty Quantification Using Random Matrix Theory

S Adhikari and D Tartakovsky (UCSD)
School of Engineering, University of Wales Swansea, Swansea, U.K.
Email: S.Adhikari@swansea.ac.uk
URL: http://engweb.swan.ac.uk/~adhikaris

ETH Zürich, 19 July 2007

  2. Outline

Motivation
Current methods for response UQ
Matrix variate probability density functions
Inverse of a general real symmetric random matrix
Response moments
Numerical example
Conclusions

  3. Background

In many stochastic mechanics problems we need to solve a system of linear stochastic equations:

$$ \mathbf{K}\mathbf{u} = \mathbf{f} \quad (1) $$

Here K ∈ R^{n×n} is an n × n real non-negative definite random matrix, f ∈ R^n is an n-dimensional real deterministic input vector and u ∈ R^n is an n-dimensional real uncertain output vector which we want to determine. Such systems typically arise from the discretisation of stochastic partial differential equations (e.g. in the stochastic finite element method).
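To make the setting concrete, the moments of u can always be estimated by brute-force Monte Carlo simulation. The sketch below is my own illustration (not from the talk), assuming a small tridiagonal K0 and a one-variable random part ΔK = ξ K1 with ξ ~ N(0, σ²):

```python
import numpy as np

rng = np.random.default_rng(0)
n, nsamp, sigma = 4, 20000, 0.05

# Deterministic baseline stiffness (illustrative tridiagonal matrix)
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K1 = np.eye(n)                      # assumed random-part matrix
f = np.ones(n)                      # deterministic forcing

# Monte Carlo: solve K(xi) u = f for each sample of xi ~ N(0, sigma^2)
xi = sigma * rng.standard_normal(nsamp)
u = np.array([np.linalg.solve(K0 + x * K1, f) for x in xi])

u_mean = u.mean(axis=0)             # first moment of the response
u_cov = np.cov(u, rowvar=False)     # second moment (covariance)
```

The paper's point is precisely that such sampling is expensive; the methods that follow seek the same moments analytically.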

  4. Background

In the context of linear structural mechanics, K is known as the stiffness matrix, f is the forcing vector and u is the vector of structural displacements. Often the objective is to determine the probability density function (pdf), and consequently the cumulative distribution function (cdf), of u. This allows one to calculate the reliability of the system.

It is generally difficult to obtain the probability density function (pdf) of the response. As a consequence, engineers often settle for obtaining only the first few moments (typically the first two) of the response quantity.

  5. Objectives

We propose an exact analytical method for the inverse of a real symmetric (in general non-Gaussian) random matrix of arbitrary dimension. The method is based on random matrix theory and utilizes the Jacobian of the underlying nonlinear matrix transformation. Exact expressions for the mean and covariance of the response vector are obtained in closed form.

  6. Current Approaches

The random matrix can be represented as

$$ \mathbf{K} = \mathbf{K}_0 + \Delta\mathbf{K} \quad (2) $$

where K_0 ∈ R^{n×n} is the deterministic part and the random part is

$$ \Delta\mathbf{K} = \sum_{j=1}^{m} \xi_j \mathbf{K}^{I}_{j} + \sum_{j=1}^{m}\sum_{l=1}^{m} \xi_j \xi_l \mathbf{K}^{II}_{jl} + \cdots \quad (3) $$

Here m is the number of random variables, K^I_j, K^II_{jl} ∈ R^{n×n} ∀ j, l are deterministic matrices and ξ_j ∀ j are real random variables.

  7. Perturbation based approach

Represent the response as

$$ \mathbf{u} = \mathbf{u}_0 + \sum_{j=1}^{m} \xi_j \mathbf{u}^{I}_{j} + \sum_{j=1}^{m}\sum_{l=1}^{m} \xi_j \xi_l \mathbf{u}^{II}_{jl} + \cdots \quad (4) $$

where

$$ \mathbf{u}_0 = \mathbf{K}_0^{-1}\mathbf{f} \quad (5) $$
$$ \mathbf{u}^{I}_{j} = -\mathbf{K}_0^{-1}\mathbf{K}^{I}_{j}\mathbf{u}_0, \quad \forall j \quad (6) $$
$$ \text{and} \quad \mathbf{u}^{II}_{jl} = -\mathbf{K}_0^{-1}\left[ \mathbf{K}^{II}_{jl}\mathbf{u}_0 + \mathbf{K}^{I}_{j}\mathbf{u}^{I}_{l} + \mathbf{K}^{I}_{l}\mathbf{u}^{I}_{j} \right], \quad \forall j, l \quad (7) $$
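A minimal numerical check of the first-order terms above (my own sketch, assuming a single random variable ξ with one matrix K1 playing the role of K^I_1):

```python
import numpy as np

n = 4
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K1 = np.eye(n)              # assumed K_1^I
f = np.ones(n)

K0inv = np.linalg.inv(K0)
u0 = K0inv @ f              # Eq. (5): u0 = K0^{-1} f
u1 = -K0inv @ K1 @ u0       # Eq. (6): u_1^I = -K0^{-1} K_1^I u0

# First-order response for a given realisation of xi
xi = 0.05
u_pert = u0 + xi * u1
u_exact = np.linalg.solve(K0 + xi * K1, f)

# For small xi the truncation error is O(xi^2)
err = np.linalg.norm(u_pert - u_exact)
```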

  8. Neumann expansion

Provided ‖K_0^{-1} ΔK‖_F < 1,

$$ \mathbf{K}^{-1} = \left[ \mathbf{K}_0 \left( \mathbf{I}_n + \mathbf{K}_0^{-1}\Delta\mathbf{K} \right) \right]^{-1} = \mathbf{K}_0^{-1} - \mathbf{K}_0^{-1}\Delta\mathbf{K}\,\mathbf{K}_0^{-1} + \mathbf{K}_0^{-1}\Delta\mathbf{K}\,\mathbf{K}_0^{-1}\Delta\mathbf{K}\,\mathbf{K}_0^{-1} - \cdots $$

Therefore,

$$ \mathbf{u} = \mathbf{K}^{-1}\mathbf{f} = \mathbf{u}_0 - \mathbf{T}\mathbf{u}_0 + \mathbf{T}^2\mathbf{u}_0 - \cdots \quad (8) $$

where T = K_0^{-1} ΔK ∈ R^{n×n} is a random matrix.
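The truncated Neumann series is easy to check numerically. This sketch (my own, with an assumed small ΔK) sums the first terms of Eq. (8) and compares the result with a direct solve:

```python
import numpy as np

n = 4
K0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
dK = 0.1 * np.eye(n)                      # assumed perturbation Delta K
f = np.ones(n)

K0inv = np.linalg.inv(K0)
T = K0inv @ dK                            # T = K0^{-1} Delta K
assert np.linalg.norm(T, 'fro') < 1.0     # convergence condition

# u = u0 - T u0 + T^2 u0 - ...  summed term by term
u0 = K0inv @ f
u, term = u0.copy(), u0.copy()
for _ in range(20):
    term = -T @ term
    u += term

u_exact = np.linalg.solve(K0 + dK, f)
```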

  9. Projection methods

Here one ‘projects’ the solution vector onto a complete stochastic basis. Depending on how the basis is selected, several methods have been proposed. Using the classical Polynomial Chaos (PC) projection scheme,

$$ \mathbf{u} = \sum_{j=0}^{P-1} \mathbf{u}_j \Psi_j(\boldsymbol{\xi}) \quad (9) $$

where u_j ∈ R^n ∀ j are unknown vectors and Ψ_j(ξ) are multidimensional Hermite polynomials in ξ_r.
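A one-dimensional illustration of the PC idea (my own sketch, not from the talk): solve the scalar stochastic equation (k0 + σξ)u = f, ξ ~ N(0,1), by Galerkin projection onto probabilists' Hermite polynomials He_j, the 1-D version of the chaos basis Ψ_j above. The parameters k0, σ, f are assumed values for illustration.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

k0, sigma, f, P = 10.0, 1.0, 2.0, 6

# Gauss-Hermite quadrature for expectations under N(0,1)
x, w = hermegauss(40)
w = w / np.sqrt(2.0 * np.pi)

def He(j, t):
    # probabilists' Hermite polynomial He_j evaluated at t
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(t, c)

# Galerkin system: sum_j u_j E[(k0 + sigma*xi) He_j He_k] = f E[He_k]
A = np.array([[np.sum(w * (k0 + sigma * x) * He(j, x) * He(k, x))
               for j in range(P)] for k in range(P)])
b = np.array([f * np.sum(w * He(k, x)) for k in range(P)])
u = np.linalg.solve(A, b)

mean_pc = u[0]                                   # E[u], using E[He_j] = 0 for j > 0
var_pc = sum(u[j]**2 * math.factorial(j) for j in range(1, P))
```

Because E[He_j He_k] = j! δ_jk, the chaos coefficients give the response moments directly, which is the practical appeal of the projection approach.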

  10. A partial summary

Methods                                  Sub-methods
1. Perturbation based methods            First and second order perturbation [1,2],
                                         Neumann expansion [3,4],
                                         improved perturbation method [5].
2. Projection methods                    Polynomial chaos expansion [6],
                                         random eigenfunction expansion [4],
                                         stochastic reduced basis method [7-9],
                                         Wiener-Askey chaos expansion [10-12],
                                         domain decomposition method [13,14].
3. Monte Carlo simulation                Simulation methods [15,16],
   and other methods                     analytical methods in references [17-21],
                                         exact solutions for beams [22,23].

  11. The Central Idea of the Proposed Work

We aim to obtain the moments of u (or part of u) from a system of linear stochastic equations

$$ \mathbf{K}\mathbf{u} = \mathbf{f} \quad (10) $$

exploiting the fact that we ‘know’ the probability density function of the matrix K. For different physical problems (e.g., flow through stochastic porous media, circuit theory, static structural mechanics, etc.) the matrix K (i.e., its pdf) can be obtained in a manner specific to the discipline. The ‘final problem’ to be solved is, however, the same, and it is the central topic of this paper.

  12. Matrix variate distributions

The probability density function of a random matrix can be defined in a manner similar to that of a random variable. If A is an n × m real random matrix, the matrix variate probability density function of A ∈ R_{n,m}, denoted as p_A(A), is a mapping from the space of n × m real matrices to the real line, i.e., p_A(A): R_{n,m} → R.

  13. Gaussian random matrix

The random matrix X ∈ R_{n,p} is said to have a matrix variate Gaussian distribution with mean matrix M ∈ R_{n,p} and covariance matrix Σ ⊗ Ψ, where Σ ∈ R^+_n and Ψ ∈ R^+_p, provided the pdf of X is given by

$$ p_{\mathbf{X}}(\mathbf{X}) = (2\pi)^{-np/2} |\boldsymbol{\Sigma}|^{-p/2} |\boldsymbol{\Psi}|^{-n/2} \,\mathrm{etr}\left\{ -\tfrac{1}{2} \boldsymbol{\Sigma}^{-1} (\mathbf{X}-\mathbf{M}) \boldsymbol{\Psi}^{-1} (\mathbf{X}-\mathbf{M})^{T} \right\} \quad (11) $$

This distribution is usually denoted as X ∼ N_{n,p}(M, Σ ⊗ Ψ).

  14. Symmetric Gaussian matrix

If Y ∈ R^{n×n} is a symmetric Gaussian random matrix then its pdf is given by

$$ p_{\mathbf{Y}}(\mathbf{Y}) = (2\pi)^{-n(n+1)/4} \left| \mathbf{B}_n^{T} (\boldsymbol{\Sigma} \otimes \boldsymbol{\Psi}) \mathbf{B}_n \right|^{-1/2} \,\mathrm{etr}\left\{ -\tfrac{1}{2} \boldsymbol{\Sigma}^{-1} (\mathbf{Y}-\mathbf{M}) \boldsymbol{\Psi}^{-1} (\mathbf{Y}-\mathbf{M})^{T} \right\} \quad (12) $$

This is denoted as Y = Y^T ∼ SN_{n,n}(M, B_n^T (Σ ⊗ Ψ) B_n). The elements of the translation matrix B_n ∈ R^{n² × n(n+1)/2} are

$$ (\mathbf{B}_n)_{ij,gh} = \tfrac{1}{2} \left( \delta_{ig}\delta_{jh} + \delta_{ih}\delta_{jg} \right), \quad i \le n, \; j \le n, \; g \le h \le n \quad (13) $$

  15. Matrix variate Gamma distribution

An n × n symmetric positive definite random matrix W is said to have a matrix variate gamma distribution with parameters a and Ψ ∈ R^+_n if its pdf is given by

$$ p_{\mathbf{W}}(\mathbf{W}) = \left\{ \Gamma_n(a)\, |\boldsymbol{\Psi}|^{-a} \right\}^{-1} |\mathbf{W}|^{a - \frac{1}{2}(n+1)} \,\mathrm{etr}\left\{ -\boldsymbol{\Psi}\mathbf{W} \right\}; \quad \Re(a) > \tfrac{1}{2}(n-1) $$

This distribution is usually denoted as W ∼ G_n(a, Ψ). Here the multivariate gamma function is

$$ \Gamma_n(a) = \pi^{\frac{1}{4}n(n-1)} \prod_{k=1}^{n} \Gamma\!\left( a - \tfrac{1}{2}(k-1) \right); \quad \text{for } \Re(a) > (n-1)/2 $$

  16. Wishart matrix

An n × n symmetric positive definite random matrix S is said to have a Wishart distribution with parameters p ≥ n and Σ ∈ R^+_n if its pdf is given by

$$ p_{\mathbf{S}}(\mathbf{S}) = \left\{ 2^{\frac{1}{2}np}\, \Gamma_n\!\left( \tfrac{1}{2}p \right) |\boldsymbol{\Sigma}|^{\frac{1}{2}p} \right\}^{-1} |\mathbf{S}|^{\frac{1}{2}(p-n-1)} \,\mathrm{etr}\left\{ -\tfrac{1}{2}\boldsymbol{\Sigma}^{-1}\mathbf{S} \right\} \quad (14) $$

This distribution is usually denoted as S ∼ W_n(p, Σ). Note: G_n(a, Ψ) = W_n(2a, Ψ^{-1}/2), so the gamma and Wishart are equivalent distributions.
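Wishart matrices are straightforward to sample and to check against their known first moment E[S] = pΣ. A quick numpy-only sketch of mine, using the standard construction S = XXᵀ with the columns of X drawn i.i.d. from N(0, Σ):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, nsamp = 3, 10, 5000
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])   # assumed scale matrix
L = np.linalg.cholesky(Sigma)

# S = X X^T with X = L Z, Z an n x p standard Gaussian matrix,
# gives S ~ W_n(p, Sigma); each S is symmetric positive definite for p >= n
X = L @ rng.standard_normal((nsamp, n, p))
S = X @ X.transpose(0, 2, 1)

S_mean = S.mean(axis=0)
# Known first moment of the Wishart distribution: E[S] = p * Sigma
```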

  17. Some books

Muirhead, Aspects of Multivariate Statistical Theory, John Wiley and Sons, 1982.
Mehta, Random Matrices, Academic Press, 1991.
Gupta and Nagar, Matrix Variate Distributions, Chapman & Hall/CRC, 2000.

  18. Inverse of a scalar

$$ ku = f \quad (15) $$

where k, u, f ∈ R. Suppose the pdf of k is p_k(k) and we are interested in deriving the pdf of

$$ h = k^{-1} \quad (16) $$

The Jacobian of the above transformation is

$$ J = \left| \frac{\partial h}{\partial k} \right| = \left| -k^{-2} \right| = |k|^{-2} \quad (17) $$

Using the Jacobian, the pdf of h can be obtained as

$$ p_h(h)\,(dh) = p_k(k)\,(dk) \quad (18) $$
$$ \text{or} \quad p_h(h) = \frac{1}{\left| \partial h / \partial k \right|}\, p_k(k) \quad (19) $$
$$ \text{or} \quad p_h(h) = \frac{1}{J(k = h^{-1})}\, p_k\!\left( h^{-1} \right) = |h|^{-2}\, p_k\!\left( h^{-1} \right) \quad (20) $$
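Eq. (20) is easy to verify by sampling. The sketch below (mine, not from the talk) takes k lognormal with assumed parameters, so that k > 0 and h = 1/k is well defined, and compares a normalised histogram of h against |h|⁻² p_k(1/h):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, s = np.log(2.0), 0.3           # assumed lognormal parameters for k

def p_k(k):
    # lognormal pdf of k
    return np.exp(-(np.log(k) - mu)**2 / (2 * s**2)) / (k * s * np.sqrt(2 * np.pi))

k = rng.lognormal(mu, s, size=200_000)
h = 1.0 / k                         # transformed variable h = k^{-1}

# Eq. (20): p_h(h) = |h|^{-2} p_k(1/h)
hist, edges = np.histogram(h, bins=60, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
p_analytical = centres**-2 * p_k(1.0 / centres)

max_err = np.max(np.abs(hist - p_analytical))
```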

  19. The case of n × n matrices

Suppose the matrix variate probability density function of the non-singular matrix K is given by p_K(K): R^{n×n} → R. Our interest is in the pdf (i.e., the joint pdf of the elements) of

$$ \mathbf{H} = \mathbf{K}^{-1} \in \mathbb{R}^{n \times n} \quad (21) $$

The elements of H are complicated nonlinear functions of the elements of K (i.e., even if the elements of K are jointly Gaussian, the elements of H will not be jointly Gaussian). Moreover, H may not have any banded structure even if K is banded.

  20. Pdf transformation in matrix space

The procedure to obtain the pdf of H is very similar to that of the univariate case:

$$ p_{\mathbf{H}}(\mathbf{H})\,(d\mathbf{H}) = p_{\mathbf{K}}(\mathbf{K})\,(d\mathbf{K}) \quad (22) $$
$$ \text{or} \quad p_{\mathbf{H}}(\mathbf{H}) = \frac{1}{\left| d\mathbf{H} / d\mathbf{K} \right|}\, p_{\mathbf{K}}(\mathbf{K}) \quad (23) $$
$$ \text{or} \quad p_{\mathbf{H}}(\mathbf{H}) = \frac{1}{J}\bigg|_{\mathbf{K} = \mathbf{H}^{-1}}\, p_{\mathbf{K}}\!\left( \mathbf{H}^{-1} \right) \quad (24) $$
$$ = |\mathbf{H}|^{-(n+1)}\, p_{\mathbf{K}}\!\left( \mathbf{H}^{-1} \right) \quad (25) $$

For the univariate case (n = 1), Eq. (24) reduces to the familiar expression obtained in Eq. (19).
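The Jacobian appearing in Eq. (25) can also be checked numerically. For a symmetric 2 × 2 matrix the map K → K⁻¹ acts on the n(n+1)/2 = 3 independent entries, and the determinant of its finite-difference Jacobian should equal |K|^{-(n+1)} = |K|⁻³. A sketch of mine:

```python
import numpy as np

def pack(K):
    # independent entries of a symmetric 2x2 matrix
    return np.array([K[0, 0], K[0, 1], K[1, 1]])

def unpack(v):
    return np.array([[v[0], v[1]], [v[1], v[2]]])

def inv_map(v):
    return pack(np.linalg.inv(unpack(v)))

K = np.array([[3.0, 1.0], [1.0, 2.0]])   # an arbitrary SPD test matrix
v0 = pack(K)

# Finite-difference Jacobian of the 3-parameter map v -> pack(unpack(v)^{-1})
eps = 1e-6
J = np.zeros((3, 3))
for j in range(3):
    dv = np.zeros(3); dv[j] = eps
    J[:, j] = (inv_map(v0 + dv) - inv_map(v0 - dv)) / (2 * eps)

jac_fd = abs(np.linalg.det(J))
jac_theory = abs(np.linalg.det(K))**-(2 + 1)   # |K|^{-(n+1)} with n = 2
```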
