

  1. 50th AIAA SDM Conference, 4-7 May 2009 An Efficient Computational Solution Scheme of the Random Eigenvalue Problems Rajib Chowdhury & Sondipon Adhikari School of Engineering Swansea University Swansea, UK

  2. Outline
     • Introduction
     • Random Eigenvalue Problem
     • High Dimensional Model Representation (HDMR)
     • Examples
     • Conclusions

  3. Sources of uncertainty
     (a) parametric uncertainty, e.g., uncertainty in geometric parameters, friction coefficients, or the strength of the materials involved;
     (b) model inadequacy, arising from a lack of scientific knowledge about the model, which is a priori unknown;
     (c) experimental error: uncertain and unknown errors percolate into the model when it is calibrated against experimental results;
     (d) computational uncertainty, e.g., machine precision, error tolerances, and the so-called 'h' and 'p' refinements in finite element analysis.

  4. Random Eigenvalue Problem
     $\mathbf{M}\,\ddot{\mathbf{x}}(t) + \mathbf{C}\,\dot{\mathbf{x}}(t) + \mathbf{K}\,\mathbf{x}(t) = \mathbf{p}(t)$
     • Due to the presence of uncertainties, the mass, damping and stiffness matrices are random matrices.
     • The primary objectives are
       ♦ to quantify the uncertainties in the system matrices;
       ♦ to estimate the variability of the system responses.

  5. Random Eigenvalue Problem
     • Random eigenvalue problem of a linear structural system:
       $\mathbf{K}(\mathbf{X})\,\boldsymbol{\Phi}(\mathbf{X}) = \boldsymbol{\Lambda}(\mathbf{X})\,\mathbf{M}(\mathbf{X})\,\boldsymbol{\Phi}(\mathbf{X})$
     • Main issues
       ♦ To find the probabilistic characteristics of the eigenpairs.
       ♦ To find their joint statistics (moments, correlations).
     • Several approaches to the random eigenvalue problem are available, based on
       ♦ Perturbation method (Boyce, 1968; Zhang & Ellingwood, 1995)
       ♦ Iteration method (Boyce, 1968)
       ♦ Ritz method (Mehlhose, 1999)
       ♦ Crossing theory (Grigoriu, 1992)
       ♦ Stochastic reduced basis (Nair & Keane, 2003)
       ♦ Asymptotic method (Adhikari, 2006)
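As a concrete illustration of the statement above (not part of the original slides), a brute-force Monte Carlo treatment of the random eigenvalue problem for a hypothetical 3-DOF fixed-free spring-mass chain with lognormal stiffnesses could look as follows; `scipy.linalg.eigh` solves the generalized symmetric eigenproblem $\mathbf{K}\boldsymbol{\phi} = \lambda\,\mathbf{M}\boldsymbol{\phi}$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def system_matrices(k):
    # fixed-free 3-DOF chain: springs k[0] (to the wall), k[1], k[2]; unit masses
    K = np.array([[k[0] + k[1], -k[1], 0.0],
                  [-k[1], k[1] + k[2], -k[2]],
                  [0.0, -k[2], k[2]]])
    M = np.eye(3)
    return K, M

samples = []
for _ in range(2000):
    k = np.exp(rng.normal(0.0, 0.1, size=3))        # lognormal stiffnesses, median 1
    K, M = system_matrices(k)
    samples.append(eigh(K, M, eigvals_only=True))   # generalized eigenvalues, ascending

samples = np.array(samples)
print("mean:", samples.mean(axis=0))
print("std :", samples.std(axis=0))
```

Monte Carlo is the reference solution the HDMR scheme later tries to undercut in cost: every sample requires a full eigensolution.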

  6. Perturbation Method
     • In the mean-centred approach, $\boldsymbol{\alpha}$ is the mean of $\mathbf{x}$.
     • Alternatively, $\boldsymbol{\alpha}$ can be chosen such that any given moment of each eigenvalue is calculated most accurately.
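A minimal numeric sketch of the mean-centred first-order perturbation idea (an illustration, not the authors' code): each eigenvalue is linearised about the parameter mean $\boldsymbol{\alpha}$, its gradient estimated by central finite differences, and the covariance propagated as $\mathrm{Var}[\lambda_j] \approx \mathbf{g}_j^{T}\,\boldsymbol{\Sigma}\,\mathbf{g}_j$. The 2-DOF chain and the parameter statistics are assumptions for the example:

```python
import numpy as np

def eigvals(k):
    # hypothetical 2-DOF fixed-free chain, unit masses (M = I)
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1], k[1]]])
    return np.linalg.eigvalsh(K)

alpha = np.array([1.0, 1.0])   # mean of the random stiffnesses
cov = np.diag([0.01, 0.01])    # their covariance matrix

h = 1e-6                       # central finite differences for d(lambda_j)/d(x_i)
grad = np.zeros((2, 2))
for i in range(2):
    dx = np.zeros(2); dx[i] = h
    grad[:, i] = (eigvals(alpha + dx) - eigvals(alpha - dx)) / (2 * h)

mean_lam = eigvals(alpha)                             # first-order mean: lambda(alpha)
var_lam = np.einsum('ji,ik,jk->j', grad, cov, grad)   # g_j^T Sigma g_j
print("means:", mean_lam, "variances:", var_lam)
```

This is the cheapest approach on the slide's list, but it is accurate only for small parameter variability, which motivates the HDMR alternative that follows.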

  7. Multidimensional Integrals

  8. Multidimensional Integrals

  9. Multidimensional Integrals

  10. Moments of Eigenvalues

  11. Moments of Eigenvalues

  12. Moments of Eigenvalues

  13. Multivariate Gaussian Case

  14. Maximum Entropy pdf

  15. Maximum Entropy pdf

  16. Maximum Entropy pdf
      • With three moments
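The three-moment maximum-entropy density mentioned on slide 16 can be sketched numerically. This is an illustrative reconstruction rather than the authors' scheme: the bounded support [0, 5], the target moments, and the use of `scipy.optimize` are all assumptions. With constraints $E[x^k] = \mu_k$, $k = 1, 2, 3$, the MaxEnt pdf has the exponential-polynomial form $p(x) \propto \exp(-(\ell_1 x + \ell_2 x^2 + \ell_3 x^3))$, and the multipliers minimise the convex dual $\log Z(\boldsymbol{\ell}) + \boldsymbol{\ell} \cdot \boldsymbol{\mu}$:

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 5.0, 2001)    # assumed bounded support for the eigenvalue
dx = x[1] - x[0]
mu = np.array([1.0, 1.2, 1.7])     # illustrative target moments E[x], E[x^2], E[x^3]
powers = np.vstack([x, x**2, x**3])

def dual(l):
    # convex dual: log-partition + <l, mu>; minimised when the moments match
    Z = np.exp(-l @ powers).sum() * dx
    return np.log(Z) + l @ mu

res = minimize(dual, np.zeros(3), method='Nelder-Mead',
               options={'maxiter': 5000, 'xatol': 1e-9, 'fatol': 1e-12})
p = np.exp(-res.x @ powers)
p /= p.sum() * dx                  # normalise the density
moments = [(x**k * p).sum() * dx for k in (1, 2, 3)]
print(moments)
```

At the minimum the gradient of the dual vanishes, which is exactly the statement that the fitted density reproduces the prescribed moments.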

  17. HDMR
      System: input $\mathbf{x} \in \mathbb{R}^N$, output $y(\mathbf{x}) \in \mathbb{R}$.
      $y(\mathbf{x}) = y_0 + \sum_{i=1}^{N} y_i(x_i) + \sum_{1 \le i_1 < i_2 \le N} y_{i_1 i_2}(x_{i_1}, x_{i_2}) + \cdots + y_{12\cdots N}(x_1, \ldots, x_N)$
      • $\hat{y}_1(\mathbf{x})$: first-order approximation
      • $\hat{y}_2(\mathbf{x})$: second-order approximation (2D cooperative effects)
      • $\hat{y}_S(\mathbf{x})$: S-order approximation (SD cooperative effects)
      Conjecture: the component functions arising in the proposed decomposition exhibit insignificant S-order cooperative effects as S → N.
      (Rabitz & Alis, 1999; Alis & Rabitz, 2001)

  18. HDMR
      • Lower-order approximations, built around a reference point $\mathbf{c} = (c_1, \ldots, c_N)$.
      First-order approximation:
      $\hat{y}_1(\mathbf{x}) \equiv \sum_{i=1}^{N} y(c_1, \ldots, c_{i-1}, x_i, c_{i+1}, \ldots, c_N) - (N-1)\, y(\mathbf{c})$
      where $y_i(x_i) = y(c_1, \ldots, c_{i-1}, x_i, c_{i+1}, \ldots, c_N)$ and $y_0 = y(\mathbf{c})$.
      Second-order approximation:
      $\hat{y}_2(\mathbf{x}) \equiv \sum_{i_1 < i_2} y(c_1, \ldots, x_{i_1}, \ldots, x_{i_2}, \ldots, c_N) - (N-2) \sum_{i=1}^{N} y(c_1, \ldots, x_i, \ldots, c_N) + \frac{(N-1)(N-2)}{2}\, y(\mathbf{c})$
      Each one- and two-variable cut is then interpolated at sample points $x_i^{(j)}$:
      $y(c_1, \ldots, x_i, \ldots, c_N) \cong \sum_{j=1}^{n} \phi_j(x_i)\, y(c_1, \ldots, x_i^{(j)}, \ldots, c_N)$
      $y(c_1, \ldots, x_{i_1}, \ldots, x_{i_2}, \ldots, c_N) \cong \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \phi_{j_1}(x_{i_1})\, \phi_{j_2}(x_{i_2})\, y(c_1, \ldots, x_{i_1}^{(j_1)}, \ldots, x_{i_2}^{(j_2)}, \ldots, c_N)$
      with $\phi_j$ the interpolation (shape) functions.
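The first- and second-order cut-HDMR formulas on this slide can be checked directly on a toy function. This is an illustrative sketch, not the authors' implementation; the test function and reference point are arbitrary:

```python
import numpy as np
from itertools import combinations

def hdmr1(y, x, c):
    """First-order cut-HDMR: sum_i y(c1,..,x_i,..,cN) - (N-1) y(c)."""
    N = len(x)
    total = -(N - 1) * y(c)
    for i in range(N):
        z = c.copy(); z[i] = x[i]
        total += y(z)
    return total

def hdmr2(y, x, c):
    """Second-order cut-HDMR: adds all two-variable cuts with the slide's weights."""
    N = len(x)
    total = (N - 1) * (N - 2) / 2 * y(c)
    for i in range(N):
        z = c.copy(); z[i] = x[i]
        total -= (N - 2) * y(z)
    for i, j in combinations(range(N), 2):
        z = c.copy(); z[i] = x[i]; z[j] = x[j]
        total += y(z)
    return total

f = lambda v: v[0]**2 + np.sin(v[1]) + v[0] * v[2]  # one 2D cooperative term: x0*x2
c = np.zeros(3)
x = np.array([0.5, 1.0, -0.3])
print(f(x), hdmr1(f, x, c), hdmr2(f, x, c))
```

The first-order approximation misses exactly the cooperative term $x_0 x_2$ (an error of 0.15 at this point), while the second-order approximation reproduces this function exactly, since it has no three-variable cooperative effects.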

  19. Convergence Issue
      • Two-dimensional Taylor series expansion about $x_1 = c_1$, $x_2 = c_2$:
      $y(x_1, x_2) = y(c_1, c_2) + \frac{\partial y(c_1,c_2)}{\partial x_1}(x_1 - c_1) + \frac{\partial y(c_1,c_2)}{\partial x_2}(x_2 - c_2) + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_1^2}(x_1 - c_1)^2 + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_2^2}(x_2 - c_2)^2 + \frac{\partial^2 y(c_1,c_2)}{\partial x_1 \partial x_2}(x_1 - c_1)(x_2 - c_2) + \cdots$
      • One-dimensional Taylor series expansions:
      $y(x_1, c_2) = y(c_1, c_2) + \frac{\partial y(c_1,c_2)}{\partial x_1}(x_1 - c_1) + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_1^2}(x_1 - c_1)^2 + \cdots$ (expansion at $x_1 = c_1$)
      $y(c_1, x_2) = y(c_1, c_2) + \frac{\partial y(c_1,c_2)}{\partial x_2}(x_2 - c_2) + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_2^2}(x_2 - c_2)^2 + \cdots$ (expansion at $x_2 = c_2$)
      (Li et al., 2001)

  20. Convergence Issue
      • Two-dimensional Taylor series expansion:
      $y(x_1, x_2) = y(c_1, c_2) + \frac{\partial y(c_1,c_2)}{\partial x_1}(x_1 - c_1) + \frac{\partial y(c_1,c_2)}{\partial x_2}(x_2 - c_2) + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_1^2}(x_1 - c_1)^2 + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_2^2}(x_2 - c_2)^2 + \frac{\partial^2 y(c_1,c_2)}{\partial x_1 \partial x_2}(x_1 - c_1)(x_2 - c_2) + \cdots$
      where the mixed term is the 2D cooperative effect.
      • Sum of the two one-dimensional Taylor series:
      $y(x_1, c_2) + y(c_1, x_2) - y(c_1, c_2) = y(c_1, c_2) + \frac{\partial y(c_1,c_2)}{\partial x_1}(x_1 - c_1) + \frac{\partial y(c_1,c_2)}{\partial x_2}(x_2 - c_2) + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_1^2}(x_1 - c_1)^2 + \frac{1}{2}\frac{\partial^2 y(c_1,c_2)}{\partial x_2^2}(x_2 - c_2)^2 + \cdots$
      which reproduces every term of the 2D expansion except the cooperative ones.
      (Li et al., 2001)

  21. Errors in HDMR Approximation
      • Residual error between the exact $y(\mathbf{x})$ and the first-order approximation $\hat{y}(\mathbf{x})$:
      $y(\mathbf{x}) - \hat{y}(\mathbf{x}) = \sum_{i_1 < i_2} \sum_{j_1=1}^{\infty} \sum_{j_2=1}^{\infty} \frac{1}{j_1!\, j_2!} \frac{\partial^{\,j_1 + j_2} y(\mathbf{c})}{\partial x_{i_1}^{j_1} \partial x_{i_2}^{j_2}} (x_{i_1} - c_{i_1})^{j_1} (x_{i_2} - c_{i_2})^{j_2} + \cdots$
      • $\hat{y}(\mathbf{x})$ is a reduced-dimensional approximation: only N one-dimensional component functions are required, as opposed to one N-dimensional function for $y(\mathbf{x})$.
      • If the higher partial derivatives are negligibly small, $\hat{y}(\mathbf{x})$ provides a convenient approximation of $y(\mathbf{x})$.
      • The first-order HDMR expansion is the sum of all Taylor series terms that contain only the variable $x_i$; similarly, the second-order HDMR expansion is the sum of all Taylor series terms that contain only the variables $x_{i_1}$ and $x_{i_2}$. Therefore any truncated HDMR expansion provides a better approximation of $y(\mathbf{x})$ than any Taylor series truncated at the same order (e.g., FORM/SORM).
      (Li et al., 2001)

  22. HDMR (Continued)
      First-order approximation with interpolation, about the reference point $\mathbf{c}$:
      $\hat{y}(\mathbf{x}) \cong \sum_{i=1}^{N} \sum_{j=1}^{n} \phi_j(x_i)\, y(c_1, \ldots, c_{i-1}, x_i^{(j)}, c_{i+1}, \ldots, c_N) - (N-1)\, y(\mathbf{c})$
      where $\phi_j$ are the interpolation functions and the coefficients $y(c_1, \ldots, x_i^{(j)}, \ldots, c_N)$ are evaluated along one variable at a time through $\mathbf{c}$.
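The interpolated first-order surrogate of this slide can be sketched as follows (an illustration, not the authors' code; the additive test function, the node grids, and the Lagrange shape functions are assumptions). Each one-variable cut through the reference point is sampled at n nodes and interpolated, so the surrogate can afterwards be evaluated anywhere without further expensive model calls:

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    # Lagrange interpolation of (xs, ys) evaluated at x
    val = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        w = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                w *= (x - xk) / (xj - xk)
        val += w * yj
    return val

def build_surrogate(y, c, nodes):
    """First-order cut-HDMR surrogate from samples along each coordinate axis."""
    N = len(c)
    y0 = y(c)
    cuts = []
    for i in range(N):
        vals = []
        for xj in nodes[i]:
            z = c.copy(); z[i] = xj
            vals.append(y(z))            # one expensive model run per node
        cuts.append(np.array(vals))
    def surrogate(x):
        total = -(N - 1) * y0
        for i in range(N):
            total += lagrange_eval(nodes[i], cuts[i], x[i])
        return total
    return surrogate

f = lambda v: v[0]**2 + 2 * v[1] + 0.5 * v[2]**2   # additive test function
c = np.zeros(3)
nodes = [np.linspace(-1, 1, 5)] * 3                # n = 5 sample points per variable
s = build_surrogate(f, c, nodes)
x = np.array([0.3, -0.7, 0.9])
print(f(x), s(x))   # first-order HDMR is exact for an additive y
```

Building this surrogate cost 15 one-variable samples plus one reference evaluation, matching the $(n-1)N + 1$ count of slide 24 once the shared reference values are reused.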

  23. HDMR (Continued)
      Second-order approximation with interpolation:
      $\hat{y}_2(\mathbf{x}) \cong \sum_{i_1 < i_2} \sum_{j_1=1}^{n} \sum_{j_2=1}^{n} \phi_{j_1}(x_{i_1})\, \phi_{j_2}(x_{i_2})\, y(c_1, \ldots, x_{i_1}^{(j_1)}, \ldots, x_{i_2}^{(j_2)}, \ldots, c_N) - (N-2) \sum_{i=1}^{N} \sum_{j=1}^{n} \phi_j(x_i)\, y(c_1, \ldots, x_i^{(j)}, \ldots, c_N) + \frac{(N-1)(N-2)}{2}\, y(\mathbf{c})$
      where the coefficients are now evaluated over two variables at a time through $\mathbf{c}$.

  24. HDMR (Continued)
      • Computational effort (calculating the coefficients): number of FE analyses for a linear/nonlinear problem
        ♦ $y(\mathbf{c})$: 1 FEA
        ♦ $y(c_1, \ldots, x_i^{(j)}, \ldots, c_N)$, $i = 1, \ldots, N$; $j = 1, \ldots, n$: $nN$ FEA
        ♦ $y(c_1, \ldots, x_{i_1}^{(j_1)}, \ldots, x_{i_2}^{(j_2)}, \ldots, c_N)$, $i_1, i_2 = 1, \ldots, N$; $j_1, j_2 = 1, \ldots, n$: $N(N-1)\,n^2/2$ FEA
      • Totals:
        ♦ First-order: $(n-1)N + 1$ (linear in N)
        ♦ Second-order: $N(N-1)(n-1)^2/2 + (n-1)N + 1$ (quadratic in N)
      (Chowdhury & Rao, 2009)
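The totals on this slide are easy to tabulate; a small helper makes the scaling in the number of random variables N and sample points per variable n explicit:

```python
def fea_count_first(N, n):
    # first-order cut-HDMR: (n-1)N + 1 FE analyses, linear in N
    return (n - 1) * N + 1

def fea_count_second(N, n):
    # second-order cut-HDMR: N(N-1)(n-1)^2/2 extra FE analyses, quadratic in N
    return N * (N - 1) * (n - 1) ** 2 // 2 + (n - 1) * N + 1

# e.g. N = 10 random variables, n = 5 sample points per variable
print(fea_count_first(10, 5), fea_count_second(10, 5))   # 41 and 761
```

Even the quadratic second-order count stays far below the exponential cost of a full tensor-product grid ($n^N = 5^{10} \approx 10^7$ runs for the same example), which is the efficiency claim of the title.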
