THE NATURE OF RANDOM SYSTEM MATRICES IN STRUCTURAL DYNAMICS

S. Adhikari
Department of Engineering, University of Cambridge
Trumpington Street, Cambridge CB2 1PZ (U.K.)

May 2001
Outline of the Talk

• Introduction
• System randomness: Probabilistic approach
• Parametric and non-parametric modeling
• Maximum entropy principle
• Gaussian Orthogonal Ensembles (GOE)
• Random rod example
• Conclusions
Random Systems

Equations of motion:
\[
M \ddot{y}(t) + C \dot{y}(t) + K y(t) = p(t) \quad (1)
\]
where M, C and K are respectively the mass, damping and stiffness matrices, y(t) is the vector of generalized coordinates and p(t) is the applied forcing function.

We consider randomness of the system matrices as
\[
M = \bar{M} + \delta M, \quad C = \bar{C} + \delta C, \quad K = \bar{K} + \delta K. \quad (2)
\]
Here, \bar{(\bullet)} and \delta(\bullet) denote the nominal (deterministic) and random parts of (\bullet) respectively.
Probabilistic Approach

1. Parametric modeling: The Stochastic Finite Element Method (SFEM)

• The probability density function p_q(q) of the random vector q \in \mathbb{R}^l has to be constructed from the random fields describing the geometry, boundary conditions and constitutive equations by discretization of the fields.
• The mappings q \mapsto G(\bar{q} + q): \mathbb{R}^l \to \mathbb{R}^{N \times N}, where G denotes M, C or K, have to be explicitly constructed. For an analytical approach, this step often requires linearization of the functions.
• For Monte Carlo simulation: re-assembly of the element matrices is required for each sample (see the sketch after this list).

2. Non-parametric modeling: Direct construction of the pdf of M, C and K without having to determine the uncertain local parameters of a FE model.
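A minimal sketch of the parametric Monte Carlo route is given below, assuming a fixed-fixed rod discretized into two-node bar elements; the element count, nominal stiffness AE0 and perturbation level eps are hypothetical illustrative values, not taken from the talk. The point it illustrates is that the global matrix must be re-assembled for every sample:

```python
import numpy as np

rng = np.random.default_rng(0)

def assemble_stiffness(AE, L_e, n_e):
    """Assemble the global stiffness matrix of a fixed-fixed rod from element values AE[e]."""
    K = np.zeros((n_e + 1, n_e + 1))
    for e in range(n_e):
        k_e = AE[e] / L_e * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += k_e
    return K[1:-1, 1:-1]          # remove the two clamped end degrees of freedom

n_e, L, AE0, eps = 10, 1.0, 1.0, 0.1   # illustrative values
L_e = L / n_e
samples = []
for _ in range(500):
    AE = AE0 * (1.0 + eps * rng.standard_normal(n_e))  # sampled element parameters
    samples.append(assemble_stiffness(AE, L_e, n_e))   # re-assembly for every sample
K_mean = np.mean(samples, axis=0)
print(K_mean.shape)                                    # (n_e - 1, n_e - 1)
```

The non-parametric route avoids this per-sample re-assembly by modeling the pdf of the assembled matrix directly, as in the Soize model discussed later.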
Maximum Entropy Principle

What is entropy? A measure of uncertainty.

For a continuous random variable x \in D, Shannon's measure of entropy (1948):
\[
S(p(x)) = -\int_D p(x) \ln p(x)\, \mathrm{d}x
\]
Constraint:
\[
\int_D p(x)\, \mathrm{d}x = 1
\]
Philosophy of Jaynes' Maximum Entropy Principle (1957):
• Speak the truth and nothing but the truth.
• Make use of all the information that is given and scrupulously avoid making assumptions about information that is not available.
Maximum Entropy Principle

Only the mean is known. Additional constraint:
\[
\int_D x\, p(x)\, \mathrm{d}x = m
\]
Construct the Lagrangian as
\[
\mathcal{L} = -\int_D p(x) \ln p(x)\, \mathrm{d}x
- \lambda_0 \left( \int_D p(x)\, \mathrm{d}x - 1 \right)
- \lambda_1 \left( \int_D x\, p(x)\, \mathrm{d}x - m \right)
= \int_D g(p(x))\, \mathrm{d}x
\]
where
\[
g(p(x)) = -p(x) \ln p(x) - \lambda_0 p(x) - \lambda_1 x\, p(x) + \lambda_0 + m\lambda_1 \quad (3)
\]
Maximum Entropy Principle

From the calculus of variations, for \delta\mathcal{L} = 0 it is required that g(p(x)) satisfy the Euler-Lagrange equation
\[
\frac{\partial g(p(x))}{\partial p(x)} - \frac{\partial}{\partial x}\left[ \frac{\partial g(p(x))}{\partial p'(x)} \right] = 0 \quad (4)
\]
Substituting g(p(x)) from (3), equation (4) gives
\[
-\ln p(x) - 1 - \lambda_0 - \lambda_1 x = 0
\quad \text{or} \quad
p(x) = A e^{-\lambda_1 x}
\]
That is, the exponential distribution. A and \lambda_1 should be determined from the constraint equations. The analysis can be extended to vector-valued random variables and random processes.

If the mean is unknown then p(x) is constant, i.e., the uniform distribution. This is also known as Laplace's principle of insufficient reason.
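As a quick numerical check of this result (assuming D = [0, \infty), which the slides leave implicit), the two constraints give A/\lambda_1 = 1 and A/\lambda_1^2 = m, so \lambda_1 = 1/m and A = 1/m; the sketch below verifies both constraints by quadrature:

```python
import numpy as np
from scipy.integrate import quad

m = 2.5                                   # illustrative prescribed mean
lam1, A = 1.0 / m, 1.0 / m                # exponential maximum-entropy solution
p = lambda x: A * np.exp(-lam1 * x)

print(np.isclose(quad(p, 0, np.inf)[0], 1.0))                 # normalisation constraint
print(np.isclose(quad(lambda x: x * p(x), 0, np.inf)[0], m))  # mean constraint
```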
Maximum Entropy Principle

Mean and standard deviation are known. Additional constraint:
\[
\int_D (x - m)^2 p(x)\, \mathrm{d}x = \sigma^2
\]
Following the previous steps,
\[
p(x) = A e^{-\lambda_1 x - \lambda_2 x^2} \quad (5)
\]
That is, the Gaussian distribution.
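A similar check for this case (assuming D = (-\infty, \infty)): matching Eq. (5) to the Gaussian density with mean m and standard deviation \sigma gives \lambda_2 = 1/(2\sigma^2), \lambda_1 = -m/\sigma^2 and A = e^{-m^2/(2\sigma^2)}/(\sigma\sqrt{2\pi}); the sketch below verifies the three constraints numerically with illustrative values of m and \sigma:

```python
import numpy as np
from scipy.integrate import quad

m, sigma = 1.0, 0.5                                    # illustrative prescribed moments
lam2 = 1.0 / (2.0 * sigma**2)
lam1 = -m / sigma**2
A = np.exp(-m**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
p = lambda x: A * np.exp(-lam1 * x - lam2 * x**2)      # Eq. (5) with the Gaussian parameters

print(np.isclose(quad(p, -np.inf, np.inf)[0], 1.0))                                  # normalisation
print(np.isclose(quad(lambda x: x * p(x), -np.inf, np.inf)[0], m))                   # mean
print(np.isclose(quad(lambda x: (x - m)**2 * p(x), -np.inf, np.inf)[0], sigma**2))   # variance
```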
Soize Model (2000)

The probability density function of any system matrix (say G) is defined as
\[
p_{[G]}([G]) = I_{\mathbb{M}^+_N(\mathbb{R})}([G])\; c_G\, (\det[G])^{\lambda_G - 1}
\exp\left[ -\frac{N - 1 + 2\lambda_G}{2}\, \mathrm{Trace}(G) \right]
\]
where
\[
c_G = (2\pi)^{-N(N-1)/4} \left( \frac{N - 1 + 2\lambda_G}{2} \right)^{N(N - 1 + 2\lambda_G)/2}
\left\{ \prod_{l=1}^{N} \Gamma\!\left( \frac{N - l + 2\lambda_G}{2} \right) \right\}^{-1}
\]
The parameter \lambda_G is expressed in terms of the 'dispersion' parameter \delta_G as
\[
\lambda_G = \frac{1}{2\delta_G^2} \left[ 1 - \delta_G^2 (N - 1) + \frac{(\mathrm{Trace}[\bar{G}])^2}{\mathrm{Trace}([\bar{G}^2])} \right]
\]
with
\[
\delta_G = \left\{ \frac{E\left[ \| [G] - [\bar{G}] \|^2 \right]}{\| [\bar{G}] \|^2} \right\}^{1/2}
\]
I_{\mathbb{M}^+_N(\mathbb{R})}([G]) = 1 if [G] \in \mathbb{M}^+_N(\mathbb{R}) and 0 otherwise. Here \mathbb{M}^+_N(\mathbb{R}) is the subspace of \mathbb{M}_N(\mathbb{R}) constituted of all N \times N positive definite symmetric real matrices.
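The dispersion parameter \delta_G can be estimated directly from matrix samples using the definition above (interpreting \|\cdot\| as the Frobenius norm). The sketch below does this for an illustrative symmetric-perturbation generator; the generator is only a stand-in and is not the sampling scheme of the Soize model itself:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples, eps = 10, 2000, 0.05
G_bar = np.diag(np.arange(1.0, N + 1.0))          # illustrative nominal (mean) matrix

samples = []
for _ in range(n_samples):
    B = eps * rng.standard_normal((N, N))
    samples.append(G_bar + 0.5 * (B + B.T))       # symmetric random perturbation (stand-in model)
samples = np.array(samples)

num = np.mean(np.sum((samples - G_bar) ** 2, axis=(1, 2)))   # E || G - Gbar ||_F^2
den = np.sum(G_bar ** 2)                                     # || Gbar ||_F^2
delta_G = np.sqrt(num / den)
print(delta_G)
```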
Gaussian Orthogonal Ensembles (GOE)

1. The ensemble (say H) is invariant under every transformation H \to W^T H W, where W is any orthogonal matrix.
2. The various elements H_{jk}, k \le j, are statistically independent.
3. The variance of the diagonal terms is twice that of the off-diagonal terms: \sigma^2_{H_{jj}} = 2\sigma^2_{H_{jk}} = 2\sigma^2 for all j \ne k, where \sigma is some constant.

The probability density function:
\[
p_H(H) = \exp\left[ -a\, \mathrm{Trace}(H^2) + b\, \mathrm{Trace}(H) + c \right]
\]
Probability density function of the eigenvalues of H:
\[
p(x_1, x_2, \cdots, x_N) = C_N \exp\left( -\sum_{j=1}^{N} \frac{x_j^2}{2} \right) \prod_{j < k} | x_j - x_k |
\]
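A standard way to draw GOE samples is to symmetrize a matrix of i.i.d. Gaussians, H = (A + A^T)/2; the sketch below (with illustrative size and sample count) also checks the variance ratio between diagonal and off-diagonal entries stated in item 3:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_samples = 20, 5000
H = np.empty((n_samples, N, N))
for i in range(n_samples):
    A = rng.standard_normal((N, N))
    H[i] = 0.5 * (A + A.T)                       # symmetrised Gaussian matrix: a GOE sample

diag_var = H[:, np.arange(N), np.arange(N)].var()    # variance of the diagonal entries
off_mask = ~np.eye(N, dtype=bool)
off_var = H[:, off_mask].var()                       # variance of the off-diagonal entries
print(diag_var, off_var, diag_var / off_var)         # ratio is close to 2
```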
GOE in structural dynamics

The equations of motion describing free vibration of a linear undamped system in the state space:
\[
A y = 0
\]
where A \in \mathbb{R}^{2N \times 2N} is the system matrix. Transforming into the modal coordinates,
\[
A' u = 0
\]
where A' \in \mathbb{R}^{2N \times 2N} is a diagonal matrix.

Suppose the system is now subjected to n constraints of the form
\[
\begin{pmatrix} C & -I \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = 0
\]
where C \in \mathbb{R}^{n \times (2N - n)} is the constraint matrix, I is the n \times n identity matrix, and u_1 and u_2 are the partitions of u. If the entries of C are independent, then it can be shown (Langley, 2001) that the random part of the system matrix of the constrained system approaches the GOE.
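The sketch below reproduces this construction numerically under assumed illustrative settings (matrix size, number of constraints, scaling of C): it builds the reduced matrix T^T A' T, where u = T u_1 enforces the constraints. The statistical argument that the random part of the reduced matrix approaches the GOE is due to Langley (2001) and is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
two_N, n = 200, 80                                    # 2N coordinates, n constraints (illustrative)
A_modal = np.diag(rng.uniform(1.0, 2.0, two_N))       # diagonal system matrix A' in modal coordinates

C = rng.standard_normal((n, two_N - n)) / np.sqrt(two_N)   # random constraint matrix (assumed scaling)
T = np.vstack([np.eye(two_N - n), C])                 # u = T u1 satisfies (C  -I) u = 0
A_red = T.T @ A_modal @ T                             # system matrix of the constrained system

print(np.allclose(np.hstack([C, -np.eye(n)]) @ T, 0.0))    # the constraints are satisfied
print(np.allclose(A_red, A_red.T))                         # the reduced matrix is symmetric
```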
Random Rod

Equation of motion:
\[
\frac{\partial}{\partial x}\left[ AE(x) \frac{\partial U}{\partial x} \right] = m(x) \frac{\partial^2 U}{\partial t^2} \quad (6)
\]
Boundary condition: fixed-fixed, U(0) = U(L) = 0.
\[
m(x) = m_0 [1 + \epsilon_1 f_1(x)], \qquad AE(x) = AE_0 [1 + \epsilon_2 f_2(x)]
\]
where f_i(x) are zero-mean random fields.

Deterministic mode shapes:
\[
\phi_k(x) = a \sin(k\pi x / L), \quad \text{where } a = \sqrt{2 / (L m_0)}
\]
Consider the mass matrix in the deterministic modal coordinates:
\[
m'_{jk} = \int_0^L \phi_j(x)\, m_0\, \phi_k(x)\, \mathrm{d}x
+ \epsilon_1 \int_0^L \phi_j(x)\, f_1(x)\, \phi_k(x)\, \mathrm{d}x
= m'_{0jk} + \epsilon_1 \Delta m'_{jk}
\]
The random part:
\[
\Delta m'_{jk} = \int_0^L \phi_j(x)\, f_1(x)\, \phi_k(x)\, \mathrm{d}x
\]
and its covariance:
\[
\langle \Delta m'_{jk} \Delta m'_{rs} \rangle
= \int_0^L \!\! \int_0^L \phi_j(x_1)\, \phi_k(x_1)\, \phi_r(x_2)\, \phi_s(x_2)\, R_{f_1}(x_1, x_2)\, \mathrm{d}x_1 \mathrm{d}x_2
\]
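For a concrete picture of \Delta m' (of the kind plotted on the next slide), the sketch below evaluates one realization of \Delta m'_{jk} by midpoint-rule quadrature, approximating f_1(x) as a zero-mean field that is piecewise constant on a fine grid; the grid size, field amplitude and number of retained modes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
L, m0, N, n_grid = 1.0, 1.0, 30, 2000
a = np.sqrt(2.0 / (L * m0))

x = (np.arange(n_grid) + 0.5) * L / n_grid           # cell midpoints
dx = L / n_grid
phi = a * np.sin(np.outer(np.arange(1, N + 1), np.pi * x / L))   # mode shapes, shape (N, n_grid)

f1 = 0.1 * rng.standard_normal(n_grid)               # one realisation of the random field (illustrative)
dm = (phi * f1 * dx) @ phi.T                         # Delta m'_jk by midpoint rule
print(dm.shape, np.allclose(dm, dm.T))               # (30, 30), symmetric
```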
[Figure: Mass Matrix — plot of a realization in modal coordinates, indices 0 to 30]
Random Rod

Case 1: f_1(x) is \delta-correlated (white noise):
\[
R_{f_1}(x_1, x_2) = Q_1 \delta(x_1 - x_2)
\]
Results (see the check after this list):
• $\langle \Delta m'_{jj} \Delta m'_{rr} \rangle = \frac{1}{4} a^4 Q_1 L$, $j \ne r$
• $\langle \Delta m'_{jj} \Delta m'_{jj} \rangle = \frac{3}{8} a^4 Q_1 L$
• $\langle \Delta m'_{kj} \Delta m'_{kj} \rangle = \frac{1}{4} a^4 Q_1 L$, $k \ne j$
• $\langle \Delta m'_{kj} \Delta m'_{rs} \rangle = 0$
• $\langle \Delta m'_{kk} \Delta m'_{kr} \rangle = 0$, $k \ne r$
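Since the \delta-correlated case gives \langle \Delta m'_{jk} \Delta m'_{rs} \rangle = Q_1 \int_0^L \phi_j \phi_k \phi_r \phi_s\, \mathrm{d}x, the Case 1 results can be checked by direct quadrature. The index choices below are representative; the zero results hold for generic index combinations:

```python
import numpy as np
from scipy.integrate import quad

L, m0, Q1 = 1.0, 1.0, 1.0                 # illustrative values
a = np.sqrt(2.0 / (L * m0))
phi = lambda k, x: a * np.sin(k * np.pi * x / L)

def cov(j, k, r, s):
    """Covariance <dm'_jk dm'_rs> for the delta-correlated field, by quadrature."""
    return Q1 * quad(lambda x: phi(j, x) * phi(k, x) * phi(r, x) * phi(s, x), 0, L)[0]

print(np.isclose(cov(2, 2, 5, 5), 0.25 * a**4 * Q1 * L))    # <dm'_jj dm'_rr>, j != r
print(np.isclose(cov(3, 3, 3, 3), 0.375 * a**4 * Q1 * L))   # <dm'_jj dm'_jj>
print(np.isclose(cov(2, 5, 2, 5), 0.25 * a**4 * Q1 * L))    # <dm'_kj dm'_kj>, k != j
print(np.isclose(cov(1, 2, 4, 8), 0.0, atol=1e-10))         # <dm'_kj dm'_rs>, generic indices
print(np.isclose(cov(2, 2, 2, 5), 0.0, atol=1e-10))         # <dm'_kk dm'_kr>, k != r
```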
Random Rod

Case 2: f_1(x) is fully correlated:
\[
R_{f_1}(x_1, x_2) = Q_2 \quad \text{for } x_1, x_2 \in [0, L]
\]
Results:
• $\langle \Delta m'_{jj} \Delta m'_{rr} \rangle = \frac{1}{4} a^4 Q_2 L^2$, $j \ne r$
• $\langle \Delta m'_{jj} \Delta m'_{jj} \rangle = \frac{3}{8} a^4 Q_2 L^2$
• $\langle \Delta m'_{kj} \Delta m'_{kj} \rangle = 0$, $k \ne j$
• $\langle \Delta m'_{kj} \Delta m'_{rs} \rangle = 0$
• $\langle \Delta m'_{kk} \Delta m'_{kr} \rangle = 0$, $k \ne r$
Conclusions and Future Research

• Although mathematically optimal given knowledge of only the mean values of the matrices, it is not entirely clear how well the results obtained from the Soize model will match the statistical properties of a physical system.
• Analytical work shows that the GOE may be a possible model for the random system matrices in the modal coordinates for very large and complex systems.
• The random rod analysis has shown that the system matrices in the modal coordinates are close to the GOE (but not exactly GOE), rather than to the Soize model.
• Future research will address more complicated systems and explore the possibility of using the GOE (or something close to it, owing to the non-negative definiteness requirement) as a model of the random system matrices. Such a model would enable us to develop a general Monte Carlo simulation technique to be used in conjunction with FE methods.