

  1. Properties of estimators of doubly exchangeable covariance matrix and structured mean vector for three-level multivariate observations. Arkadiusz Kozioł. XLII Konferencja "Statystyka Matematyczna", 27.11.2016 - 02.12.2016, Będlewo.

  2. 1. Three-level multivariate data - Introduction. Multi-level multivariate observations are becoming increasingly common in biomedical, medical and engineering fields, among many others. Example of three-level multivariate observations: an investigator measured intraocular pressure (IOP) and central corneal thickness (CCT) on both eyes (sites) of 30 patients, each at three time points at intervals of three months. For this data set m = 2 (variables), u = 2 (sites) and v = 3 (time points).

  3. 2. Doubly exchangeable covariance structure in three-level multivariate data. The assumption of double exchangeability reduces the number of unknown parameters considerably, and thus allows more reliable parameter estimates. The unstructured variance-covariance matrix has vum(vum + 1)/2 unknown parameters, which can be large for arbitrary values of m, v or u. To reduce the number of unknown parameters, it is then essential to assume some appropriate structure on the variance-covariance matrix. When the data are multivariate at three levels, one may assume a DE covariance structure, which has only 3m(m + 1)/2 unknown parameters. For the IOP/CCT example (m = 2, u = 2, v = 3), the unstructured matrix has 12 · 13/2 = 78 parameters, while the DE structure has only 9.

  4. 2.1. Doubly exchangeable covariance structure in three-level multivariate data - form. The (vum × vum)-dimensional DE covariance structure is defined as

  $$\Gamma = \begin{pmatrix} \Sigma_0 & \Sigma_1 & \cdots & \Sigma_1 \\ \Sigma_1 & \Sigma_0 & \cdots & \Sigma_1 \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma_1 & \Sigma_1 & \cdots & \Sigma_0 \end{pmatrix} = I_v \otimes (\Sigma_0 - \Sigma_1) + J_v \otimes \Sigma_1, \qquad (1)$$

  where I_v is the v × v identity matrix, 1_v is a v × 1 vector of ones, and J_v = 1_v 1_v'.

  5. 2.1. Doubly exchangeable covariance structure in three-level multivariate data - form. In (1) the matrices Σ_0 and Σ_1 have the following form:

  $$\Sigma_0 = I_u \otimes (\Gamma_0 - \Gamma_1) + J_u \otimes \Gamma_1, \qquad \Sigma_1 = J_u \otimes \Gamma_2.$$

  This covariance structure can equivalently be written in terms of Γ_0, Γ_1 and Γ_2 as

  $$\Gamma = I_v \otimes I_u \otimes \Gamma_0 + I_v \otimes (J_u - I_u) \otimes \Gamma_1 + (J_v - I_v) \otimes J_u \otimes \Gamma_2.$$
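  As an illustration, a minimal NumPy sketch can assemble Γ from Γ_0, Γ_1 and Γ_2 via the Kronecker form above. The function name de_cov and the example parameter matrices are illustrative choices, not taken from the presentation.

```python
import numpy as np

def de_cov(Gamma0, Gamma1, Gamma2, v, u):
    """Build the (vum x vum) DE covariance matrix
    Gamma = I_v (x) I_u (x) G0 + I_v (x) (J_u - I_u) (x) G1 + (J_v - I_v) (x) J_u (x) G2."""
    Iv, Iu = np.eye(v), np.eye(u)
    Jv, Ju = np.ones((v, v)), np.ones((u, u))
    return (np.kron(Iv, np.kron(Iu, Gamma0))
            + np.kron(Iv, np.kron(Ju - Iu, Gamma1))
            + np.kron(Jv - Iv, np.kron(Ju, Gamma2)))

# Hypothetical parameter matrices for the IOP/CCT dimensions m = 2, u = 2, v = 3
Gamma0 = np.array([[4.0, 1.0], [1.0, 3.0]])
Gamma1 = np.array([[1.0, 0.5], [0.5, 1.0]])
Gamma2 = np.array([[0.5, 0.2], [0.2, 0.5]])
Gamma = de_cov(Gamma0, Gamma1, Gamma2, v=3, u=2)
print(Gamma.shape)  # (12, 12): vum = 3 * 2 * 2
```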

  6. 2.2. Doubly exchangeable covariance structure in three-level multivariate data - assumptions.
  1. Γ_0 is a positive definite symmetric m × m matrix,
  2. Γ_1 and Γ_2 are symmetric m × m matrices,
  3. Γ_0 − Γ_1 is a positive definite matrix,
  4. Γ_0 + (u − 1)Γ_1 − uΓ_2 is a positive definite matrix,
  5. Γ_0 + (u − 1)Γ_1 + (v − 1)uΓ_2 is a positive definite matrix,
  so that the vum × vum matrix Γ is positive definite (for a proof, see Lemma 2.1 in Roy and Leiva (2011)).
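  Conditions 3-5 are easy to verify numerically. The helper below, continuing the sketch above, is an illustrative check via eigenvalues (de_conditions_hold is not a name from the presentation).

```python
def de_conditions_hold(Gamma0, Gamma1, Gamma2, v, u):
    """Check conditions 3-5 above; symmetry (conditions 1-2) is assumed."""
    def is_pd(A):
        return np.all(np.linalg.eigvalsh(A) > 0)  # eigvalsh expects a symmetric matrix
    return (is_pd(Gamma0 - Gamma1)
            and is_pd(Gamma0 + (u - 1) * Gamma1 - u * Gamma2)
            and is_pd(Gamma0 + (u - 1) * Gamma1 + (v - 1) * u * Gamma2))

print(de_conditions_hold(Gamma0, Gamma1, Gamma2, v=3, u=2))  # True for the example values
```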

  7. 3. Model with structured mean vector - notation. Let y_{r,ts} be an m-variate vector of measurements on the r-th individual at the t-th time point and at the s-th site; r = 1, ..., n, t = 1, ..., v, s = 1, ..., u. The n individuals are all independent. Let y_r = (y'_{r,11}, ..., y'_{r,vu})' be the vum-variate vector of all measurements corresponding to the r-th individual. Finally, let y_1, y_2, ..., y_n be a random sample of size n drawn from the population N_{vum}(1_{vu} ⊗ μ, Γ), where μ ∈ R^m and Γ is assumed to be a vum × vum positive definite matrix.

  8. 3.1. Model with structured mean vector - assumption. In this model we assume that the covariance structure is DE and the mean vector has the following structure: 1_{nvu} ⊗ μ, where μ has m components. Thus

  $$y_{nvum \times 1} = \mathrm{vec}(Y'_{vum \times n}) \sim N\big((\mathbf{1}_{nvu} \otimes I_m)\,\boldsymbol{\mu},\; I_n \otimes \Gamma_{vum}\big).$$

  This means that the n independent random column vectors are identically distributed, with common (vum × vum)-dimensional variance-covariance matrix Γ.
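  To make the model concrete, the following continuation of the sketch simulates such a sample; n = 30 matches the IOP/CCT example, while the value of μ is purely hypothetical.

```python
rng = np.random.default_rng(0)
n, v, u, m = 30, 3, 2, 2
mu = np.array([15.0, 0.55])                       # hypothetical mean for (IOP, CCT)
mean = np.kron(np.ones(v * u), mu)                # 1_{vu} (x) mu
Y = rng.multivariate_normal(mean, Gamma, size=n)  # rows are y_1', ..., y_n'
```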

  9. 3.2. Orthogonal projector on the subspace of the mean vector. It will be denoted by P and used to show that, if I_n ⊗ I_{vum} ∈ ϑ = sp{V}, where V = I_n ⊗ Γ, then Py is the best linear unbiased estimator (BLUE) if and only if P commutes with every covariance matrix V. The orthogonal projector on the subspace of the mean vector for the model with structured mean vector is

  $$P = \frac{1}{n} J_n \otimes \frac{1}{v} J_v \otimes \frac{1}{u} J_u \otimes I_m,$$

  where J_n = 1_n 1_n', J_v = 1_v 1_v' and J_u = 1_u 1_u' are matrices of ones.

  10. 3.3. Orthogonal projector on the subspace of the mean vector. Result 1. The projection matrix P commutes with the covariance matrix V, i.e., PV = VP, where V = I_n ⊗ Γ is the covariance matrix of y. Lemma. Let ϑ denote the subspace spanned by V, i.e., ϑ = sp{V}. Then ϑ is a quadratic subspace, meaning that ϑ is a linear space and if V ∈ ϑ then V² ∈ ϑ (see Seely (1971) for the definition).
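  Result 1 can be checked numerically. Continuing the sketch, the code below builds P and confirms that it is an orthogonal projector commuting with V = I_n ⊗ Γ.

```python
Jn, Jv, Ju = np.ones((n, n)), np.ones((v, v)), np.ones((u, u))
P = np.kron(Jn / n, np.kron(Jv / v, np.kron(Ju / u, np.eye(m))))
V = np.kron(np.eye(n), Gamma)     # covariance matrix of the stacked vector y

assert np.allclose(P, P.T)        # symmetric ...
assert np.allclose(P @ P, P)      # ... and idempotent: an orthogonal projector
assert np.allclose(P @ V, V @ P)  # Result 1: P V = V P
```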

  11. 3.4. BLUE for μ. Because for the considered model the orthogonal projector on the space generated by the mean vector commutes with all covariance matrices, there exists a BLUE for each estimable function of the mean. Moreover, in view of Result 1, the BLUEs are least squares estimators (LSE). Thus, μ̂ is the unique solution of the following normal equation:

  $$(\mathbf{1}_{nvu} \otimes I_m)'(\mathbf{1}_{nvu} \otimes I_m)\,\boldsymbol{\mu} = (\mathbf{1}_{nvu} \otimes I_m)'\,y \quad \text{or} \quad nvu\, I_m\, \boldsymbol{\mu} = [I_m, I_m, \ldots, I_m]\, y,$$

  which means that

  $$\hat{\boldsymbol{\mu}} = \frac{1}{nvu} \sum_{r=1}^{n} \sum_{t=1}^{v} \sum_{s=1}^{u} y_{r,ts}.$$
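  In code (continuing the sketch), the BLUE is simply the grand mean of all nvu observation vectors:

```python
Y3 = Y.reshape(n, v, u, m)        # Y3[r, t, s] is the m-vector y_{r,ts}
mu_hat = Y3.mean(axis=(0, 1, 2))  # (1/nvu) * sum over r, t, s of y_{r,ts}
print(mu_hat)                     # close to the hypothetical mu above
```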

  12. 3.5. Base for the quadratic subspace ϑ. We define A_ii = E_ii and A_ij = E_ij + E_ji for i < j; i, j = 1, ..., m, as a base for the symmetric matrices Γ. The (m × m)-dimensional matrix E_ij has 1 only at the ij-th element, and 0 at all other elements. Then it is clear that the base for block diagonal matrices of the form I_n ⊗ I_v ⊗ I_u ⊗ Γ_0 is constituted by the matrices K^(0)_ij = I_n ⊗ I_v ⊗ I_u ⊗ A_ij, for i ≤ j, j = 1, ..., m; the base for matrices of the form I_n ⊗ I_v ⊗ (J_u − I_u) ⊗ Γ_1 is constituted by the matrices K^(1)_ij = I_n ⊗ I_v ⊗ (J_u − I_u) ⊗ A_ij, for i ≤ j, j = 1, ..., m,

  13. 3.5. Base for the quadratic subspace ϑ. and the base for matrices of the form I_n ⊗ (J_v − I_v) ⊗ J_u ⊗ Γ_2 is constituted by the matrices K^(2)_ij = I_n ⊗ (J_v − I_v) ⊗ J_u ⊗ A_ij, for i ≤ j, j = 1, ..., m.
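  The basis matrices are straightforward to construct. In this continuation of the sketch, A and K are illustrative helper names (and indices are 0-based, as is usual in Python).

```python
def A(i, j, m):
    """A_ii = E_ii; A_ij = E_ij + E_ji for i < j."""
    E = np.zeros((m, m))
    E[i, j] = 1.0
    return E if i == j else E + E.T

def K(l, i, j, n, v, u, m):
    """Basis matrix K^(l)_ij of the quadratic subspace, l = 0, 1, 2."""
    Iv, Iu = np.eye(v), np.eye(u)
    Jv, Ju = np.ones((v, v)), np.ones((u, u))
    middle = (np.kron(Iv, Iu),          # l = 0: I_v (x) I_u
              np.kron(Iv, Ju - Iu),     # l = 1: I_v (x) (J_u - I_u)
              np.kron(Jv - Iv, Ju))[l]  # l = 2: (J_v - I_v) (x) J_u
    return np.kron(np.eye(n), np.kron(middle, A(i, j, m)))
```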

  14. 3.6. The complete and minimal sufficient statistics for the family of normal distributions. Let M = I_n ⊗ I_v ⊗ I_u ⊗ I_m − P. So M is idempotent. Now, since PV = VP and ϑ is a quadratic subspace, MϑM = Mϑ is also a quadratic subspace. Result 2. The complete and minimal sufficient statistics for the mean vector and the variance-covariance matrix are

  $$(\mathbf{1}'_{nvu} \otimes I_m)\, y \quad \text{and} \quad y' M K^{(l)}_{ij} M y, \quad l = 0, 1, 2, \; i \leq j, \; i, j = 1, \ldots, m.$$
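  Continuing the sketch, the statistics of Result 2 can be evaluated directly (quad_stats is an illustrative container, not a name from the presentation):

```python
M = np.eye(n * v * u * m) - P
y_vec = Y.reshape(-1)                                      # the stacked nvum-vector y
lin_stat = np.kron(np.ones(n * v * u), np.eye(m)) @ y_vec  # (1'_{nvu} (x) I_m) y
quad_stats = {(l, i, j): y_vec @ M @ K(l, i, j, n, v, u, m) @ M @ y_vec
              for l in (0, 1, 2) for i in range(m) for j in range(i, m)}
```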

  15. 3.7. BQUE for parameters of the covariance matrix. Since P commutes with the covariance matrix of y, for each parameter of the covariance there exists a BQUE if and only if sp{MVM} is a quadratic subspace (see Zmyślony (1976, 1980) and Gnot et al. (1976, 1977a,c)) or a Jordan algebra (see Jordan et al. (1934)), where V stands for the covariance matrix of y. It is clear that if sp{V} is a quadratic subspace and if for each Σ ∈ sp{V} the commutativity PΣ = ΣP holds, then sp{MVM} = sp{MV} is also a quadratic subspace. According to the coordinate-free approach, the expectation of Myy'M can be written as a linear combination of the matrices MK^(0)_ij, MK^(1)_ij and MK^(2)_ij with unknown coefficients σ^(0)_ij, σ^(1)_ij and σ^(2)_ij, respectively.

  16. 3.7. BQUE for parameters of the covariance matrix. Defining the m(m+1)/2-dimensional column vectors σ^(l) = [σ^(l)_ij] for i ≤ j = 1, ..., m; l = 0, 1, 2, we see that the normal equations have the following block diagonal structure:

  $$\left( \begin{pmatrix} a & b & c \\ b & d & e \\ c & e & f \end{pmatrix} \otimes I_{\frac{m(m+1)}{2}} \right) \begin{pmatrix} \sigma^{(0)} \\ \sigma^{(1)} \\ \sigma^{(2)} \end{pmatrix} = \begin{pmatrix} r^{(0)} \\ r^{(1)} \\ r^{(2)} \end{pmatrix}, \qquad (2)$$

  where for i ≤ j = 1, ..., m: a = tr[M(K^(0)_ij)²], b = tr[MK^(0)_ij K^(1)_ij], c = tr[MK^(0)_ij K^(2)_ij], d = tr[M(K^(1)_ij)²], e = tr[MK^(1)_ij K^(2)_ij] and f = tr[M(K^(2)_ij)²], while the m(m+1)/2 × 1 vector r^(l) has elements (1/(2 − δ_ij)) r' K^(l)_ij r for l = 0, 1, 2; δ_ij is the Kronecker delta and r stands for the residual vector, i.e., r = My = (I_{nvum} − P)y.
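  The coefficients a through f can be computed numerically by continuing the sketch; one representative pair (i, j) suffices here, since the (i, j)-dependence enters only through the common factor tr(A_ij²), which cancels against the 1/(2 − δ_ij) weighting of r^(l).

```python
i, j = 0, 1                        # one representative pair i <= j
K0, K1, K2 = (K(l, i, j, n, v, u, m) for l in (0, 1, 2))
a = np.trace(M @ K0 @ K0); b = np.trace(M @ K0 @ K1); c = np.trace(M @ K0 @ K2)
d = np.trace(M @ K1 @ K1); e = np.trace(M @ K1 @ K2); f = np.trace(M @ K2 @ K2)
coeff = np.array([[a, b, c], [b, d, e], [c, e, f]])  # the 3 x 3 block of (2)
```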

  17. 3.7. BQUE for parameters of the covariance matrix. Let C_0, C_1 and C_2 be defined as follows:

  $$C_0 = \sum_{r=1}^{n} \sum_{t=1}^{v} \sum_{s=1}^{u} \left( y_{r,ts} - \hat{\boldsymbol{\mu}} \right) \left( y_{r,ts} - \hat{\boldsymbol{\mu}} \right)',$$

  $$C_1 = \sum_{r=1}^{n} \sum_{t=1}^{v} \sum_{\substack{s, s^* = 1 \\ s \neq s^*}}^{u} \left( y_{r,ts} - \hat{\boldsymbol{\mu}} \right) \left( y_{r,ts^*} - \hat{\boldsymbol{\mu}} \right)',$$

  $$C_2 = \sum_{r=1}^{n} \sum_{\substack{t, t^* = 1 \\ t \neq t^*}}^{v} \sum_{s=1}^{u} \sum_{s^*=1}^{u} \left( y_{r,ts} - \hat{\boldsymbol{\mu}} \right) \left( y_{r,t^*s^*} - \hat{\boldsymbol{\mu}} \right)',$$

  where $\hat{\boldsymbol{\mu}} = \frac{1}{nvu} \sum_{r=1}^{n} \sum_{t=1}^{v} \sum_{s=1}^{u} y_{r,ts}$.
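  These sums of outer products of residuals have a compact einsum formulation. Continuing the sketch (E_res is an illustrative name):

```python
E_res = Y3 - mu_hat                                 # E_res[r, t, s] = y_{r,ts} - mu_hat
C0 = np.einsum('rtsa,rtsb->ab', E_res, E_res)       # same time point, same site
C1 = np.einsum('rtsa,rtqb->ab', E_res, E_res) - C0  # same time point, sites s != s*
C2 = (np.einsum('rtsa,rqpb->ab', E_res, E_res)      # all (t, s) x (t*, s*) pairs ...
      - np.einsum('rtsa,rtpb->ab', E_res, E_res))   # ... minus those with t = t*
```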

  18. 3.7. BQUE for parameters of the covariance matrix. Now the right hand side of equation (2) can be expressed by C_0, C_1 and C_2 respectively, and then we have

  $$\left( \begin{pmatrix} a & b & c \\ b & d & e \\ c & e & f \end{pmatrix} \otimes I_m \right) \begin{pmatrix} \Gamma_0 \\ \Gamma_1 \\ \Gamma_2 \end{pmatrix} = \begin{pmatrix} C_0 \\ C_1 \\ C_2 \end{pmatrix}.$$

  Solving this equation we get

  $$\begin{pmatrix} \hat{\Gamma}_0 \\ \hat{\Gamma}_1 \\ \hat{\Gamma}_2 \end{pmatrix} = \left( \begin{pmatrix} \frac{(n-1)vu+1}{n(n-1)v^2u^2} & \frac{1}{n(n-1)v^2u^2} & \frac{1}{n(n-1)v^2u^2} \\ \frac{1}{n(n-1)v^2u^2} & \frac{(n-1)vu+u-1}{(u-1)\,n(n-1)v^2u^2} & \frac{1}{n(n-1)v^2u^2} \\ \frac{1}{n(n-1)v^2u^2} & \frac{1}{n(n-1)v^2u^2} & \frac{nv-1}{n(n-1)v^2(v-1)u^2} \end{pmatrix} \otimes I_m \right) \begin{pmatrix} C_0 \\ C_1 \\ C_2 \end{pmatrix}.$$
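  Reading the solution off row by row gives the estimators in closed form. The following continuation of the sketch applies it to the simulated data (variable names illustrative):

```python
den = n * (n - 1) * v**2 * u**2
Gamma0_hat = (((n - 1) * v * u + 1) * C0 + C1 + C2) / den
Gamma1_hat = (C0 + ((n - 1) * v * u + u - 1) / (u - 1) * C1 + C2) / den
Gamma2_hat = (C0 + C1 + (n * v - 1) / (v - 1) * C2) / den
print(Gamma0_hat)  # should be close to the Gamma0 used in the simulation
```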
