Meta-Analysis of Paired-Comparison Studies of Diagnostic Test Data: A Bayesian Modeling Approach

Pablo E. Verde (pabloemilio.verde@uni-duesseldorf.de)
Coordination Center for Clinical Trials, University of Duesseldorf, Germany
BAYES 2013, Rotterdam, Thursday 23 May 2013


  1. Comparison of Medical Diagnostic Technologies

     Paired-comparison diagnostic studies
     • Two or more diagnostic tests are applied to the same group of patients
     • The aim is to compare the diagnostic performance of the tests
     • Pros and cons are weighed (e.g. invasive versus noninvasive procedures)

     Issues in meta-analysis
     • Correlated outcomes within and across studies
     • Imperfect evidence, e.g. relevant data are not reported
     • Common practice: use simple techniques and ignore these problems

  2. Running example: RAPT (Review of Abdominal Pain Tools, Liu et al. 2006)

     • Diagnosis of acute abdominal pain
     • Test 1: doctors using common medical practice, i.e. unaided doctors (UD)
     • Test 2: doctors aided by decision tools (DT)
     • Decision tools are statistical classification models (logistic regression, neural networks, naive Bayes, etc.)
     • N = 9 studies reported a paired comparison between DT and UD

     Results of Liu et al. (2006)
     • No difference in sensitivity between DT and UD
     • The specificity of DT is better than the specificity of UD

     Pieces of evidence of diagnostic test accuracy

     The test results of study i (i = 1, ..., N) are summarized in two 2 x 2 tables
     (a sketch of how these per-study counts could be stored in R is given after this item):

     Results for Test 1
                          Patient status
     Test 1 outcome       With disease     Without disease
       +                  tp_{i,1}         fp_{i,1}
       -                  fn_{i,1}         tn_{i,1}
     Sum:                 n_{i,1}          n_{i,2}

     Results for Test 2
                          Patient status
     Test 2 outcome       With disease     Without disease
       +                  tp_{i,2}         fp_{i,2}
       -                  fn_{i,2}         tn_{i,2}
     Sum:                 n_{i,1}          n_{i,2}
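
A minimal sketch of how the per-study summary counts above could be represented in R. This is not code from the talk; the object name `rapt` and all counts are invented for illustration.

```r
## Hypothetical layout for the per-study summary counts (two studies shown):
## tp*, fn* refer to diseased patients; fp*, tn* to non-diseased patients.
rapt <- data.frame(
  study = 1:2,
  tp1 = c(30, 45), fn1 = c(10,  5),   # Test 1 (unaided doctors)
  fp1 = c(12, 20), tn1 = c(48, 80),
  tp2 = c(32, 44), fn2 = c( 8,  6),   # Test 2 (doctors aided by decision tools)
  fp2 = c( 7, 15), tn2 = c(53, 85)
)

## Both tests share the same column totals within a study:
rapt$n1 <- rapt$tp1 + rapt$fn1   # patients with the disease
rapt$n2 <- rapt$fp1 + rapt$tn1   # patients without the disease

## Observed diagnostic rates per study and test:
rapt$TPR1 <- rapt$tp1 / rapt$n1;  rapt$TPR2 <- rapt$tp2 / rapt$n1
rapt$FPR1 <- rapt$fp1 / rapt$n2;  rapt$FPR2 <- rapt$fp2 / rapt$n2
```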

  3. Review of abdominal pain tools (Liu et al. 2006)

     [Figure: RAPT, diagnosis of acute abdominal pain. Doctors aided by decision tools (DT) vs. unaided doctors (UD). Left panel: TPR (sensitivity), DT vs. UD; right panel: FPR (1 - specificity), DT vs. UD. Each panel shows the 45-degree line and a regression line. N = 9 studies.]

     Partially observed tables: indirect pieces of evidence

     Patient status: with disease
                          Test 2 outcome
     Test 1 outcome       +                      -                       Sum
       +                  y_{i,1}                tp_{i,1} - y_{i,1}      tp_{i,1}
       -                  y_{i,2}                fn_{i,1} - y_{i,2}      fn_{i,1}
     Sum:                 tp_{i,2}               fn_{i,2}                n_{i,1}

     Patient status: without disease
                          Test 2 outcome
     Test 1 outcome       +                      -                       Sum
       +                  y_{i,3}                fp_{i,1} - y_{i,3}      fp_{i,1}
       -                  y_{i,4}                tn_{i,1} - y_{i,4}      tn_{i,1}
     Sum:                 fp_{i,2}               tn_{i,2}                n_{i,2}

     The marginals are fixed; y_{i,1}, y_{i,2}, y_{i,3} and y_{i,4} are unobserved. (A tiny numerical illustration of these accounting identities follows this item.)
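
A tiny numerical illustration, not from the slides, of how a complete (but unobserved) cross-classification among the diseased patients of one study reproduces the observed margins; all counts are invented.

```r
## Hypothetical complete Test 1 x Test 2 table among diseased patients:
##            Test2 +   Test2 -
## Test1 +        28         2     -> row margin tp1 = 30
## Test1 -         4         6     -> row margin fn1 = 10
y1 <- 28; y2 <- 4                    # the unobserved cells y_{i,1} and y_{i,2}
tp1 <- 30; fn1 <- 10                 # observed Test 1 margins

tp2 <- y1 + y2                       # observed Test 2 margin: 32
fn2 <- (tp1 - y1) + (fn1 - y2)       # observed Test 2 margin:  8
stopifnot(tp2 + fn2 == tp1 + fn1)    # both tests count the same n1 = 40 patients
```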

  4. Accounting lemma for partially observed tables

     Unobserved rates:
     • p_{i,1} = Pr(y_{i,1} = 1 | Test 1 tp) and p_{i,2} = Pr(y_{i,2} = 1 | Test 1 fn)
     • p_{i,3} = Pr(y_{i,3} = 1 | Test 1 fp) and p_{i,4} = Pr(y_{i,4} = 1 | Test 1 tn)
     That is, p_{i,j} is the probability that a patient in the corresponding Test 1 cell has a positive Test 2 result.

     Lemma. The accounting relationships between the observed and unobserved diagnostic rates are

         \widehat{TPR}_{i,2} = p_{i,1} \widehat{TPR}_{i,1} + p_{i,2} (1 - \widehat{TPR}_{i,1})    (1)

     and

         \widehat{FPR}_{i,2} = p_{i,3} \widehat{FPR}_{i,1} + p_{i,4} (1 - \widehat{FPR}_{i,1}).   (2)

     Some remarks
     • Equations (1) and (2) are underdetermined: two equations with four unknowns
     • Unexpected solutions are possible (e.g. p_{i,1} = p_{i,2})
     • They impose deterministic data-truncation constraints
     • To display the indirect evidence about the p's we can plot the lines

         p_{i,2} = \frac{\widehat{TPR}_{i,2}}{1 - \widehat{TPR}_{i,1}} - \frac{\widehat{TPR}_{i,1}}{1 - \widehat{TPR}_{i,1}} \, p_{i,1}    (3)

       and

         p_{i,4} = \frac{\widehat{FPR}_{i,2}}{1 - \widehat{FPR}_{i,1}} - \frac{\widehat{FPR}_{i,1}}{1 - \widehat{FPR}_{i,1}} \, p_{i,3}.   (4)

       (A small R sketch of these lines is given after this item.)
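
A minimal R sketch, not from the talk, of how the indirect-evidence lines (3) and (4) could be drawn for a single study. The observed rates below are invented for illustration.

```r
## Invented observed rates for one study:
TPR1 <- 0.75; TPR2 <- 0.80   # sensitivities of Test 1 and Test 2
FPR1 <- 0.20; FPR2 <- 0.12   # false positive rates of Test 1 and Test 2

## Equation (3): every (p1, p2) on this line reproduces the observed TPR2.
p1 <- seq(0, 1, by = 0.01)
p2 <- TPR2 / (1 - TPR1) - TPR1 / (1 - TPR1) * p1

## Equation (4): the same idea for the non-diseased population.
p3 <- seq(0, 1, by = 0.01)
p4 <- FPR2 / (1 - FPR1) - FPR1 / (1 - FPR1) * p3

## Keep only the admissible segment of each line (probabilities in [0, 1]).
ok12 <- p2 >= 0 & p2 <= 1
ok34 <- p4 >= 0 & p4 <= 1

op <- par(mfrow = c(1, 2))
plot(p1[ok12], p2[ok12], type = "l", xlim = c(0, 1), ylim = c(0, 1),
     xlab = "p1", ylab = "p2", main = "Indirect evidence: TPR")
plot(p3[ok34], p4[ok34], type = "l", xlim = c(0, 1), ylim = c(0, 1),
     xlab = "p3", ylab = "p4", main = "Indirect evidence: FPR")
par(op)
```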

  5. Displaying indirect evidence: RAPT

     [Figure: indirect-evidence lines for the RAPT studies. Left panel "RAPT: TPR": p_{i,2} plotted against p_{i,1}; right panel "RAPT: FPR": p_{i,4} plotted against p_{i,3}.]

     The ecological fallacy of two diagnostic tests

     Ignoring these data structures may result in an ecological fallacy.

     [Figure: four panels plotting TPR (sensitivity) of Test 2 against TPR (sensitivity) of Test 1, labeled "Ecological Fallacy" (top row), "Correct Answer" and "True Model" (bottom row).]

  6. Learning from evidence at face value

     • Data on true positive results: (tp_{i,1}, tp_{i,2}, n_{i,1})
     • The unobserved data are modeled as

         y_{i,1} | tp_{i,1} ~ Binomial(p_{i,1}, tp_{i,1})    (5)
         y_{i,2} | fn_{i,1} ~ Binomial(p_{i,2}, fn_{i,1})    (6)

     • Then tp_{i,2} = y_{i,1} + y_{i,2} follows the convolution of these two binomial distributions, with likelihood contribution

         L_{i,tp} = \sum_{k=\max(0,\, tp_{i,2} - fn_{i,1})}^{\min(tp_{i,1},\, tp_{i,2})}
                    \binom{tp_{i,1}}{k} \binom{fn_{i,1}}{tp_{i,2} - k}
                    \, p_{i,1}^{k} (1 - p_{i,1})^{tp_{i,1} - k}
                    \, p_{i,2}^{tp_{i,2} - k} (1 - p_{i,2})^{fn_{i,1} - tp_{i,2} + k}

     • The false positive tables (fp_{i,1}, fp_{i,2}, n_{i,2}) are modeled in a similar way, with likelihood contributions L_{i,fp}
       (a small R sketch of the convolution likelihood is given after this item)

     Combining multiple sources of evidence

     Study effects: the variability between studies is modeled with a scale mixture of normal distributions (Verde, 2010):

         g(p_{i,j}) = \theta_{i,j} \sim N(\mu_j, w_i \lambda_j)    (7)
         w_i \sim \Gamma(\nu/2, \nu/2),    (8)

     for i = 1, ..., N and j = 1, ..., 4, where g(·) is a link function, the λ_j are precision parameters and the w_i are mixture weights.

     Between-population correlation: the correlation between the diseased and non-diseased populations is modeled by

         cor(\theta_{i,1}, \theta_{i,3}) = \rho.    (9)
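
A small R sketch, not from the talk, of the convolution likelihood L_{i,tp} for a single study. The function name lik_tp and the counts in the example call are invented.

```r
## Likelihood contribution of (tp1, tp2) for one study, given (p1, p2):
## tp2 = y1 + y2 with y1 ~ Binomial(tp1, p1) and y2 ~ Binomial(fn1, p2).
lik_tp <- function(p1, p2, tp1, fn1, tp2) {
  k <- max(0, tp2 - fn1):min(tp1, tp2)        # admissible values of y1
  sum(dbinom(k, size = tp1, prob = p1) *
      dbinom(tp2 - k, size = fn1, prob = p2))
}

## Example with invented counts: tp1 = 30, fn1 = 10, tp2 = 32.
lik_tp(p1 = 0.9, p2 = 0.5, tp1 = 30, fn1 = 10, tp2 = 32)
```

The same function, applied to (fp1, tn1, fp2), gives the corresponding contribution L_{i,fp} for the non-diseased population.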

  7. Interpretation of the mixture weights

     We use the posterior distribution of the w_i to identify studies with unusual heterogeneity:
     • A priori, every included study has mean E(w_i) = 1
     • Studies that are unusually heterogeneous will have posteriors with values substantially less than 1, say w_i < 0.7
     • Clearly, if all w_i ≈ 1, a multivariate normal model is appropriate
     • If some w_i are lower than 1, the effect of those studies is down-weighted, resulting in a robust inferential method

     Further modeling details

     Hyper-parameter priors: we use independent, weakly informative priors for the hyper-parameters:

         \mu_j \sim N(0, 0.1),   \lambda_j \sim \Gamma(1, 0.1),    (10)
         \nu \sim \mathrm{Exp}(1),   \mathrm{logit}((\rho + 1)/2) \sim N(0, 1).    (11)

     Remarks on computation
     • L_{i,tp} and L_{i,fp} are approximated by normal likelihoods (Wakefield 2004)
     • Statistical computations are implemented in BUGS and R
     • Most of the stochastic nodes in the model have conditionally conjugate distributions, so Gibbs sampling is straightforward
     (A simplified BUGS-style sketch of the between-study model is given after this item.)
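
A simplified BUGS/JAGS-style sketch of the between-study model (7)-(8) with the priors (10)-(11). This is not the full model of the talk: the convolution likelihoods are replaced by a generic normal approximation with study-level inputs theta.hat[i, j] and prec.hat[i, j] (hypothetical names, following the Wakefield 2004 idea), and the between-population correlation rho of (9) is omitted for brevity.

```r
## Assumed JAGS syntax; dnorm/dgamma use the precision parameterization.
model_string <- "
model {
  nu.half <- nu / 2
  for (i in 1:N) {
    w[i] ~ dgamma(nu.half, nu.half)            # eq. (8): E(w[i]) = 1 a priori
    for (j in 1:4) {
      prec[i, j] <- w[i] * lambda[j]
      theta[i, j] ~ dnorm(mu[j], prec[i, j])   # eq. (7), scale mixture of normals
      ## Normal approximation to the study-level likelihood (Wakefield 2004);
      ## theta.hat and prec.hat would be supplied as data.
      theta.hat[i, j] ~ dnorm(theta[i, j], prec.hat[i, j])
    }
  }
  for (j in 1:4) {
    mu[j] ~ dnorm(0, 0.1)                      # eq. (10)
    lambda[j] ~ dgamma(1, 0.1)                 # eq. (10)
  }
  nu ~ dexp(1)                                 # eq. (11)
}
"
## The string could then be passed to rjags (e.g. via textConnection) or to
## another BUGS engine together with the study-level data.
```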
