Debunking Junk Science: Techniques for Effective Use of Biostatistics

Numbers and statistical jargon may make jurors' eyes glaze over, but defense counsel must be alert to show the errors of plaintiffs' experts

By Bruce R. Parker and Anthony F. Vittoria

IADC member Bruce Parker is a partner in the Baltimore firm of Goodell, DeVries, Leech & Gray, LLP, where his practice is concentrated in the areas of products liability and drug and medical device litigation. He is a graduate of Johns Hopkins University (1975) and the Columbus School of Law of the Catholic University of America (1978). Anthony F. Vittoria, an associate in the same firm, is a graduate of the University of Virginia (B.A. 1991, J.D. 1996) and holds an M.A. degree from the College of William and Mary (1993). This article is derived from material Mr. Parker prepared for a Defense Research Institute seminar.

DEFENSE counsel can attack junk science through the effective use of biostatistical evidence. It can be used against plaintiffs' experts both in cross-examination and in using defense experts to explain why plaintiffs' theories are incorrect. This article will focus primarily on how to use statistical evidence to cross-examine plaintiffs' experts effectively.

Biostatistical analysis is, like other disciplines, shrouded in jargon that is hard to cut through. Effectively using biostatistical data[1] requires cutting through the jargon and understanding the statistical concepts.

The first sections of this article discuss statistical concepts.[2] There is concentration on experimental design, since statistical data is no better than the study that produced it, and there is focus on factors that can negatively affect the results of an experiment and how scientists attempt to "control" for these factors.[3] Next is a primer on statistical analysis. It explains many of the statistical concepts discussed in medical literature and used by experts to support their opinions, and the process by which researchers statistically analyze data to determine whether the experiment produced a "significant" result.[4] Last, there are examples of how experts and attorneys mislead juries and courts with statistical testimony. Strategies are offered for effectively cross-examining an expert who relies upon erroneous statistical data.

1. The term "statistical data" is a misnomer. For simplicity, as used in this article, it simply means raw data that have been statistically analyzed for purposes of determining whether the data are statistically significant.

2. Some of the statistical concepts discussed in this paper were addressed in the particular context of epidemiology in Bruce R. Parker, Understanding Epidemiology and Its Use in Drug and Medical Device Litigation, 65 Def. Couns. J. 35 (1998).

3. In experimental design, the term "control" has a meaning other than actual manipulation. "Controlling" a "bias," "factor" or "variable" refers to the process by which researchers attempt to minimize the effect on the study of variables that are not the object of the study. This is done by altering the design of the study to eliminate or reduce the effect of the "confounding" variable. See David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Reference Manual on Scientific Evidence 351 n.56 (Federal Judicial Center 1994).

4. In statistics, the term "significant" has a meaning other than "important" or "noteworthy." To researchers, "significance" refers to whether a study has indicated the "presence" of an association, and not its magnitude or importance. Richard Lempert, Statistics in the Courtroom, 85 Colum. L. Rev. 1098, 1101 (1985).
Page 34 DEFENSE COUNSEL JOURNAL, January 1999

STUDY DESIGN FACTORS

A. Research Design

One of the goals of researchers is to determine whether relationships exist between or among variables. They achieve their goal by designing experiments and accurately recording the data from the experiment. Counsel must review scientific literature and expert testimony based on experimental (either laboratory or clinical) data to consider whether the article or testimony is flawed by poor study design. Pointing out errors in study design is an excellent way to challenge expert testimony under Daubert[5] and at trial.

1. Reliability

Reliability is similar to the concept of reproducibility. It refers to how well the research design produces results that are the same, or very similar, each time the data are collected. An easy way to think of reliability is to consider a scale. A "reliable" scale will report "the same weight for the same object time and again."[6] This does not mean that the scale is accurate (it may always report a weight that is too high or too low), but it always makes the same error each time.

2. Validity

Validity is synonymous with accuracy, and it has internal and external components. Whether the data properly measure the group sampled is a reflection of its degree of internal validity. To the extent the data can be generalized, they have external validity. A study that has high internal validity, but is nevertheless not generalizable, can be misleading.[7]

The concepts of validity and reliability are interrelated. A researcher can have an experimental design that produces reliable but invalid results (that is, the scale always reports that you weigh 175 pounds when you in fact weigh 180), but you cannot have valid results that are not reliable.[8]

3. Sensitivity

The sensitivity of a test refers to the percentage of times that the test correctly gives a positive result when the individual tested actually has the characteristic or trait in question. For example, the sensitivity of a test that is designed to detect high red cell counts is the percentage of people who have high red cell levels and who test positive.

When the test correctly reports that a person has high red cell counts, the result is a true positive. Conversely, when the test reports that a person does not have high red cell counts when, in fact, that person does, the result is a false negative. The numerical value of a test's sensitivity is obtained by dividing the number of true positives by the total of true positives and false negatives in the sample.[9]

4. Specificity

The specificity of a test refers to the percentage of times a test correctly reports that a person does not have the characteristic under investigation. When a test shows that a person who has a normal red cell count is negative, the result is a true negative. A false positive result occurs when the test incorrectly reports a high red cell count, when in fact that person is normal. Specificity is determined by dividing the number of true negatives by the total of true negatives plus false positives.[10]

5. Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).

6. Kaye & Freedman, supra note 3, at 341.

7. Robert H. Fletcher, Suzanne W. Fletcher & Edward H. Wagner, Clinical Epidemiology 22 (3d ed. 1996).

8. Kaye & Freedman, supra note 3, at 342.

9. Leon Gordis, Epidemiology 58 (1996). The formula for sensitivity is: Sensitivity = TP/(TP + FN), where TP is the number of true positives in the sample and FN is the number of false negatives in the sample. Id. at 60.

10. Id. The formula for specificity is: Specificity = TN/(TN + FP), where TN is the number of true negatives in the sample and FP is the number of false positives in the sample.
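The sensitivity and specificity formulas above reduce to simple arithmetic on the four possible test outcomes. The following sketch is illustrative only; the red cell screening counts are hypothetical numbers chosen for the example, not data from any study cited in this article.

```python
def sensitivity(true_positives, false_negatives):
    """Sensitivity = TP / (TP + FN): the fraction of people who truly
    have the trait whom the test correctly flags as positive."""
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    """Specificity = TN / (TN + FP): the fraction of people who truly
    lack the trait whom the test correctly reports as negative."""
    return true_negatives / (true_negatives + false_positives)

# Hypothetical red-cell-count screen: of 100 people with high counts,
# 90 test positive (TP) and 10 test negative (FN); of 100 normal
# people, 80 test negative (TN) and 20 test positive (FP).
print(sensitivity(90, 10))  # 0.9
print(specificity(80, 20))  # 0.8
```

A test can score well on one measure and poorly on the other, which is why cross-examination should probe both figures rather than a single "accuracy" claim.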