Join Conrad Carlberg for an in-depth discussion in this video, "Combine many empirical findings," part of Meta-analysis for Data Science and Business Analytics.
- [Narrator] Back in the 1970s, a statistician named Gene Glass became angry at the writings in professional journals by a psychologist named Hans Eysenck. Eysenck claimed that psychotherapy was merely a placebo. Although Glass had personal reasons for his disagreement with Eysenck, he also had reasons that were grounded in rigorous statistical analysis, sound experimental design, and systematic, as distinct from arbitrary, selection of evidence. Glass believed that Eysenck had cherry-picked studies of psychotherapy to make the ones he agreed with look authoritative and the ones he disagreed with look simpleminded.
In doing so, he made several decisions that were at best questionable and at worst outrageous. Let's get into what I mean by that. Eysenck first eliminated from consideration all studies that did not appear in peer-reviewed journals. Thus, he discarded all the meaningful, carefully conducted research on psychotherapy that was carried out and submitted in support of PhD degrees, as well as research that for various reasons was not submitted to journals for publication. Then Eysenck discarded from the remaining studies any that did not employ an untreated control group.
There is no methodological reason for this, and it would eliminate any studies, such as medical research, that contrasted two active treatments, for example, the effect of aspirin with the effect of Tylenol on headaches. Eysenck also ignored reports that did not pass the criterion of statistical significance at the .05 level, even though such findings are a critical part of the overall evidence regarding any given treatment. Some of the remaining research on psychotherapy reported a significant result on one outcome measure but a nonsignificant result on a different outcome measure.
Eysenck discarded such studies because the results were "inconsistent." Having discarded the great majority of the available studies of the effects of psychotherapy, Eysenck was left with 11 studies that supposedly summarized its benefits. So, based on this lack of sufficient evidence, his final judgment was that psychotherapy is worthless. All this cherry-picking exasperated Glass. The irritation was partly personal, because Glass believed that psychotherapy is indeed beneficial, but it was also professional.
Just a few years earlier, Glass had published a best-selling textbook on statistical analysis and experimental methodology. It annoyed him to see numeric methods, grounded in mathematical and probability theory and carefully worked out over centuries, being misrepresented. So Glass decided to conduct an objective, quantitative review of empirical research on the effects of psychotherapy. He laid out some ground rules, very different from those followed by Eysenck. He would retain all studies that contrasted psychotherapy, or a specific type of psychotherapy, with a comparison group.
However, he would also assign scores to each study: scores that measured such variables as the degree of internal validity of the experimental design, the author's apparent degree of allegiance to the treatment being studied, how well the samples of subjects might represent the populations from which they were taken, and so on. He would focus on the size of the effect reported at the end of the experiment, hence the term effect size. This is a very different focus from a p-value. A reported p-value is intended to express the likelihood of getting a difference between group means as large as the one observed in the experiment when there is no difference in the populations from which the samples were taken. An effect size, in contrast, expresses the size of the difference between the mean of the treated group and the mean of a comparison group. I'll focus on effect sizes in the next lesson.
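To make that distinction concrete, here is a minimal sketch in Python (not part of the course; the group scores are invented) contrasting a p-value from a two-sample t-test with one simple effect size, Glass's delta, which divides the difference between the group means by the comparison group's standard deviation.

```python
# Minimal sketch with hypothetical data: p-value versus effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented scores for a treated group and an untreated comparison group.
treated = rng.normal(loc=55.0, scale=10.0, size=30)
control = rng.normal(loc=50.0, scale=10.0, size=30)

# The p-value asks: how likely is a mean difference this large if the
# population means are actually equal? Its value depends on sample size.
t_stat, p_value = stats.ttest_ind(treated, control)

# Glass's delta asks instead: how large is the difference, measured in
# units of the comparison group's standard deviation?
glass_delta = (treated.mean() - control.mean()) / control.std(ddof=1)

print(f"p-value:       {p_value:.4f}")
print(f"Glass's delta: {glass_delta:.2f}")
```

With larger samples the p-value tends to shrink even when the mean difference stays the same, while the effect size does not depend on sample size in that way; that is part of what makes effect sizes a workable common currency for combining studies.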
- Rationale for meta-analysis
- Straightforward effect sizes
- Standardized mean differences
- Correlation coefficients
- Complex effect sizes: Risk ratios and odds ratios
- Confidence intervals in meta-analysis
- Building confidence intervals around binary-outcome effect sizes