
Predictions Based on the Actuarial Method

Predictions based on correlations found in large bodies of data can be more accurate than judgments by experts. This insight was championed by Paul Meehl, a psychologist with a career spanning 59 years at the University of Minnesota (from 1944 until his death in 2003).

Meehl used the term actuarial, which is most commonly heard in connection with the actuarial tables of the insurance and finance industries. Actuarial tables are used for calculations such as compound interest and life insurance risk.

Meehl's "actuarial method" was to extract correlations from large bodies of data. Today that is called data mining (and large collections of data are called big data).

In the influential book Clinical versus Statistical Prediction (1954), Meehl showed that statistical techniques could do as well as, or better than, trained experts in a variety of situations. For example, correlations were better than experts at predicting whether or not a client in psychotherapy would relapse.

What did Meehl show in his influential 1954 book? What is the actuarial method?

The same thing proved to be true in a variety of other situations. Dawes, Faust, and Meehl (1993) gave a list of studies in which statistical prediction outperformed experts using the same data set:

Academic success (Dawes, 1971; Schofield & Garrard, 1975; Wiggins & Cohen, 1971)

Business bankruptcy (Beaver's, 1966, and Deacon's, 1972, models compared to Libby's, 1976, experts)

Longevity (Einhorn, 1972)

Military training success (Bloom & Brundage, 1947)

Myocardial infarction (Goldman et al., 1988; Lee et al., 1986)

Neuropsychological diagnosis (Leli & Filskov, 1984; Wedding, 1983)

Parole violation (J. Carroll et al., 1982; Gottfredson, Wilkins, & Hoffman, 1978)

Police termination (Inwald, 1988)

Psychiatric diagnosis (Goldberg, 1965)

Violence (Miller & Morris, 1988; Werner, Rose, & Yesavage, 1983)

In one typical study, Szucko and Kleinmuntz (1981) showed that statistical methods outperformed expert judges in evaluating polygraph ("lie detector") records. Within a few years, however, Kleinmuntz suggested that a combination of expert judgment and statistics might be best.

Meehl disagreed. Meehl and colleagues collected data showing that when a lie detector expert contradicted predictions based on correlational methods, the expert was usually wrong (Dawes, Faust, & Meehl, 1989). Kirkegaard (2015) arrived at the same conclusion.

What did research show about lie detector experts?

Computers easily search large data sets for unexpected correlations. The amount of data available now is much greater than it was in Meehl's day, making predictions more accurate where prediction is possible.

One caution: not all systems lend themselves to prediction. Chaos theory makes that clear. For example, weather forecasts are not very accurate more than seven days in advance because of the amount of chaos in weather systems.

In systems that do lend themselves to prediction, a larger dataset (based on past behavior of the system) results in better predictions. Already in Meehl's time, computers outperformed human experts nearly every time the two were compared. Therefore the basic insight Meehl discovered (that computers can outperform human experts) will only become truer with time, as the amount and quality of data grow.

That is not necessarily good for experts. Cohen and Hannigan (1978) told how Cohen was hired to predict who would succeed in a small drug treatment program. Time, money, and energy were being wasted on people who dropped out before the program was complete.

Nobody could tell which patients would succeed. This was despite "approximately 60 psychological scores, scales, measures, and quotients" crammed into each patient's folder.

After months of struggling with complex clinical data, Cohen got fed up. She fed all the information she had into a computer, telling it (in effect) "find anything that correlates with success in the program."

To her surprise, the computer quickly located a few variables that predicted success in the program with amazing accuracy. She had accomplished her goal!

How did Cohen find a way to make incredibly accurate predictions?
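
Cohen's brute-force search can be pictured as a screening step followed by a simple prediction rule. Here is a minimal sketch of that idea (not her actual analysis; the synthetic data, variable names, and model are assumptions for illustration): it builds a stand-in for the patient folders, keeps the few scores most correlated with program completion, and fits a simple model on just those.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the patient folders: many scores, one 0/1 outcome.
rng = np.random.default_rng(0)
n = 60
folders = pd.DataFrame(rng.normal(size=(n, 8)),
                       columns=[f"score_{i}" for i in range(8)])
# Make completion depend mostly on two of the scores, plus noise.
signal = folders["score_1"] + folders["score_4"] + rng.normal(scale=0.5, size=n)
folders["completed"] = (signal > 0).astype(int)

def screen_and_fit(data, outcome="completed", top_n=2):
    """Keep the scores most correlated with the outcome, then fit a simple model."""
    scores = data.drop(columns=[outcome])
    ranked = scores.corrwith(data[outcome]).abs().sort_values(ascending=False)
    best = ranked.head(top_n).index.tolist()
    model = LogisticRegression().fit(data[best], data[outcome])
    return best, model

best, model = screen_and_fit(folders)
print("Most predictive scores:", best)
# model.predict(new_clients[best]) would then flag likely completers among new clients.

The point is not the particular model but the screening idea: once a few reliably correlated variables are found, a simple rule built on them can do the predicting.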

Now the clinic could quickly predict which clients would stick with the program. So they fired her. Her expertise was no longer needed. Cohen and Hannigan noted, "Every silver lining has a cloud."

---------------------
References:

Cohen, A., & Hannigan, P. (1978). American Psychologist, 33, 1144-1145.

Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243, 1668-1674.

Dawes, R. M., Faust, D., & Meehl, P. E. (1993). Statistical prediction versus clinical prediction: Improving what works. In G. Keren & C. Lewis (Eds.), A Handbook for Data Analysis in the Behavioral Sciences: Methodological Issues (pp. 351-367). Hillsdale, NJ: Lawrence Erlbaum.

Kirkegaard, E. O. W. (2015, July 5). Clinical vs. statistical prediction. Clear Language, Clear Mind [blog]. Retrieved from https://emilkirkegaard.dk/en/?p=6085

Meehl, P. E. (1954). Clinical versus Statistical Prediction. Minneapolis: University of Minnesota Press.

Szucko, J. J., & Kleinmuntz, B. (1981). Statistical versus clinical lie detection. American Psychologist, 36, 488-496.

