Junk Science, Test Validity, and the Uniform Guidelines for Personnel Selection Procedures: The Case of Melendez v. Illinois Bell
Fred B. Bryant, Ph.D. and Elaine K.B. Siegel
Loyola University Chicago and Hager & Siegel, P.C.
This paper stems from a recent federal court case in which a standardized test of cognitive ability developed by AT&T, the Basic Scholastic Aptitude Test (BSAT), was ruled invalid and discriminatory for use in hiring Latinos. Within the context of the BSAT, we discuss spurious statistical arguments advanced by the defense that exploited certain language in the current Uniform Guidelines for evaluating the fairness and validity of personnel selection tests. These issues include: (a) how to avoid capitalizing on chance; (b) what constitutes “a measure” of job performance; (c) how to judge the meaningfulness of group differences in performance measures; and (d) how to combine data from different sex, race, or ethnic subgroups when computing validity coefficients for the pooled, total sample. Pursuant to the Uniform Guidelines’ standard for unfairness, when one ethnic group scores higher on an employment test, the test is deemed “unfair” if this difference is not reflected in a measure of job performance. Although studies validating selection instruments often survive the unfairness test, such data are vulnerable to bias and manipulation if appropriate statistical procedures are not used. We consider both the benefits (greater clarity and precision) and the potential costs (loss of legal precedent) of revising the Uniform Guidelines to address these issues. We further discuss legal procedures to limit “junk science” in the courtroom, and the need to reevaluate validity generalization in light of Simpson’s “false correlation” paradox.
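The pooling problem named in point (d) can be made concrete with a small numerical sketch of Simpson's "false correlation" paradox. The data below are hypothetical (not from the Melendez case or the BSAT): within each of two subgroups, test scores and job-performance ratings are exactly uncorrelated, yet the pooled sample yields a large positive "validity coefficient" produced entirely by the mean differences between the groups.

```python
# Hypothetical illustration of Simpson's "false correlation" paradox:
# zero validity within each subgroup, but a large spurious validity
# coefficient when the subgroups are pooled.

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Group A: high test scores, high performance ratings.
a_test = [80, 85, 90, 95]
a_perf = [7, 9, 9, 7]      # within-group correlation with a_test is 0

# Group B: low test scores, low performance ratings.
b_test = [40, 45, 50, 55]
b_perf = [3, 5, 5, 3]      # within-group correlation with b_test is 0

print(round(pearson_r(a_test, a_perf), 2))   # 0.0 within group A
print(round(pearson_r(b_test, b_perf), 2))   # 0.0 within group B

# Pooling the groups manufactures a strong "validity coefficient":
pooled = pearson_r(a_test + b_test, a_perf + b_perf)
print(round(pooled, 2))                      # 0.86 in the pooled sample
```

The pooled coefficient reflects only the between-group mean differences, not any within-group predictive value of the test, which is why validity coefficients computed on pooled sex, race, or ethnic subgroups can be statistically misleading.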