Effect of Sample Size on Discovery of Relationships in Random Data by Classification Algorithms

Ariel Linden & Paul R. Yarnold

Linden Consulting Group, LLC & Optimal Data Analysis, LLC

In a recent paper, we assessed the ability of several classification algorithms (logistic regression, random forests, boosted regression, support vector machines, and classification tree analysis [CTA]) to correctly conclude that no relationship existed between the dependent variable and ten covariates generated completely at random. Only CTA correctly determined that no relationship existed. In this study, we examine whether randomly drawn subsets of the original N = 1,000 dataset—samples of 250 and 500 observations—change the ability of these models to correctly conclude that no relationship exists. We further test the hold-out validity of each model by applying its logic to the remaining sample and computing the area under the receiver operating characteristic curve (AUC). Our results indicate that reducing the sample size has no effect on whether the classification algorithms correctly determine that no relationship exists between variables in randomly generated data: only CTA consistently identified that the data were random.
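The subsample-and-hold-out procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the data, the choice of logistic regression as the fitted classifier, and the random seed are all assumptions made for the example. On data with no true relationship, the hold-out AUC should hover near chance (0.5).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Fully random data: ten continuous covariates and a binary outcome,
# with no relationship between them by construction.
N = 1000
X = rng.normal(size=(N, 10))
y = rng.integers(0, 2, size=N)

for n in (250, 500):
    # Randomly draw a subsample of size n; hold out the remainder.
    idx = rng.permutation(N)
    train, hold = idx[:n], idx[n:]
    model = LogisticRegression().fit(X[train], y[train])
    # Apply the fitted model's logic to the remaining sample and
    # compute the hold-out AUC; near 0.5 indicates no discrimination.
    auc = roc_auc_score(y[hold], model.predict_proba(X[hold])[:, 1])
    print(f"n={n}: hold-out AUC = {auc:.3f}")
```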

View journal article

ODA vs. χ², r, and τ: Trauma Exposure in Childhood and Duration of Participation in Eating-Disorder Treatment Program

Paul R. Yarnold

Optimal Data Analysis, LLC

This note illustrates the disorder and confusion attributable to an analytic ethos whereby a smorgasbord of different statistical tests is used to test identical or parallel statistical hypotheses. Here four classic methods are applied to a binary class (dependent) variable and an ordered attribute (independent variable) measured on a five-point scale. The legacy methods reach different conclusions, so which is correct? In absolute contrast, for a given sample and hypothesis, novometric analysis identifies every statistically viable model (models vary as functions of precision and complexity) that reproducibly maximizes predictive accuracy for the sample.
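To see how several legacy tests can be run against the same binary-class/five-point-ordinal design, consider the following sketch. The data are entirely hypothetical (not the paper's data), invented solely to show that χ², the point-biserial r, and Kendall's τ are distinct procedures that return distinct p-values for one and the same hypothesis.

```python
import numpy as np
from scipy import stats

# Hypothetical illustration: a binary class variable (two groups of 30)
# and an ordinal attribute rated on a five-point scale.
group = np.array([0] * 30 + [1] * 30)
rating = np.array(
    [1] * 8 + [2] * 9 + [3] * 7 + [4] * 4 + [5] * 2    # group 0
    + [1] * 3 + [2] * 6 + [3] * 8 + [4] * 7 + [5] * 6  # group 1
)

# Chi-square test of independence on the 2 x 5 contingency table
table = np.array([[np.sum((group == g) & (rating == r)) for r in range(1, 6)]
                  for g in (0, 1)])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

# Point-biserial correlation (Pearson r with one binary variable)
r_pb, p_r = stats.pointbiserialr(group, rating)

# Kendall's tau (rank correlation)
tau, p_tau = stats.kendalltau(group, rating)

print(f"chi-square p={p_chi2:.3f}, point-biserial p={p_r:.3f}, tau p={p_tau:.3f}")
```

Depending on the sample, the three p-values can straddle a significance threshold, which is exactly the "different conclusions" problem the note describes.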

View journal article

Some Machine Learning Algorithms Find Relationships Between Variables When None Exist — CTA Doesn’t

Ariel Linden & Paul R. Yarnold

Linden Consulting Group, LLC & Optimal Data Analysis, LLC

Automated machine learning algorithms are widely promoted as the best approach for estimating propensity scores because these methods detect patterns in the data that manual efforts fail to identify. If classification algorithms are indeed ideal for identifying relationships between treatment-group participation and the covariates that predict participation, then it stands to reason that these algorithms should also find no relationships when none exist (i.e., when covariates do not predict treatment-group assignment). Accordingly, we compare the predictive accuracy of maximum-accuracy classification tree analysis (CTA) with that of the classification algorithms most commonly used to obtain the propensity score (logistic regression, random forests, boosted regression, and support vector machines). Here we use an artificial dataset in which ten continuous covariates are generated at random and by design have no correlation with the binary dependent variable (i.e., treatment assignment). Among all of the algorithms tested, only CTA correctly failed to discriminate between treatment and control groups on the basis of the covariates. These results lend further support to the use of CTA for generating propensity scores as an alternative to other currently favored approaches.
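The failure mode at issue can be reproduced in a few lines. This sketch is an assumption-laden illustration (random forest only, arbitrary seed, not the paper's dataset or settings): an unconstrained random forest fitted to pure noise discriminates the "treatment" and "control" groups nearly perfectly on the training data, i.e., it finds a relationship where none exists by construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Ten continuous covariates generated at random; the binary "treatment"
# assignment is independent of all of them by construction.
X = rng.normal(size=(1000, 10))
y = rng.integers(0, 2, size=1000)

# With no depth limit, the forest can memorize the noise, so its
# apparent (training-sample) discrimination is far above chance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
train_auc = roc_auc_score(y, rf.predict_proba(X)[:, 1])
print(f"training AUC on pure noise: {train_auc:.3f}")
```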

View journal article

Weighted Optimal Markov Model of a Single Outcome: Ipsative Standardization of Ordinal Ratings is Unnecessary

Paul R. Yarnold

Optimal Data Analysis, LLC

This note empirically compares the use of raw vs. ipsatively standardized variables in optimal weighted Markov analysis involving a series of measurements of a single outcome—here, an individual's ratings of sleep difficulties. Findings indicate that raw-score and ipsatively standardized ordinal ratings yield equivalent results in such designs.
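For readers unfamiliar with the term, ipsative standardization z-scores each observation against the individual's own mean and standard deviation. The sketch below uses invented ratings (not the paper's data) and illustrates why the two scalings can behave equivalently for a single-person series: the transform is linear with a positive slope, so the ordering of the ratings is unchanged.

```python
import numpy as np

# Hypothetical nightly sleep-difficulty ratings (ordinal, 1-5) for one person.
ratings = np.array([3, 4, 2, 5, 3, 3, 4, 2], dtype=float)

# Ipsative standardization: z-score each rating against this
# individual's own mean and standard deviation.
z = (ratings - ratings.mean()) / ratings.std()

# A positive linear transform preserves the ordering of the series,
# so raw and ipsatively standardized ratings induce the same ranks.
same_order = np.argsort(ratings).tolist() == np.argsort(z).tolist()
print(f"ordering preserved: {same_order}")
```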

View journal article

More On: “Optimizing Suboptimal Classification Trees: S-PLUS® Propensity Score Model for Adjusted Comparison of Hospitalized vs. Ambulatory Patients with Community-Acquired Pneumonia”

Paul R. Yarnold

Optimal Data Analysis, LLC

A recent article optimized the effect strength for sensitivity (ESS) of a suboptimal classification tree model that discriminated hospitalized vs. ambulatory patients with community-acquired pneumonia (CAP). This note suggests possible alternatives to two of the original attributes as a means of increasing model accuracy: patient disease-specific knowledge in place of “college education”, and patient-specific functional status and social support in place of “living arrangement”.

View journal article