TY - JOUR
TI - The impact of error on offender risk classification
DO - https://doi.org/doi:10.7282/T3P849H1
PY - 2013
AB - In criminal justice, offender risk classification seeks to divide individuals into different groups, normally so that varying levels of program treatment, custody, or supervision can be effectively and optimally allocated. The goal of effectively separating offenders based on prearranged criteria, however, is often thwarted by error problems, resulting in the misclassification of individuals. How these initial error problems eventually translate into final misclassification is not completely understood. This dissertation therefore attempts to model the effects of error on the tolerance of offender risk classification instruments. Specifically, different properties and characteristics of classification devices are analyzed to understand their impact on the transfer of error from the initial to the final classification phase. Suitable risk data and instruments for testing all of the proposed research questions and hypotheses are not readily available, because exploring the different facets of these inquiries requires specific conditions that are more easily built into fabricated data than found in the real world. Thus, relying on both conceptual data and actual risk data, random and systematic error are simulated and injected into each risk instrument to gain insight into how unreliability and invalidity statistically affect classification. The risk data are engineered using Monte Carlo simulation: construction methods that make random draws from an error distribution and run multiple replications over a set of known parameters. This methodology is particularly relevant in situations where the only analytical findings involve asymptotic, large-sample results. Monte Carlo simulation enables the construction of multiple datasets in a “laboratory setting” that simulate data in the real world, allowing evaluations of the impact of different risk properties on the transfer of error. The current study asks two main questions: 1) what is the impact of error in risk data on overall classification outcomes; and 2) how does such error affect validity? The study found that risk tools generally have a low tolerance for error: injecting 10 percent error into risk assessment information produced 25 to 40 percent error in classification outcomes. However, the injection of random error only minimally reduces classification validity, causing the subgroup recidivism/base rates for each category to shrink mildly toward the mean. Different risk tools and factors play a critical role in determining an instrument’s sensitivity to error. Specific properties such as dichotomous risk items, fewer risk categories, lower item weights, and a larger number of risk items reduce a tool’s sensitivity to error. A risk tool’s tolerance for error is thereby controlled by a confluence of factors. This dissertation facilitates a better understanding of the interplay between error in risk information and error in classification outcomes. The findings improve knowledge of the sensitivity of offender risk classification instruments to error, and they explain how that sensitivity is aggravated or mitigated by the inclusion of common risk instrument properties.
KW - Criminal Justice
KW - Criminals--Identification
KW - Risk assessment
KW - Criminal justice, Administration of
KW - Monte Carlo method
LA - eng
ER -