Risk/Needs Assessments: Separating Fact from Fiction

The Northpointe Suite • An equivant product

Assessments provide a means of “applying empirical knowledge and research-supported principles to justice system decisions” (National Institute of Corrections: Evidence-Based Decision Making). Criminal justice professionals use these tools at each decision point along the way. Assessments have enhanced the pre-trial process by providing a robust alternative to the traditional bail system, which overcrowds jails and disproportionately penalizes low-income offenders who are unable to pay. Jails rely on the tools to classify and house inmates according to risk level. And supervision agencies use needs assessment data to help offenders get treatment and to build risk-reduction strategies tailored to their specific needs.

But here’s the catch: when analog meets digital, a lot can get lost in translation. Skepticism can overtake the conversation, and once we hear accounts of a “secret algorithm” replacing human decisions, it’s almost too easy to get carried away with notions of sci-fi dystopia where “the computers are taking over” and we’re at the mercy of machines.

But that’s exactly what those ideas are: fiction. As we integrate risk/needs assessments more and more into our everyday processes, let’s take a step back and fact-check the most prevalent myths surrounding these tools so we can use them confidently, free of fear and misconceptions.

MYTH #1: Risk/Needs Assessments Issue Sentences in the Courtroom

Perhaps the most vital fact to know about risk/needs assessments is that they’re only tools – just one of many sources of information about an individual. At the end of the day, a sentence is issued only by a judge, not by a computer program. Judges weigh numerous factors in every case and must follow strict sentencing guidelines. That is how individuals receive sentences in courtrooms across the country.

In July 2016, in State v. Loomis, the Wisconsin Supreme Court unanimously ruled that the state can continue to use risk/needs assessments to assist judges with sentencing. The court concluded that “…if used properly, observing the limitations and cautions set forth herein, a circuit court’s consideration of a COMPAS risk assessment at sentencing does not violate a defendant’s right to due process.”

The Wisconsin Supreme Court decision reflects what users of risk/needs assessments have known for years: these tools are designed to help us, not replace us.

MYTH #2: Risk Assessments Are Racially Biased 

One of the most persistent misconceptions about risk assessments is that racial disparity is coded into the algorithm. The facts? A tool such as COMPAS, a leading risk/needs assessment, is designed to consider several factors, but race is not among them. In fact, race is not an input to any calculation used to compute a risk score in the software. COMPAS considers two types of factors: static and dynamic. Static factors are data that cannot be changed, such as prior criminal offenses. Dynamic factors are data that can be influenced, such as drug use or known association with other criminals. In neither set of factors is the race of the offender considered.
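To make the static/dynamic distinction concrete, here is a minimal sketch of a factor-based score. The factor names, weights, and formula are invented for illustration; they are not COMPAS’s actual inputs or model. The point is structural: race is simply not a field in either set of inputs, so it cannot enter the score.

```python
# Hypothetical sketch of a factor-based risk score. Factor names and
# weights are illustrative inventions, NOT COMPAS's actual model.
from dataclasses import dataclass

@dataclass
class StaticFactors:
    prior_offenses: int        # history that cannot change
    age_at_first_arrest: int

@dataclass
class DynamicFactors:
    substance_abuse: float     # 0.0-1.0 severity; can improve with treatment
    criminal_associates: float # 0.0-1.0; can change over time

def risk_score(static: StaticFactors, dynamic: DynamicFactors) -> float:
    """Combine weighted static and dynamic factors into one score.

    Note the inputs: behavioral and historical factors only. Race is not
    a field in either structure, so it cannot affect the result.
    """
    return (
        1.5 * static.prior_offenses
        + 0.5 * max(0, 25 - static.age_at_first_arrest)
        + 4.0 * dynamic.substance_abuse
        + 3.0 * dynamic.criminal_associates
    )

print(risk_score(StaticFactors(prior_offenses=2, age_at_first_arrest=19),
                 DynamicFactors(substance_abuse=0.6, criminal_associates=0.3)))
```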

Claims of racial bias in risk/needs tools have been debunked. Researchers from California State University found several statistical and technical errors in a 2016 ProPublica article that sought to prove racial bias. What makes the ProPublica article especially ironic is that risk/needs assessment tools can and often do curb the racial bias that creeps into a human-only decision-making process, which brings us to our next myth…

MYTH #3: Human Judgment Alone Is Infallible

We take pride in the precision of our own observations, so it’s not hard to see why some people may bristle at the idea of allowing software to support decision-making. The truth is, even the best and brightest of us are susceptible to biases (both conscious and unconscious), and human judgment alone is in fact quite vulnerable to misperceptions about the world we live in.

That’s why technology and data have played such a significant role in decision support – not just in criminal justice, but in many other industries as well. For example, insurance companies use risk assessment outputs to calculate premiums. The financial industry uses credit risk assessments to process loans, as well as data mining and analytics to combat identity fraud. In these examples, as within our justice system, risk/needs tools enhance human judgment with outputs based on objective data while improving speed, accuracy, cost savings, and consistency.
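The pattern these industries share fits in a few lines. The sketch below is a generic illustration – the thresholds, categories, and field names are assumptions, not any vendor’s product – showing how software produces an advisory output while the human professional records the actual decision and the reasoning behind it.

```python
# Generic decision-support pattern: the tool advises, the human decides.
# Thresholds and labels below are illustrative assumptions only.
def advisory_category(score: float) -> str:
    """Map a numeric risk score onto an advisory category."""
    if score < 10:
        return "low"
    elif score < 20:
        return "medium"
    return "high"

def final_decision(score: float, human_decision: str, rationale: str) -> dict:
    """Record the machine output as one input alongside the human's call."""
    return {
        "advisory_category": advisory_category(score),  # machine output: an input
        "decision": human_decision,                     # human judgment: the output
        "rationale": rationale,                         # accountability trail
    }

print(final_decision(12.5, "release with supervision",
                     "stable employment and family support"))
```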

MYTH #4: It’s All About Risk

Though it’s baked right into the name, it’s easy to forget that these tools are as much about determining the needs of an offender as they are about understanding potential risks. Needs assessments focus on the scope and type of treatment interventions that would most benefit a person, allowing for individualized plans that can make a greater impact. “Cookie cutter” approaches to treatment have proven as ineffective as ignoring needs altogether and hoping a purely punitive approach breaks the cycle of crime. Having input on the needs involved benefits the person and our justice system alike, providing more information with which to support decisions and to build the most cost-effective programs.
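As a hypothetical illustration of the difference between a “cookie cutter” plan and a needs-driven one, the sketch below maps only a person’s elevated need areas to interventions. The need domains, threshold, and interventions are invented for this example and do not reflect any specific assessment instrument.

```python
# Hypothetical needs-driven case planning: different needs profiles
# produce different plans, instead of one plan for everyone.
NEEDS_TO_INTERVENTIONS = {
    "substance_abuse": "outpatient drug treatment",
    "education": "GED preparation program",
    "employment": "vocational training and job placement",
    "cognitive": "cognitive behavioral therapy",
}

def case_plan(needs_scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Build an individualized plan from only the elevated need areas."""
    return [
        NEEDS_TO_INTERVENTIONS[need]
        for need, score in needs_scores.items()
        if score >= threshold and need in NEEDS_TO_INTERVENTIONS
    ]

# Two people with different needs profiles get different plans:
print(case_plan({"substance_abuse": 0.8, "education": 0.2, "employment": 0.7}))
print(case_plan({"substance_abuse": 0.1, "cognitive": 0.9}))
```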

It’s easy to see why these four myths have gained traction, especially in light of trending concerns about computers interfering in our lives or making grievous errors that cause harm. These fallacies ultimately stem from a good place: they speak to our shared concern for fair treatment and for maintaining control of the system in which we live. It’s important to know the facts and learn the truth about how risk/needs assessments integrate with our day-to-day processes. Above all, we must not lose sight of the most vital fact: the algorithms aren’t in control. We are.