JJIE Hub: Glossary — Evidence-based Practice

What follows are brief definitions of key terms or additional information about phrases used in the juvenile justice field in discussions about evidence-based practice. While not exhaustive, these are intended to provide additional information for users of the site.


Adjudicatory Hearing (or Adjudication)
The trial on the charges in a delinquency case is called an adjudicatory hearing or adjudication. The juvenile court judge hears the evidence and determines whether a youth has committed a delinquent act. This hearing is similar to a trial in an adult criminal case, though in most states the youth cannot request a jury. At the adjudicatory hearing, the prosecutor must prove the case beyond a reasonable doubt. The youth has a right to notice of the charges, a right to counsel, a right to confront witnesses, and a right to avoid self-incrimination.

Aggression Replacement Training
Aggression Replacement Training is a cognitive behavioral intervention program that concentrates on developing youths’ individual competencies — helping them build social skills, improve moral reasoning, and control angry impulses and aggressive behavior. The program targets chronically aggressive children and adolescents ages 12-17.[1]

Collaborative Decision-Making Body[2]
A broad group of systems professionals and stakeholders that may include provider agencies, judicial leaders, political leaders, practitioners, advocates, district attorneys, public defenders, and families and youth. Such a group comes together to plan the selection and implementation of evidence-based practices to serve justice-involved youth in a particular jurisdiction.

Cultural Competence
In the juvenile justice system, “cultural competence” refers to the ability of juvenile justice system professionals (such as police, probation and correctional staff, attorneys, judges, and program providers) “to understand and respect values, attitudes, beliefs, and mores that differ across cultures and to respond appropriately to these differences” in interacting with youth and their families, and in planning and implementing programs for them.[3]

Cultural Responsiveness
Cultural responsiveness refers to tailoring services to particular cultural groups. It includes a variety of mechanisms, such as client-therapist matching, using culturally relevant terminology or communication styles, and treatment coordination with traditional healers.[4]

Effect Size
In the context of juvenile justice research, effect size is a measurement that quantifies how much a program has been shown to change an outcome (such as recidivism) for program participants compared to the change that would generally be expected without intervention.[5] An effect size is a value above or below zero, with zero indicating no change. When the goal of the research is to see whether (or how much) an intervention reduces recidivism, the more negative the effect size, the greater the reduction. According to Dr. Jeffrey Butts of the Research and Evaluation Center at John Jay College of Criminal Justice, the most successful juvenile justice interventions generally have an effect size between -0.10 and -0.30.[6]
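The arithmetic behind an effect size can be illustrated with the standardized mean difference (Cohen’s d), one common effect-size measure. The cited sources do not specify a particular formula, so this is only an illustrative sketch, and the recidivism figures below are invented for the example.

```python
import math

def cohens_d(treatment, comparison):
    """Standardized mean difference between two groups (Cohen's d)."""
    n1, n2 = len(treatment), len(comparison)
    m1 = sum(treatment) / n1
    m2 = sum(comparison) / n2
    # Sample variances (n - 1 in the denominator).
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in comparison) / (n2 - 1)
    # Pool the two standard deviations, weighted by group size.
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical data: 1 = reoffended during follow-up, 0 = did not.
program_group = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]     # 20% recidivism
comparison_group = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]  # 40% recidivism

d = cohens_d(program_group, comparison_group)
print(round(d, 2))  # negative: the program group reoffended less
```

A negative d here reflects the convention in the entry above: when the outcome is recidivism, a more negative effect size means a larger reduction.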

Evidence-based Practices
Evidence-based practices (commonly called “EBPs”), as applied to the juvenile justice arena, refers generally to programs, practices, and policies used to prevent and reduce youth crime that have been documented to be effective through rigorous evaluation.[7]

Functional Family Therapy (FFT)
FFT is a short-term (approximately 30 hours), family-based therapeutic intervention for delinquent youth at risk for institutionalization and their families. It is designed to improve family communication and supportiveness while decreasing intense negativity and dysfunctional patterns of behavior.[8]

Measurement Bias
“Measurement bias” as used here refers to “differences in . . . [the risk assessment tool’s] ability to predict reoffending among different racial and ethnic groups; in other words, whether the tool is equally valid to use with different racial and ethnic groups of youth.”[9]

Meta-Analysis
When performing a meta-analysis, researchers use statistical methods to compare multiple research studies of an intervention and draw conclusions about the intervention’s effectiveness.[10] See also, “systematic review,” below.
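The core statistical step in many meta-analyses is pooling the effect sizes from individual studies, with more precise studies (smaller variance) given more weight. The sketch below shows a fixed-effect, inverse-variance weighted average; the study numbers are invented, and real meta-analyses involve additional steps (study selection, heterogeneity checks) not shown here.

```python
def pooled_effect(effects, variances):
    """Inverse-variance weighted mean effect size (fixed-effect model)."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical effect sizes and variances from three studies of one intervention.
study_effects = [-0.15, -0.25, -0.10]
study_variances = [0.010, 0.020, 0.005]

# The most precise study (variance 0.005) pulls the pooled estimate toward it.
print(round(pooled_effect(study_effects, study_variances), 3))
```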

Multi-systemic Therapy (MST)
MST is an intensive family- and community-based treatment that addresses the multiple causes of serious antisocial behavior in juvenile offenders. Therapists work with youth and families to improve youth’s functioning in the real-world by making changes that promote prosocial behavior while decreasing antisocial behavior.[11]

Net Widening
Net widening refers to widening the “net” or reach of the justice system so that more youth are formally processed. This may occur when juvenile justice programs or practices end up being used in a way that sends youth into the justice system (or keeps them there) to receive treatment when they would not likely have entered or remained in the system otherwise.

Protective Factors
Protective factors decrease the potentially harmful effect of risk factors. Examples include an easy temperament, healthy social support, good problem-solving ability, or a strong commitment to school. Also known as buffers or strengths.[12]

Recidivism
Recidivism refers to the commission of a crime by a person known to have committed at least one previous offense. It can be measured in a number of different ways, such as through re-arrest, re-adjudication, and re-incarceration. Recidivism data can be gleaned from a variety of sources, such as self-report, arrest records, and court records.

Rigorous Evaluation/Research
A rigorous evaluation refers to a scientific study that typically involves one of two designs:

    • an experimental design, such as a randomized controlled trial, in which similarly situated people (such as male youth between the ages of 15-16 held at a particular detention center for similar offenses) are randomly assigned to a treatment group (those given the service or program) or a control group (those not given it), and the outcomes of the two groups are compared at the end of the study; or
    • a quasi-experimental design, in which participants are not randomly assigned; instead, the group that receives the service or program is compared to a group of people as similarly situated to the participant group as possible, but who did not receive this type of service or program.[13]

Risk Assessment Instrument
This is a tool used to determine a youth’s likelihood of re-offending. A comprehensive risk assessment tool will try to answer the additional question of what factors in the youth’s life or which characteristics of the youth are likely driving him or her to offend and may lead to more offending.

Scared Straight
Scared Straight programs generally involve the interaction of at-risk youth or youth involved in the juvenile justice system with adult inmates who describe the brutal, negative conditions they experience while incarcerated in order to shock, scare, and deter youth from committing offenses.[14]

Standardized, Empirically-Validated Tools
This term refers to tools with the following characteristics:[15]

    • the tool is standardized or replicable, meaning it is implemented basically the same way every time it is used;
    • research has demonstrated its predictive validity to gauge risk; and
    • it uses empirically-supported risk factors – meaning that research has shown these risk factors to be related to future delinquent behavior.

Statistical Significance
Statistical significance indicates how unlikely it is that the effect observed in an experiment is due to chance alone. It is stated as the probability that the association would occur by chance alone (the “p-value”). For example, if an intervention shows a statistically significant reduction in recidivism at p < .01, there is less than a one percent chance that the observed drop in recidivism occurred by chance.[16]
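One common way to obtain a p-value when comparing recidivism rates in two groups is a two-proportion z-test. The sketch below uses the normal approximation and invented counts purely for illustration; the cited sources do not prescribe a particular test.

```python
import math

def two_proportion_p_value(successes1, n1, successes2, n2):
    """Two-sided p-value for a difference in proportions (normal approximation)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    # Standard error of the difference under the null hypothesis of no difference.
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # erfc(|z| / sqrt(2)) equals the two-sided normal tail probability.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical: 30 of 200 program youth reoffended vs. 60 of 200 comparison youth.
p = two_proportion_p_value(30, 200, 60, 200)
print(p < 0.01)  # True: significant at the one percent level
```

Here the difference between 15% and 30% recidivism in groups of 200 yields a p-value well below .01, so chance alone is a very unlikely explanation for the gap.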

Systematic Review
A systematic review is a review of multiple primary research studies on a particular research question that tries to identify, select, synthesize, and appraise all the high-quality research evidence relevant to that question. Systematic reviews often use statistical techniques such as meta-analysis to compare the studies; results are integrated using a recognized methodology.[17]

Validation
Validation refers to a process by which the accuracy of a risk assessment instrument is tested in predicting the risk it was designed to assess, such as a failure to appear in court or a new arrest, adjudication of delinquency, or conviction.

Wilderness Challenge Programs
Wilderness challenge programs or camps are residential programs for youth that provide participants with physically challenging outdoor activities in natural environments such as forests, mountains, and deserts, depending on the program.[18]



Notes

[1] PennState EPiSCenter, “Evidence-Based Programs: Aggression Replacement Training,” accessed May 12, 2014.

[2] Mental Health and Juvenile Justice Collaborative for Change, “Implementing Evidence-Based Practices – Establish a Collaborative Decision-Making Body,” accessed September 6, 2018. 

[3] Curricula Enhancement Module Series, “Definitions of Cultural Competence,” National Center for Cultural Competence, Georgetown University Center for Child and Human Development, accessed September 25, 2018.

[4] Elizabeth Feldman, Eric Trupin, Sarah Walker, and Jacquelyn Hansen, “Evidence-Based Practices with Latino Youth: A Literature Review,” (Seattle, WA: University of Washington Department of Psychiatry and Behavioral Sciences, The Division of Public Behavioral Health and Justice Policy and Models for Change, November 25, 2013): 15.

[5] Elizabeth Drake, “The Effectiveness of Declining Juvenile Court Jurisdiction of Youthful Offenders” (Olympia, WA: Washington State Institute for Public Policy, 2013): 8.

[6] Jeffrey A. Butts, “Effect Size,” Evidence Generation, accessed April 7, 2014; Jeffrey A. Butts, “What’s the Evidence for Evidence-Based Practice?” Research and Evaluation Data Bits (New York, NY: Research and Evaluation Center, John Jay College of Criminal Justice, City University of New York, 2012). See also Jeffrey A. Butts and John K. Roman, “Better Research for Better Policies,” in Juvenile Justice: Advancing Research, Policy, and Practice, Francine T. Sherman and Francine H. Jacobs, eds. (John Wiley & Sons, Inc., Oct. 2011), 511-514.

[7] Jeffrey A. Butts, “Evidence-Based,” Evidence Generation, accessed October 22, 2018; Mental Health and Juvenile Justice Collaborative for Change, “Overview: EBPs Defined,” accessed January 2, 2019.

[8] Blueprints for Healthy Youth Development, “Functional Family Therapy,” accessed May 12, 2014.

[9] Gina M. Vincent, Laura S. Guy, and Thomas Grisso, Risk Assessment in Juvenile Justice: A Guidebook for Implementation, (Chicago, IL: John D. and Catherine T. MacArthur Foundation, Models for Change Initiative, November 2012): 44.

[10] “The Meta Analysis of Research Studies,” ERIC Clearinghouse on Assessment and Evaluation, Department of Measurement, Statistics, and Evaluation, University of Maryland, College Park, accessed April 8, 2014, citing Glass, 1976, p. 3.

[11] MST Services, “Multisystemic Therapy (MST),” accessed June 2, 2019.

[12] Vincent, Guy, and Grisso, 31, 34.

[13] Siobhan M. Cooney, Mary Huser, Stephen Small, and Cailin O’Connor, “Evidence-Based Programs: An Overview,” What Works, Wisconsin – Research to Practice Series, 6 (Madison, WI: University of Wisconsin-Madison and University of Wisconsin-Extension, 2007), 2.

[14] Virginia Department of Criminal Justice Services.

[15] Vincent, Guy, and Grisso, 37-41.

[16] Jeffrey A. Butts, “Effect Size,” accessed April 7, 2014; Jeffrey A. Butts, “What’s the Evidence for Evidence-Based Practice?” Research and Evaluation Data Bits (New York, NY: Research and Evaluation Center, John Jay College of Criminal Justice, City University of New York, 2012); Robert Coe, “It’s the Effect Size, Stupid: What Effect Size is and Why It’s Important,” (paper presented at the British Educational Research Association annual conference, Exeter: September 12-14, 2002); and Jeffrey A. Butts and John K. Roman, “Better Research for Better Policies,” in Juvenile Justice: Advancing Research, Policy, and Practice, Francine T. Sherman and Francine H. Jacobs, eds. (John Wiley & Sons, Inc., Oct. 2011), 511-514.

[17] Julian PT Higgins and Sally Green, ed., Cochrane Handbook for Systematic Reviews of Interventions, 1.2.2 What is a Systematic Review?, Version 5.1.0 (updated March 2011), accessed August 4, 2014.

[18] Office of Juvenile Justice and Delinquency Prevention.