WASHINGTON — Using evidence-based programs in juvenile justice means more than pulling brand-name interventions off the shelf, says a leader in the field.
Policymakers and practitioners have a large body of research to draw on to determine whether a particular program has promise, including homegrown, mom-and-pop programs, said Shay Bilchik, director of the Center for Juvenile Justice Reform at Georgetown University.
But knowing a program has potential isn’t enough on its own. Programs also need to be embedded in a work culture that recognizes the importance of assessing juveniles’ risks and needs, matching juveniles to the programs best suited for them and continually evaluating a program’s effects, he said during a Forum for Youth Investment webinar today.
Without those steps, even the best-designed programs with the best-intentioned staff are undermined. For example, a program that works for juveniles at moderate risk of reoffending isn’t the right place for their high-risk peers. A “misdiagnosis” can increase the odds of recidivism rather than drive them down.
“I’m setting up that provider for failure. I’m setting up the child for failure,” Bilchik said. The center at Georgetown has worked with communities to develop their evidence-based practices.
Karen Pittman, president and CEO of the Forum for Youth Investment, said the conversation about evidence is an important one for the field to understand as funders and lawmakers across the country are more eager than ever for information about what works.
The goal should be to move from talking about proven programs to picking apart why they work and applying those findings in communities, she said.
Juvenile justice is a rich field for evidence-based practices because fears of youth crime in the 1980s and early 1990s spurred research into how to intervene in the lives of young people. That research led to the development of programs now considered the gold standard but sidelined other programs that lacked the resources to rigorously test their ideas, Bilchik said.
However, a 2010 meta-analysis led by Mark Lipsey and released through the center helped broaden the field’s ideas about evidence and how to apply it by showing how to evaluate programs that hadn’t been through thorough testing.
“We moved from this mindset that you have to be one of these programs that are franchised — and those are good programs, but not to the exclusion of others,” Bilchik said.