OP-ED: Evaluation Must Join Innovation When Using Evidence-Based Practices

Bill Baccaglini
Sylvia Rowlands

For those of us who have been at the forefront of the adoption of evidence-based practices (EBPs) for treating at-risk youth and families, the debate among professionals in this field has taken an interesting turn. With EBPs yielding excellent results in a variety of environments and across cultural settings, it now seems as if the composition of the model itself has become the focal point of debate.

Why not treat EBPs as a base, some argue, and adapt them to account for community, cultural or other population differences? “We know our population,” the argument seems to go, “and shouldn’t view EBPs as a one-size-fits-all solution.” While that argument sounds reasonable, it presents a number of issues and has the potential to undermine the credibility of the EBP movement through subjectivity, opacity and inconsistency.

Even though EBPs are still at an early stage in their national adoption, there are already compelling data. In programs certified by Blueprints for Healthy Youth Development at the University of Colorado Boulder, outcomes include:

  • 25-70% reductions in rates of re-arrest
  • 47-64% reductions in out-of-home placements
  • Youth 46% less likely to begin using illegal drugs
  • Youth 27% less likely to begin using alcohol
  • Youth 33% less likely to hit someone

At The New York Foundling, one of New York’s oldest and largest child welfare agencies, our BlueSky Program has seen similar outcomes. Over a one-year period, compared to the New York State Office of Children and Family Services’ traditional programs, we saw:

  • 33% fewer arrests
  • 41% fewer felony arrests
  • Diversion of 223 youth from placements that cost an average of $210,000 each

Obviously, findings to date are based on short periods of time, since most EBP programs have been implemented only recently. But the dramatic results are impossible to ignore. With government budgets increasingly tight, the rationale behind these interventions is compelling: The BlueSky program alone has produced not only a remarkable improvement in outcomes, but savings to New York’s taxpayers of more than $35 million.

With these strong initial findings and increasing buy-in from professionals in the field, many policymakers and providers are considering the application of EBPs to their local needs. In doing so, they are exploring whether modifications are necessary and, if so, how far those modifications should go.

Is there anything wrong with that? Isn’t innovation a good thing?

Unfortunately, allowing widespread tinkering with EBPs would be tantamount to allowing a physician to take an FDA-approved drug therapy and “adapt” it — taking components out, putting different components in, changing the dosage and then reporting only the positive outcomes without peer review. Some might call such practices innovations. But for the most part, a physician who did that would be widely discredited among serious professionals.

At The New York Foundling, we’ve had to deal with these issues ourselves. We’ve worked closely with EBP developers and have seen significant initial success with a modified program that applies the Functional Family Therapy (FFT) model to the child welfare system. We’ve learned a number of important lessons through this process.

First of all, while ideas for modifications are welcome and encouraged, modifications should be made only in collaboration with the EBP’s developers. When any program is ready to be implemented, there must be an evaluation process in place that includes a time frame for reporting results. The evaluation must include every aspect of the process, so we can recognize when one part works and another doesn’t. It must focus on outcomes, and everything must be reported — the good with the bad. Evaluations should be equally rigorous whether the program is a modification of an established EBP or a newly developed local initiative.

Government agencies that fund these programs should require these transparent evaluations and in-depth reporting of data, so that they and other professionals can review them, comment on them and learn from them. If government agencies require it and information is transparent and available to all, it will eventually lead to greater uniformity in standards that everyone in this field can measure their programs against.

Critics may argue that one size doesn’t fit all — that they need to be able to adjust the protocols to meet their localized needs. The Foundling’s experience in a very diverse marketplace belies that argument. We see proven EBP protocols working effectively across a wide variety of demographic and cultural contexts.

Others say that EBPs are no substitute for clinical relationships. They are correct — they are not a substitute. In fact, these interventions are among the tools to be used within that clinical relationship. Most EBPs recognize the family as the client and provide the clinician the skills to help youth and their families by repairing family bonds, changing family interactions and improving relationships.

But the bottom line, if one is arguing for modifications in a particular EBP program, is this: Rigorous, transparent evaluation is the key. Even if modifications are developed in partnership with the original developers, they must be subjected to careful and detailed scrutiny. Without that, we may find ourselves with a multitude of programs — and new ones hitting the marketplace constantly — all calling themselves EBPs and using anecdotal, unscientific evidence to market themselves as new “cures.”

The last thing we want to do is hinder innovation. We should all be in favor of it — in youth and family therapies, just as we are in medical and drug therapies. We need to allow for it both within the EBP context and independent of it, as we conceive of new improvements to our current treatment methods. No one should stand in the way of locally developed program models designed to meet existing unmet needs.

However, these programs must specify for whom they work and under what conditions, and include an evaluation plan with specified time frames. Only when innovation is evaluated under generally accepted standards can it be built upon and broadly embraced.

Bill Baccaglini is president and CEO of The New York Foundling, and Sylvia Rowlands is senior vice president for evidence-based programs of The New York Foundling.
