8a: Building a Plan for Implementation
EBDM Starter Kit
Navigating the Roadmap
Activity 8: Develop a strategic action plan for implementation.
Considerations for Developing Harm Reduction Goals and Objectives
- How will the jurisdiction benefit as a whole (i.e., what are the intended harm reduction outcomes)?
- How will the criminal justice system benefit from movement to an EBDM-based system?
- What is an EBDM system intended to achieve or produce?
- What significant changes do you expect from the implementation of EBDM in terms of system operation?
- How will the costs to operate the system change?
- How will case processing change at point of entry into the system, during the adjudication process, post-adjudication, and/or at point of release?
- How will those in the system (i.e., victims, witnesses, and defendants) view the process?
- How will EBDM impact those working in the system?
- What types of information will convince you and others (including the public and funders) that the system is operating at an optimum level?
- What types of information will convince you and others that the system is achieving what it is intended to achieve?
For more information, see 6a: Measuring Your Performance
Introduction
During the EBDM Initiative, your policy team has undertaken a number of preparation activities for implementing the Framework. These activities include
- building a collaborative, multidisciplinary policy team;
- preparing the team members’ individual agencies for change;
- understanding current practice within each agency and across the system;
- understanding and increasing your jurisdiction’s capacity to implement evidence-based practices;
- developing logic models;
- establishing common harm and risk reduction outcomes and performance measures (and displaying them on a system scorecard); and
- developing plans for engaging broader support for the Initiative.
The culmination of these preparations leads your team to this final, but critically important, step: to develop a strategic action plan for implementation.
Purpose
To create a clear, specific, measurable plan for implementing the policy and practice changes that the policy team agrees will advance evidence-based decision making in your jurisdiction and that will support the achievement of the justice system’s vision and goals.
Participants
All policy team members should be involved to some extent in the development of your implementation plan, particularly in the development of harm reduction goals and objectives. After these decisions have been reached, staff internal to the agency(ies)—usually with some background in conceptualizing, planning, and implementing policy or program initiatives—and/or outside experts can assist in the development of the implementation plan, with guidance and input from the policy team.
Instructions
A number of preparation and self-assessment activities must occur simultaneously to lay the groundwork for implementing the EBDM Framework in a jurisdiction. These activities include developing harm reduction goals, objectives, and action steps; developing a systemwide logic model; drafting a communications strategy for gaining the buy-in of a broader set of stakeholders or the public; and creating a systemwide scorecard.1 While every team will not develop its plan in the same way, the following steps are important to developing a comprehensive implementation plan:
- Discuss and agree upon your team’s harm reduction goals, if your team has not come to some agreement on this already.2
- Develop logic model(s). At a minimum, the team should develop a systemwide logic model that clearly outlines the path to achieving the team’s top harm reduction goals.3 This activity will assist the team in developing many of the pieces of its implementation plan.
- Develop objectives (which should be represented as outcomes in your logic model). Remember, while goals represent the desired end results of the system, objectives define the short-term indicators that demonstrate progress toward goal attainment and describe who or what will change, by how much, and over what period of time.
- Define the action steps that will be necessary to achieve your harm reduction goals. (The major action steps can be found in the activities section of the logic model.)
- Determine who from your jurisdiction will take the lead and who will need to be involved in these steps.
- Determine the timing and sequence of these steps.
- Consider any potential barriers to your work plan and strategize about how your team will overcome them. Barriers can be determined by considering the contextual conditions (i.e., the environment in which the local justice system operates, including political, economic, social, and cultural factors) that your team identified in your logic model.
- Discuss how your team would like to engage a broader set of stakeholders and/or the public in EBDM, if you have not done so already.4 Ensure that any agreements regarding this strategy are reflected in your work plan; these may encompass goals, objectives, and/or action steps, as appropriate.
The chart below displays these steps and indicates how these multiple activities might fit together.
Possible Steps to Developing an Implementation Work Plan

Step 1: Develop Harm Reduction Goals
Develop the long-term harm reduction goals your team seeks to achieve. Your harm reduction goals are recorded on your system scorecard. Harm Reduction Goal Example: Increasing the success rate of individuals who become involved in the justice system from the 2010 rate of x% to y% by 2014.

Step 2: Develop a Logic Model
After recording your harm reduction goals as the impacts on the logic model, follow 5a: Building Logic Models to determine the remaining components of your logic model. Once your logic model is complete, you can use the information it contains to build the rest of your work plan and scorecard.

Step 3: Develop Objectives
Objectives define the short-term indicators that demonstrate progress toward attaining your harm reduction goals and describe who or what will change, by how much, and over what period of time. Your objectives are the short-term outcomes on your logic model. Objective Example: Decrease of X% in low risk defendants held in jail awaiting adjudication within X months.

Step 4: Develop Action Steps
Action steps are the “activities” on the logic model—the steps that must be taken to reach the objectives that will lead to your harm reduction goal. Since only major activities are likely included on the logic model, expand these—if and as needed—on your work plan to reflect all of the planned action steps. Include as an action step on the work plan the development of agency-level logic models for all agencies significantly involved in the achievement of the objectives. Action Step Example: Train pretrial staff on the use of an assessment tool.

Step 5: Determine Who Is Responsible/Involved
Determine the person(s) responsible for accomplishing each action item, the person(s) responsible for decision making, needs related to resource allocation, and coordination with other entities. Record these assignments on the work plan.

Step 6: Determine Timing and Sequencing
Define the timing and sequencing of the action steps. Record this information on the work plan.

Step 7: Recognize Potential Barriers to Implementation
Consider the contextual conditions in your logic model and describe the potential barriers to implementation and strategies for addressing these barriers. Record these on the work plan.

Step 8: Develop a Communications Strategy
If your work plan does not already include engaging new stakeholders, increasing support and engagement from the community, or communicating the jurisdiction’s harm reduction goals to the public, develop a strategy for doing so. Include it as an objective with action steps on the work plan. Refer to 7a: Developing a Communications Strategy.
A template of a work plan is provided in the Appendix. It illustrates how the multiple elements of the work plan might be displayed in chart form.
Tips
- It may not be possible to forecast specific steps for activities that will occur in later months; instead, develop in greater detail the more immediate tasks (i.e., those to be accomplished in the next 3–4 months).
- Teams may find that creating a visual timeline, separate from the work plan, is helpful in organizing the many anticipated tasks. An example of a timeline is provided.
- If certain baseline data is not available, make sure to include in your work plan the anticipated steps your team will need to take to collect it.
- Teams should revisit their implementation plans regularly to make revisions and adjustments as needed.
1 See 6b: Developing a Systemwide Scorecard.
2 For more detailed information on this process, see the first step in 6a: Measuring Your Performance.
3 See 5a: Building Logic Models.
4 See 7a: Developing a Communications Strategy.
Examples
Mesa County, Colorado, Work Plan for Implementation (Excerpt from Full Document)
Eau Claire, Wisconsin, Timeline for Implementation
Additional Resources and Readings
CSOM. (2007). Enhancing the management of adult and juvenile sex offenders: A handbook for policymakers and practitioners. Retrieved from http://www.csom.org/pubs/CSOM_handbook.pdf
CEPP. (2005). Collaboration: A training curriculum to enhance the effectiveness of criminal justice teams. Retrieved from www.collaborativejustice.org/docs/2005 Collaboration Curriculum.pdf
McGarry, P., & Ney, B. (2006). Getting it right: Collaborative problem solving for criminal justice. (NIC Accession No. 019834). Retrieved from http://nicic.gov/Downloads/PDF/Library/019834.pdf
Appendix
Activity 8: Develop a strategic action plan for implementation
The culmination of the process of building a justice system based on evidence-based decisions is the creation of a clear, specific, measurable plan for implementing the policy and practice changes that the policy team has agreed upon. This plan will serve as the new roadmap for the team as it begins to implement its steps to harm and risk reduction.
Elements of an EBDM justice system include a clear, specific, measurable implementation plan.
7a: Developing a Communications Strategy; Building Stakeholder and Community Engagement
Navigating the Roadmap
Activity 7: Engage/gain the support of the community.
Introduction
The EBDM Initiative seeks to create a set of conditions under which harm and risk reduction are realized to their true potential. Agreement on a systemwide vision and methods to assess its achievement, collaboration at the policy level, and careful analysis and application of the research in ways that ensure evidence-based decision making and practice are all important but insufficient to the achievement of desired outcomes. Without the understanding and “buy-in” of stakeholders—both within the system and, as importantly, in the public—change of the order described in the Framework is unlikely to take root and flourish.
Purpose
Broadly, the purpose of developing a communications strategy is to facilitate understanding of, and support for, evidence-based decision making policies and approaches. The specific aims of a jurisdiction’s communications plan include the following:
- To raise awareness and educate stakeholders about the value of evidence-based decision making as an enhancement to existing justice system practices.
- To engage interest in, and support for, such an approach among those who oversee, work within, interact with, and/or are affected by the local criminal justice system.
- To engage stakeholders in a purposeful way in the identification and/or implementation of harm reduction strategies that will support healthier communities.
Participants
This document was developed for EBDM policy teams (and/or their work groups) to advance their efforts to engage stakeholders—both internal and external to the justice system—in the EBDM Initiative and in jurisdictions’ broad harm reduction goals.
Instructions
To begin, consider the following questions to ensure a thorough understanding of the place from which your communications planning effort is starting:
Adopting a Consistent Message
While Milwaukee County, Wisconsin’s communication plan involves outreach to a variety of audiences, including business leaders, citizens, elected officials, educators, and the media, a consistent message is communicated:
Our commitment to the discipline of EBDM will enable us to hold offenders accountable, reduce the overall crime rate and recidivism, and give taxpayers a better return on the dollars they invest in criminal justice.
- Who are the audiences you are trying to reach? Consider those within the local and perhaps state justice system (e.g., policymakers, supervisors, and/or staff) and audiences external to the justice system (e.g., community leaders, the general public).
- What information are they currently receiving?
- Who communicates with these audiences regarding justice-system related matters in an official communications capacity (e.g., public information officers) and/or as part of their role (e.g., chiefs of police or district attorneys conduct routine roundtables with civic groups; probation officers and detectives are members of ad hoc public education committees that educate communities on offender reentry issues)?
- For each audience identified, think about (and perhaps create a matrix that identifies) the following:
- What are the one or two primary ways to reach each audience (e.g., newspaper article, radio broadcast, speech, on the Web)?
- What do you want each audience to know?
- What is their current base of knowledge—that is, where are you starting from? Is this a well-informed audience?
- What is the audience’s perspective? For example, does this audience have a positive viewpoint on the topics you want to discuss?
- Who is best positioned to communicate with this audience, and how?
- In what ways are current communication efforts working effectively?
- In what ways could or should these efforts be expanded?
- Based upon the answers to these questions, and after reviewing the “Tips” section below, build an action plan for your communications strategy. It may include specific, one-time events for specific audiences (e.g., a presentation to the business leaders’ quarterly network meeting; a briefing of justice system professional staff on the EBDM Initiative) or a series of events for various audiences (e.g., a series of briefings over the course of six months with three specifically identified local journalists; a series of training events on specific topics for a multidisciplinary group of professional staff). It will likely include a mix of long-term, big-picture topics (e.g., how the justice system operates, strategic action plans being developed or underway) and specific event-related strategies (e.g., highlighting the story of an offender who successfully completed supervision, launching a new program or policy approach).
Tips
Grant County, Indiana’s Core Message
EBDM is the thoughtful stewardship of the public’s money and trust in operating an efficient and effective criminal justice system.
One less offender.
One less victim.
- Consider crafting a set of communications messages. Possible examples include the following:
- Our communities can do better (than a 67% failure rate); we can create safer communities; we can reduce harm; we can have fewer crimes and fewer victims.
- A local criminal justice system informed by research can point the way because it places the highest premium on outcomes, on the individual and institutional actions that produce them, and on the careful, ongoing measurement of them.
- An evidence-based approach should not replace discretion and judgment, but it can inform and guide that judgment to enhance the likelihood that desired outcomes will be achieved.
- A common local vision, internal collaboration, interagency partnership, public involvement, and shared responsibility are indispensable building blocks for alleviating community harm.
- Consider developing an identity. One resource is provided by the national EBDM Initiative team; jurisdictions may choose to adopt this identity or to develop their own.
- The national EBDM Initiative team created an “interactive” graphic that encourages decision makers to “complete” its concept with one or more words capturing the forms of harm they, their staffs, and their communities most desire to reduce (i.e., the phrase “One less _____” accompanied by “A strategy for safer communities” as its tagline). The graphic was designed to stand alone as a deliberately incomplete thought to pique curiosity or, for particular audiences, to be filled in with words such as “victim,” “crime,” “inmate,” “offender,” “dollar spent,” “officer injured,” or “court case.”
- Consider developing communications tools and materials, for example:
- a scripted “elevator speech” incorporating the key messages. (An “elevator speech” is an overview of an idea for a product, service, or project. The name reflects the fact that it should be possible to deliver the speech in the time it would take for an elevator ride, that is, approximately 30 seconds.)
- local criminal- and victim-focused case stories that have strong emotional impact.
- well-designed, appealing pamphlets that replicate the elevator speech in bullet form and include human interest stories.
- video clips by local champions that illuminate the aspirations of local policymakers, specific approaches or challenges, etc.
- a presentation of the overall project to be used at stakeholders’ meetings (i.e., a core set of slides augmented by stakeholder-specific slides and jurisdictional findings from the assessment phase). The EBDM Initiative team has developed a “core training curriculum,” available on SharePoint, that can be tailored for local purposes.
- stakeholder-specific material for staff on the elements of the Framework and the jurisdiction’s implementation plan that is applicable to their role in the justice system.
- training materials for line staff that tie specific policy and procedure changes to specific research supporting such changes.
- print communications (e.g., posters, banners, brochures, progress reports) directed at staff and displayed in offices. Examples include a “One Less” brochure or “One Less” posters that feature the name and photo of EBDM policy team members and their “One Less” aspirations. (See an example below.)
- promotional items and giveaways for staff (e.g., t-shirts, coffee mugs, and/or pens) that encourage the Initiative and remind and excite staff about change.
- Consider conducting a public opinion survey and/or focus groups.
National Survey on EBDM
In Phase I of the EBDM Initiative, the national Initiative team worked with Zogby International to develop and administer a national public opinion survey.
This tested survey offers a model that could be replicated at the local level and a set of findings against which local results can be compared. If the data align with the findings of the national poll, they will provide the impetus, as well as political cover, for difficult decisions. Even if the data do not align with national findings, they will become an integral part of the development of local messages.
A fact sheet that summarizes the findings of the national public opinion survey can be found here: fact sheet
- Conduct a public opinion survey that measures citizens’ opinions on the justice system, its purpose, and the extent to which the system should rely on research, and citizens’ satisfaction with current justice system outcomes. For a list of questions that were used in a national survey, see the Appendix.
- Using subject matter experts, convene focus groups with the general public to better understand their views on matters related to the justice system and evidence-based decision making and/or as a means to effectively communicate with and engage citizens on these matters.
- Consider the development of a deliberate and purposeful public communications strategy using the media and other means.
- Prepare news releases and Op-Ed pieces; talking points for speeches at local gatherings, professional conferences, radio talk or call-in shows, news conferences, one-on-one meetings and open houses at stakeholders’ offices, newspaper editorial board meetings, etc.; public service announcements; and print communications campaigns (e.g., posters, brochures, press kits, web-based reports).
- Understand the research and collect data.
- Examine the research on effective communication strategies and campaigns to determine how this body of knowledge can best inform and shape your own efforts.
- Collect qualitative and quantitative information to determine the extent to which your communications tools and methods accomplish their intended purpose. One qualitative method of measurement would be to conduct a series of focus groups with local system stakeholders and the general public. Quantitative methods of measurement would involve pre- and post-testing of training modules, pre- and post-measurements of staff attitudes, or a fuller use of local public opinion polling. One possible strategy is to conduct a baseline poll at the launch of the communications strategy and then a second poll at a specified date in the future to measure change in both public and staff attitudes.
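The pre- and post-polling idea above can be sketched in a few lines. The figures, question framing, and function name below are purely illustrative assumptions, not data or methods from the Initiative:

```python
# Illustrative sketch only: all figures and labels are hypothetical,
# not actual EBDM Initiative polling data.

def percentage_point_change(baseline_pct: float, followup_pct: float) -> float:
    """Change, in percentage points, between two poll results."""
    return followup_pct - baseline_pct

# Hypothetical share of respondents agreeing that the justice system
# should rely on research to guide its decisions.
baseline = 54.0   # % agreeing at launch of the communications strategy
followup = 61.5   # % agreeing at the follow-up poll

change = percentage_point_change(baseline, followup)
print(f"Change in support: {change:+.1f} percentage points")
# prints "Change in support: +7.5 percentage points"
```

The same simple comparison applies to pre- and post-measurements of staff attitudes or training modules; the point is that a baseline measurement must exist before change can be quantified.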
Examples
Charlottesville-Albemarle County, Virginia, EBDM Initiative Flyer
Eau Claire County, Wisconsin, Article in Rotary Club Newsletter, January 10, 2011
Charlottesville-Albemarle County, Virginia, “What One Less Means to Me” Marketing Tool
Ramsey County, Minnesota, EBDM Brochure
Additional Resources and Readings
See http://media.csosa.gov for an example of one jurisdiction’s efforts to communicate with stakeholders.
See http://ebdmoneless.org for a web page your local jurisdiction can link to, build upon, and/or replicate.
Appendix
EBDM Public Opinion Survey Questions
Activity 7: Engage and gain the support of a broader set of stakeholders and the community
Building awareness and understanding of EBDM, and soliciting support for it from a broader set of stakeholders and from the community, is critical to achieving successful systemwide change. A communications strategy is therefore essential to gaining buy-in to the harm reduction outcomes that a policy team hopes to achieve.
Elements of an EBDM justice system include a strategy for engaging stakeholders within the justice system and those in the broader community in meaningful dialogue about the vision/goals of the justice system, the state of knowledge and research, and the local system’s performance in achieving these goals.
6b: Developing a Systemwide Scorecard
Navigating the Roadmap
Activity 6: Establish performance measures/outcomes/system scorecard.
Introduction
Historically, criminal justice agencies and their allied partners have developed independent methods to describe and measure their performance. Police agencies report on crime trends, arrests made, and the elapsed time between calls to dispatch and the arrival of patrol cars on the scene of a crime, for instance; courts report on case processing, fines imposed and collected, and cases settled by plea, bench, and jury trial; probation agencies report on numbers of individuals supervised, assessments conducted, and cases closed by successful termination. Rarely if ever do justice systems report on their progress in achieving their harm reduction goals and objectives. Examples of systemwide harm reduction goals and objectives include (but are not limited to)
- reduced justice system costs as a result of a combination of activities that reduce the demand for jail beds and correctional staff, and the time associated with judicial processing. These activities may include conducting pretrial screening and diversion at police substations, establishing alternative responses to the acutely mentally ill, and addressing probation violators administratively rather than through the court system; and
- increases in the success rate of offenders as a result of improving adherence to the risk principle at the arrest, pretrial, plea, sentencing, and supervision decision points; “matching” offenders to appropriate services (e.g., by prosecutors and defenders in plea negotiations, by judges at sentencing, by jailers operating risk reduction programs, and by probation officers making referrals to community treatment programs); and employing professional skills to positively influence defendant and offender behavior.
Purpose
The purpose of developing a systemwide scorecard is to “measure what matters.” While the measurement of “activities” (e.g., pre-plea assessments of defendants, quality assurance to determine whether risk tools are completed properly) and “outputs” (e.g., percent of professionals trained in the use of a new tool or methodology, percent of sentence conditions informed by risk/needs assessments) in the system logic model is important, these are means to an end, not the end themselves. Articulating the ends we seek to achieve—and measuring those—focuses attention on the work that is critical to achieving a jurisdiction’s vision of the justice system. It also equips leaders with statements of intent they can use to clearly communicate with community members and other stakeholders about the purposes and goals of the justice system.
Participants
This document was developed to assist EBDM policy teams in identifying the harm reduction goals they seek to achieve through their policy change work. All policy team members should be involved to some extent in the development of your harm reduction goals and scorecard.
Instructions
- Working as a team, identify the evidence-based decision making changes that are under consideration.1 Using the logic model template, identify the “impacts” you want to achieve through these policy change initiatives. These impacts are your jurisdiction’s harm reduction goals.2
- List the goals on a flip chart. As a team, determine whether you have consensus around the importance of each goal. If not, work to achieve consensus.
- Examine the examples of scorecards contained in this kit. As a team, agree to adopt a design for your scorecard, either by selecting one of the templates provided or by creating your own. Include your “identity” on your scorecard.3
- Next, discuss and agree with your team how you will measure the system’s performance in regard to each of these harm reduction goals. These discussions may be lengthy and may require expert consultation from those within your agencies and system—particularly your research, planning, and information technology staff—and perhaps outside expertise.4
- Once the methods to collect and assess performance on your harm reduction goals are determined, be sure to collect baseline data.5 Baseline data indicates your “starting place,” or basis of comparison.
- Finally, discuss how and when the scorecard data will be collected and used. Be clear and specific about this; there is no sense in establishing goals that will not be measured or in collecting data that will not be analyzed and examined for its implications. Perhaps the policy team will task specific individuals with collecting and analyzing performance measurement data and reporting this information back to the policy team on a quarterly basis. Results may be included in agencies’ annual reports or in periodic press briefings. Most importantly, if reported results are less than expected, it is critical that the policy team reexamine the conditions, assumptions, resources, activities, outcomes, and outputs related to the implemented policy and practice changes to determine why the expected results have not occurred, and that the team make appropriate modifications so that results do, in fact, improve over time.
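As a sketch of how a team might check quarterly scorecard results against its targets, consider the following. Every goal name, number, and the “midpoint” rule of thumb are hypothetical illustrations, not values or methods prescribed by the kit:

```python
# Hypothetical scorecard entries; all names and numbers are illustrative.
scorecard = [
    # For a success rate, higher is better; for jail bed days, lower is better.
    {"goal": "Pretrial success rate (%)", "baseline": 70.0, "target": 80.0,
     "current": 74.5, "lower_is_better": False},
    {"goal": "Jail bed days used by low-risk defendants", "baseline": 12000,
     "target": 9000, "current": 11500, "lower_is_better": True},
]

def on_track(item):
    """A goal counts as on track if current performance has passed the
    midpoint between baseline and target (a simple rule of thumb)."""
    midpoint = (item["baseline"] + item["target"]) / 2
    if item["lower_is_better"]:
        return item["current"] <= midpoint
    return item["current"] >= midpoint

for item in scorecard:
    status = "on track" if on_track(item) else "needs review"
    print(f'{item["goal"]}: baseline {item["baseline"]}, '
          f'current {item["current"]}, target {item["target"]} -> {status}')
```

Whatever rule a team adopts, the essential points from the steps above remain: collect the baseline first, and route “needs review” results back to the policy team for reexamination of the underlying assumptions and activities.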
Tips
- Don’t attempt to develop a lengthy list of scorecard items. Agreeing on two, three, or four significant goals that have the full team’s support is superior to a laundry list of less significant accomplishments. A concise, fully supported scorecard will also serve your communications strategy better.
- Be clear regarding your definitions for key words. For example, “recidivism” is often defined in multiple ways. Refer to the starter kit on Measuring Your Performance for a list of definitions that you might draw from, or at least use as a starting place for developing your own. Whether you use the provided definitions or definitions of your own making does not matter; what matters is that you are clear about what you mean by these terms and that your team agrees on these definitions.
- Follow the SMART principle when developing goals for your scorecard:
- Be Specific
- Make them Measurable (i.e., quantifiable)
- Be Action-oriented
- Be Realistic
- Articulate a Time in which the change will occur
- When you’ve completed your list of harm reduction goals/scorecard items, it should elicit a reaction of satisfaction. Ask your team, “Would you feel proud to have been a part of the achievement of these goals?” When everyone responds in the affirmative, chances are you’ve succeeded in the development of your scorecard.
1 For more information, see 3d: Gathering Baseline Data.
2 See 5a: Building Logic Models and 6a: Measuring Your Performance.
3 For more on developing an identity, see 7a: Developing a Communications Strategy; Building Stakeholder and Community Engagement.
4 See 6a: Measuring Your Performance.
5 See 3d: Gathering Baseline Data.
Examples
Eau Claire County, Wisconsin, System Scorecard
Charlottesville-Albemarle County, Virginia, System Scorecard
Mesa County, Colorado, Systemwide Scorecard
Additional Resources and Readings
NIC. (2010). Achieving, measuring, and maintaining harm reduction and advancing community wellness. A Framework for Evidence-Based Decision Making in Local Criminal Justice Systems (p. 22). Retrieved from http://cepp.com/wp-content/uploads/2015/12/A-framework-for-evidence-based-decision-making-in-local-criminal-justice-systems.pdf
Minnesota Department of Administration. (2002). Minnesota milestones: Measures that matter. Retrieved from http://www.mnplan.state.mn.us/mm/
6a: Measuring Your Performance
Navigating the Roadmap
Activity 6: Establish performance measures/outcomes/system scorecard.
Introduction
Performance measures are tools for managing the performance of an agency, organization, or even a system. Performance measures provide benchmarks about whether or not optimum performance by the criminal justice system (and the entities within it) is being realized and, more importantly, whether the system is achieving what it intends to achieve under the evidence-based decision making (EBDM) framework. The use of performance measures provides a way to understand quantitatively the business processes, products, and services in the justice system. In a nutshell, performance measures help inform the decision making process by ensuring that decisions are based on clearly articulated and objective indicators. Moreover, undertaking and institutionalizing performance measurement throughout the criminal justice system allows policy discussions and decisions to be “data-driven,” which in turn helps build the foundation for additional local evidence about what works.
In general, performance measures for the justice system fall into four categories:
- Effectiveness and the extent to which the intended outcomes are being produced
- Efficiency measures that demonstrate whether maximum outcomes are being produced at minimum cost
- Measures of satisfaction and quality to assess if the right processes are being used and the degree to which there is “satisfaction” with the processes1
- Timeliness in terms of the extent to which activities or processes occur within predefined time limits
Performance measurement is often confused with program evaluation because both attempt to capture quantitative information about desired goals and outcomes. Some key differences should be noted. First, program evaluation involves the use of specific research methodologies to answer select questions about the impact of a program. Performance measurement, on the other hand, is simply the articulation of performance targets and the collection/analysis of data related to these targets. Second, program evaluation is designed to establish causal relationships between activities and observed changes while taking into account other factors that may have contributed to or caused the changes. On the other hand, performance measurement simply provides a description of a change, but cannot be used to demonstrate causality. Third, program evaluations are usually one-time studies of activities and outcomes in a set period of time, whereas performance measurement is an ongoing process.
As you begin the process of defining performance measures, there are seven rules that need to be kept in mind. Performance measures should be
- Logical and related to goals
- Easy to understand
- Monitored regularly
- Readily accessible
- Based on specific benchmarks
- Quantified and measurable
- Defined with specific performance targets
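Purely as an illustration of how the seven rules above might be made concrete, a team that scripts its tracking could record each measure as a small structured object; the class, field names, and numbers below are invented for this sketch and are not part of the kit.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    goal: str          # the goal the measure is logically related to
    indicator: str     # plain-language description (easy to understand)
    baseline: float    # benchmark against which change is assessed
    target: float      # specific, quantified performance target
    frequency: str     # how often the measure is monitored

    def target_met(self, actual: float, higher_is_better: bool = True) -> bool:
        """Compare an observed value against the stated target."""
        return actual >= self.target if higher_is_better else actual <= self.target

# Invented example: recidivism tracked quarterly against a 20% ceiling.
recidivism = PerformanceMeasure(
    goal="Reduce reoffense",
    indicator="% of offenders who commit new offenses within three years",
    baseline=26.0,
    target=20.0,
    frequency="quarterly",
)
print(recidivism.target_met(18.5, higher_is_better=False))  # True: 18.5% <= 20%
```

The point of the structure is only that every rule (goal linkage, benchmark, target, monitoring frequency) has a concrete slot, so a measure missing one of them is visibly incomplete.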
Purpose
This starter kit is designed to help jurisdictions understand performance measures and to provide a guide for the development and implementation of performance measures systemwide. Information about the key steps in performance measurement is provided in addition to sample performance measures. It is important to note, however, that performance measures should be locally defined and driven; as such, the sample measures may or may not be relevant in a specific jurisdiction, depending on the focus of the local initiative. Finally, tips are offered for the implementation and use of performance measures.
Using Information Dashboards To Make Law Enforcement Decisions
Law enforcement has long understood the importance of routine performance measurement. By using the “dashboard” approach—that is, putting a spotlight on key information on a routine basis—law enforcement agencies around the country are using data to assess performance and adjust activities based on key outcome measures.
Police Chief Bence Hoyle, of Cornelius, North Carolina, states that such dashboards should
- identify and disseminate information about criminal activity to facilitate rapid intervention;
- identify and disseminate information about crime to assist in long- and short-term strategic solutions;
- allow agencies to research the key incident data patterns, such as modus operandi, repeat offender locations, or other related information, such as traffic stops near the scene, so suspects can quickly be identified;
- provide data on the effectiveness of specific tactics, in near real-time, through focused views; and
- support the analysis of workload distribution by shift and geographic area.
For more information, see “Dashboards Help Lift the ‘Fog of Crime'” at http://www.theomegagroup.com/press/articles/dashboards_help_lift_the_fog_of_crime.pdf.
Participants
Development of performance measures should involve a variety of stakeholders. At a minimum, the leadership of the various components of the justice system, along with some line level representatives, should be part of the process. The leadership can provide the broad systemic perspective about how the system should be performing under an EBDM initiative and how each agency/entity within the justice system contributes to overall system performance. The inclusion of line personnel, however, provides a different level of detail and, to some extent, a reality check about how the system is currently performing and what the capacity is for performance. Participants should also include representation from groups that have an interest in the justice system—city/county government budget officers and managers, health/mental health treatment providers, etc. The community and the media can also be important stakeholders to include as, ultimately, it is through these groups that performance is communicated and legitimacy is established. The point is that for performance measures to have validity (not necessarily in the statistical sense), they must be meaningful for others who judge the performance of the system.
Jurisdictions may wish to consider engaging an outside facilitator with experience in performance measurement to provide guidance and assistance through the process. Local universities are an excellent resource for finding this kind of assistance.
Instructions
To develop and implement performance measures, the stakeholders identified above should undertake four key steps:
- Identify the goals and objectives of the system under the EBDM framework.
- Determine what the key indicators of output and outcomes are and what type of data collection will be required.
- Begin the collection and analysis of the performance measures.
- Implement a reporting mechanism for communicating performance to stakeholders.
Detailed guidance for each of these steps is provided below.
The first step in articulating performance measures is to define what is meant by “optimum performance,” i.e., to establish harm reduction goals and objectives for the criminal justice system. Several questions can help focus the discussion on what the jurisdiction hopes to accomplish, such as how the jurisdiction as a whole will benefit, what an EBDM system is intended to achieve or produce, how costs and case processing will change, and what information will convince stakeholders that the system is operating at an optimum level.
The answers to these questions then need to be articulated as quantifiable goals and objectives. It is important to understand that goals and objectives are not synonymous. Goals represent the desired end result of the system. Objectives define the short-term indicators that demonstrate progress toward goal attainment and that describe who or what will change, by how much, and over what period of time. For example, broadly stated, one goal might be that the recidivism rate be no higher than 20%. A corresponding objective might be a 5% annual decrease in the percentage of offenders who commit new offenses in a three-year period.
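As a worked illustration of the goal/objective distinction, the goal sets the end state while the objective sets the annual rate of progress toward it. The cohort counts below are invented for the example.

```python
# Illustrative arithmetic only; all counts are invented.
released = 1200    # offenders released in the cohort
reoffended = 312   # of those, committed a new offense within three years

recidivism_rate = 100 * reoffended / released
print(f"recidivism rate: {recidivism_rate:.1f}%")  # 26.0%

# Objective: a 5% annual (relative) decrease in that percentage.
# Project the trajectory toward the 20% goal:
rate, years = recidivism_rate, 0
while rate > 20.0:
    rate *= 0.95   # 5% relative decrease per year
    years += 1
print(f"goal of <=20% reached in {years} years")  # 6 years at these numbers
```

Running the projection makes clear whether the objective, if met every year, actually reaches the goal on a realistic timeline, which is exactly the consistency check the SMART principle asks for.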
Another important consideration in defining goals and objectives is adherence to the SMART principle: they should be specific, measurable, action-oriented, realistic, and time-bound.
Once goals and objectives have been defined, the stakeholders should compare them to the impacts and outcomes identified in the system-level logic model. Each goal and objective should align with the intended impacts and outcomes articulated in the logic model. Although there does not need to be complete overlap, there should be no contradictions.
The second step in defining performance measures encompasses a number of activities: determining the key indicator data for each goal and objective; identifying where, or if, the data exist and, if not, whether the capacity exists for capturing them; refining the list of performance measures to a set of key indicators; and establishing performance targets.
Well-articulated goals and objectives should lend themselves nicely to the identification of key indicator data. Using the worksheet in Appendix 1, jurisdictions will need to “break down” the goals and objectives into specific types of data that can be collected. Using the example from Step 1 above, the table below shows the goal, the objective, and the types of indicator data that are needed to measure performance:
| Goal | Objective | Indicator Data |
| --- | --- | --- |
| Our jurisdiction will have a recidivism rate of less than 20%. | 5% annual decrease in the percentage of offenders who commit new offenses in a three-year period | |
As indicator data are being identified, jurisdictions should note whether the data already exist; if so, they should identify who “owns” the data and, if not, they should determine whether the capacity for obtaining the data exists. Where data are not already being collected, or the capacity to collect them does not exist, consideration should be given to the relative importance of the indicator. This information will help refine the list of performance measures in the next step.
An ideal performance measurement system must be manageable; as such, the number of performance measures for each goal and objective should be limited. Generally, there should be no more than three or four measures per goal or objective, and there may be fewer. Jurisdictions should aim to select the strongest indicators of performance for which data already exist or for which the capacity to collect the data is in place. In refining the list, it is important to test each candidate indicator against the seven rules identified in the introduction: that it is logical and related to goals, easy to understand, able to be monitored regularly, based on readily available data, measurable against a specific benchmark, quantified and measurable, and amenable to specific performance targets.
The question of performance targets is a particularly important one and requires more than a simple “yes/no” answer. As the list of measures is refined, jurisdictions should begin thinking in terms of what the specific performance targets should be. In other words, what is the “magic number” that demonstrates optimum performance? For example, if the intent is to implement pretrial risk assessments in order to decrease jail operating costs, the performance target might be that 90% of release decisions are consistent with assessment results. The logic model may provide some guidance in answering this question.
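One way a team might operationalize this refinement step is to record a yes/no answer to each screening question for every candidate indicator and keep only the candidates that pass all seven. The indicator names and answers below are invented for illustration.

```python
# The seven screening questions, in the order they are applied.
QUESTIONS = (
    "logical and related to goals",
    "easy to understand",
    "can be monitored regularly",
    "data readily available",
    "measurable against a benchmark",
    "quantified and measurable",
    "specific targets can be set",
)

# Invented candidates: a tuple of seven yes/no answers per indicator.
candidates = {
    "% of defendants screened with a pretrial risk tool": (True,) * 7,
    "general sense of community wellbeing": (True, True, False, False, False, False, False),
}

# Keep only indicators that pass every screening question.
kept = [name for name, answers in candidates.items() if all(answers)]
print(kept)
```

A screen like this also documents *why* a candidate was dropped, which is useful when stakeholders later ask why a favorite indicator did not make the final list.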
Because performance measurement is an ongoing process, it is important to have a well-defined data collection plan in place prior to the actual collection of data. As shown in Appendix 2, the plan should specify, for each measure, the data source (the agency/person responsible for collecting the data and, where the data is already being collected, the report or system from which it is drawn) and the frequency of data collection.
Once the data collection plan has been agreed upon by the key stakeholders and the agencies/persons that will be responsible for collecting the data, the jurisdiction should collect baseline data for each performance measure against which progress can later be measured.
It is rare that data in raw form will be sufficient for assessing performance; quantitative analysis is generally needed. This will require basic statistical calculations such as ratios, percentages, percent change, and averages (mean, median, or mode). In some instances, depending on the measures selected, more complex statistics will be necessary and may require the involvement of persons with statistical analysis experience. Employees in city/county manager’s offices, or analysts within criminal justice agencies that have analysis units, may serve as resources. Local universities are also good resources for statistical analyses.
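For teams that script their analyses, the calculations named above can be sketched in a few lines of Python using only the standard library; the quarterly counts are invented for illustration.

```python
from statistics import mean, median

# Invented quarterly counts for one year.
new_filings = [410, 395, 372, 360]   # cases filed per quarter
diverted = [41, 47, 52, 61]          # low risk cases diverted per quarter

# Ratio/percentage: share of filings diverted in the latest quarter.
pct_diverted = 100 * diverted[-1] / new_filings[-1]

# Percent change: first quarter vs. last quarter.
pct_change = 100 * (new_filings[-1] - new_filings[0]) / new_filings[0]

print(f"diverted last quarter: {pct_diverted:.1f}%")   # 16.9%
print(f"change in filings:     {pct_change:.1f}%")     # -12.2%
print(f"mean filings: {mean(new_filings)}, median: {median(new_filings)}")
```

Even these simple figures answer different questions (level, trend, and typical volume), which is why the kit lists them as distinct calculations rather than interchangeable ones.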
Once the performance data is collected and analyzed, it should be reported to stakeholders in a clear and easily understood manner. Although there is no right or wrong way to report data, useful practices include graphic displays (tables, bar charts, pie charts, or line charts) with clear legends and labels; short narrative descriptions to help the audience interpret the data; presentation of both the performance target and the actual score; and context for interpretation, such as why targets were or were not met.
Jurisdictions should also establish a regular mechanism for communicating and discussing performance, with target dates for the release of information. Possible mechanisms include publication of a “scorecard,” “report card,” or “dashboard”; monthly, quarterly, or annual reports; and/or performance meetings with stakeholders.
Sample Measures
The actual performance measures selected by the jurisdiction should reflect the goals and objectives that the stakeholders have identified as part of the EBDM initiative. The following list of possible performance measures is provided for illustrative purposes only:
- XX% of low risk arrestees cited and released
- XX% of defendants screened with a pretrial risk assessment tool
- No more than XX% cases resulting in deviations for pretrial release from risk assessment results
- XX% of jail beds occupied by low risk defendants awaiting adjudication
- XX% of defendants/offenders with low risk assessment scores placed in diversion programs
- Risk assessment information provided to judges in XX% of cases
- XX% of cases in which sentencing conditions align with assessed criminogenic needs
- XX% of offenders placed in interventions specifically addressing assessed criminogenic needs
- XX% of offenders who commit new offenses in a three-year period
- XX% of victims who report satisfaction with the handling of their cases
Step-by-Step Instructions:
- Identify the goals and objectives of the system under the EBDM framework.
- How will the jurisdiction benefit as a whole (i.e., what are the intended harm reduction outcomes)?
- How will the criminal justice system benefit from the movement to an EBDM-based system?
- What is an EBDM system intended to achieve or produce?
- What significant changes does the jurisdiction expect from the implementation of EBDM in terms of system operation?
- How will the costs to operate the system change?
- How will case processing change at point of entry into the system, during the adjudication process, while in corrections, and/or at point of release?
- How will those in the system (victims, witnesses, and defendants) view the process?
- How will EBDM impact those working in the system?
- What types of information will convince you and others (including the public and funders) that the system is operating at an optimum level?
- What types of information will convince you and others that the system is achieving what it is intended to achieve?
- Be Specific.
- Make them Measurable (i.e., quantifiable).
- Be Action-oriented.
- Be Realistic.
- Articulate a Time in which the change will occur.
- Determine what the key indicators of output and outcomes are and what types of data collection will be required
- determining what the key indicator data are for each goal and objective;
- identifying where, or if, the data exist and, if not, whether the capacity exists for capturing the data;
- refining the list of performance measures to represent a set of key indicators; and
- establishing performance targets.
- Is the indicator logical and directly related to goals?
- Is the indicator easy to understand (i.e., would a reasonable person agree that the indicator accurately represents what it is intended to measure)?
- Can the indicator be monitored regularly?
- Is the data necessary for measurement readily available?
- Can the indicator be measured against a specific benchmark (i.e., is there a baseline against which performance can be assessed)?
- Is the performance indicator quantified and measurable?
- Can specific performance targets be set for the indicator in question?
- Begin the collection and analysis of the performance measures
- data source: the name of the agency/person responsible for collecting the data and, if the data is already being collected, the name of the report or system from which the data is drawn; and
- frequency of data collection: how often the data will be collected.
- Implement a reporting mechanism for communicating performance to stakeholders
- Whenever possible, use graphic displays such as tables, bar charts, pie charts, or line charts.
- In graphic displays, provide legends and labels to clearly identify the information.
- Take care not to present too much information in a single graphic display.
- Use short narrative descriptions to help the audience interpret the data.
- Present both the performance measure (the target) (e.g., risk assessments provided to judges in 90% of cases) and the actual score (risk assessments provided to judges in 75% of cases).
- Provide context for the interpretation that might include discussion of why performance targets were or were not met, how the current performance period compares to previous performance periods, or what recommendations for performance improvement can be made.
- publication of a “scorecard,” “report card,” or “dashboard”;2
- monthly, quarterly, or annual reports; and/or
- performance meetings with stakeholders.
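As a minimal sketch of the reporting guidance above (present both the target and the actual score, one line per measure), a plain-text scorecard might be generated as follows; the measures and numbers are illustrative only.

```python
# Invented measures: (name, target %, actual %).
measures = [
    ("Risk assessments provided to judges", 90.0, 75.0),
    ("Release decisions consistent with assessment", 90.0, 92.5),
]

# Print a header, then one line per measure showing target, actual, and status.
print(f"{'Measure':<46}{'Target':>8}{'Actual':>8}  Met?")
for name, target, actual in measures:
    met = "yes" if actual >= target else "no"
    print(f"{name:<46}{target:>7.1f}%{actual:>7.1f}%  {met}")
```

A richer report would add the narrative context the kit recommends (why a target was missed, comparison to prior periods), but even this bare layout keeps target and actual side by side so shortfalls are visible at a glance.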
Tips
- In deciding on the final list of performance indicators, make sure they are the best indicators of performance related to the specific goal or objective. Don’t “settle” on the easy indicators; instead, work toward a set of indicators that will provide the most compelling evidence of performance.
- Make sure that indicators are clearly defined (e.g., how is recidivism being defined, or what constitutes a case—a defendant, a charge, or a case number?) and that there are specific guidelines in place for their collection. Refer to Appendix 3 for a list of definitions that you might draw from or at least use as a starting place for the development of your own definitions. It does not matter whether you use the provided definitions or definitions of your own. What matters is that your team agrees that these are the right terms and agrees on their meanings.
- Consider “pilot testing” the performance measures by doing a preliminary data collection, analysis, and reporting to ensure that the data is interpreted consistently and that the performance measures actually measure what they are supposed to.
- When data is being collected from multiple sources, consider the use of Memoranda of Understanding (MOUs) or some other form of agreement to ensure that it will be collected and reported in the manner specified and within the established time frames.
- Use the performance measures to inform decision making. Where performance is lacking, dig deeper to understand why optimum performance is not being met and then make the appropriate adjustments.
1 Satisfaction can be measured on different levels but generally represents the satisfaction of justice system “consumers” such as victims, witnesses, and defendants. However, in certain instances, it may be desirable and important to measure satisfaction among those working in the justice system.
2 For more information on developing a scorecard, see 6b: Developing a Systemwide Scorecard.
Examples
Milwaukee County, Wisconsin, EBDM Initiative Monthly Project Dashboard (A Work in Progress)
Milwaukee County, Wisconsin, Harm Reduction Goals and Objectives
Additional Resources and Readings
Boone, H. N., Jr., & Fulton, B. (1996). Implementing performance-based measures in community corrections (NCJ 158836). National Institute of Justice Research in Brief. Retrieved from http://www.ncjrs.gov/pdffiles/perform.pdf
Boone, H. N., Jr., Fulton, B., Crowe, A. H., & Markley, G. (1995). Results-driven management: Implementing performance-based measures in community corrections. Lexington, KY: American Probation and Parole Association.
Bureau of Justice Statistics. (1993). Performance measures for the criminal justice system. Retrieved from http://www.bjs.gov/content/pub/pdf/pmcjs.pdf
Dillingham, S., Nugent, M. E., & Whitcomb, D. (2004). Prosecution in the 21st century: Goals, objectives, and performance measures. Alexandria, VA: American Prosecutors Research Institute.
Hatry, H. P. (2007). Performance measurement: Getting results. Washington, DC: Urban Institute Press.
Hoyle, B. (2011). Dashboards help lift the ‘fog of crime.’ Retrieved from http://www.theomegagroup.com/press/articles/dashboards_help_lift_the_fog_of_crime.pdf
National Center for State Courts. CourTools. Retrieved from http://www.courtools.org/
National Research Council. (2003). Measurement problems in criminal justice research. Washington, DC: National Academies Press.
Pennsylvania Commission on Crime and Delinquency: Office of Criminal Justice Improvements. Criminal justice performance measures literature review calendar years: 2000 to 2010. Retrieved from http://www.pccd.pa.gov/Pages/Default.aspx#.VrC7bmYo7cs
Rossman, S. B., & Winterfield, L. (2009). Measuring the impact of reentry efforts. Retrieved from http://cepp.com/wp-content/uploads/2015/12/Measuring-the-Impact-of-Reenty-Efforts.pdf
Appendix 1
Performance Indicator Worksheet
Appendix 2
Data Collection Plan Worksheet
Appendix 3
Sample Glossary of Criminal Justice Terms
Activity 6: Establish performance measures, determine outcomes, and develop a system scorecard
What Do We Mean by Outcomes?
“Outcomes,” under a risk reduction model, are defined as:
- decreases in the rate or severity of reoffense by offenders,
- decreases in the harm caused to communities as a result of crime,
- increases in the level of satisfaction with the justice system by victims, and
- increases in the level of public confidence in the justice system.
Performance measurement facilitates an objective, empirical evaluation of the effectiveness of the justice system in achieving desired outcomes; it also facilitates an evaluation of the effectiveness of change strategies in contributing to those outcomes. Development of a systemwide scorecard—and reaching agreement on the methods to measure performance on scorecard items—ensures common agreement on the team’s desired outcomes. It also provides a tool for engaging and educating professionals and community members around the goals and activities of the justice system.
Elements of an EBDM justice system include
- a set of agreed-upon performance measures that will enable an objective, empirical evaluation of the effectiveness of justice system agencies in achieving their vision;
- benchmarks against which longer-term outcomes can be measured;
- methods to collect and analyze data on an ongoing basis to inform policy and practice; and
- a systemwide scorecard.
5a: Building Logic Models
Navigating the Roadmap
Activity 5: Develop logic models.
Introduction
The development and use of a logic model is a critical step in understanding how evidence-based decision making (EBDM) will operate in a specific jurisdiction. A logic model helps lay out the shared understandings of what resources are available, what activities and changes will occur, what these activities and changes will produce, and what the intended long-term impacts of the initiative will be. The result of building a logic model is a picture that outlines the initiative’s theory of change, with a road map of what steps need to be taken in order to produce the desired impacts.
Logic models have six main components:
- inputs, or resources, which represent the existing resources (both financial and human), policies, practices, facilities, and capabilities that a jurisdiction has in place to support the implementation of EBDM;
- activities, which represent the specific strategies to be undertaken and implemented;
- outputs, which specify the immediate results that occur as activities and strategies are implemented (e.g., changed policies and practices, adoption of new tools/protocols, number of people trained, number of cases in which risk assessments are administered);
- outcomes, which serve as indicators that change is occurring at key decision points in the justice system as a result of the activities and which demonstrate that EBDM has been implemented at the system, agency, and case levels; and
- impacts, which define the types of long-term results that are anticipated and that can be measured as a result of implementing EBDM.
The Logic Model as a Motivator
“The logic model, approached with integrity, encourages its developers to stretch their imagination and be accountable to their vision. For us, the logic model forced us to think harder and to be more specific about the results and how to measure them. It denied us the option of settling for platitudes or unquantified commitments.”
–Policy Team Member, Milwaukee County, Wisconsin
The sixth component, contextual conditions, must also be considered, because the logic model is intended to be a roadmap. Contextual conditions represent the environment in which the local justice system operates and can include political, economic, social, cultural, or other factors.
For the EBDM initiative, the logic model should reflect implementation and desired change at the system level (i.e., a system logic model). The model that each jurisdiction develops will incorporate the resources, activities, etc. that are currently being used to reach the identified harm reduction goals as well as the new activities that are being planned.
Purpose
Building a logic model has two purposes:
- It helps facilitate the planning process by providing a mechanism for linking assumptions about how EBDM will work and the intended causal relationships between activities and impacts.
- It provides a tool for managing the implementation and evaluation of EBDM activities. Because EBDM can, and should, be implemented at multiple levels, separate logic models should be developed to represent EBDM at the system and agency levels.
The system-level logic model will provide an overall picture of the types of systemic activities and policy changes that will need to occur in order to achieve the jurisdiction-wide impacts that are expected with regard to harm reduction. The purpose of the agency-level logic model is to provide each entity and agency in the justice system with a plan for what activities the agency will need to undertake to move toward EBDM, what the outputs of these activities are, and how these will impact the stakeholders’ overall goal of harm reduction.
In addition to providing a graphic illustration of the causal relationship between activities and impacts, the component parts of the model also provide a sense of temporal order. In other words, the logic model can be used to show what activities or outputs need to occur before others can begin.
Participants
Initial work on the logic model, deciding what the jurisdiction hopes to accomplish, is a group discussion, ideally among the policy team. After these decisions have been reached, the logic models can be developed, with input from the policy team, by staff internal to the agency(ies) who have some background in conceptualizing, planning, and implementing policy or program initiatives; by staff with similar backgrounds from colleague agencies or county administration; or by outside experts. The instructions below assume that staff within agencies in the jurisdiction will develop the logic models.
Instructions
In general, the approach to developing a logic model—whether for the overall system or for an individual agency—is to work as a group to answer several critical questions related to what is hoped will be accomplished. The team discussion should result in answers to the following questions:
For the system model:
- Why do you want to move toward an EBDM-based system? How will the jurisdiction benefit from an EBDM-based system (i.e., how will harm to jurisdictions be reduced)?
- What significant changes do you expect from the implementation of EBDM in terms of system operation?
- What types of information will convince you (and others, including the public) that positive change has occurred?
- What are the possible unintended consequences, both positive and negative, of implementing EBDM?
- What contextual (e.g., social, political, economic) conditions might facilitate or hinder your ability to achieve the types of impacts you’ve identified for both the system and the jurisdiction overall?
For the agency model:
- What do you hope to accomplish as a result of implementing EBDM?
- What outcomes does your agency need to achieve in order to contribute to the systemic impacts identified above?
- What significant changes will occur within your agency as a result of the implementation of EBDM?
- What types of information will convince you (and others) that you are achieving the outcomes that you’ve defined?
- What are the possible unintended consequences, both positive and negative, of implementing EBDM?
- What contextual (e.g., social, political, economic) conditions might facilitate or hinder your ability to achieve the types of impacts you’ve identified?
The answers to these questions form the basis for two of the logic model’s component parts: impact(s) and contextual conditions.
Logic models are built from right to left—first you define the impacts, then the outcomes and outputs, followed by activities, and then the inputs. Contextual conditions are defined last or in tandem with the other components because they help you identify other factors that might need to be considered in order to achieve the intended results.
Good Impact and Outcome Statements
An example of a well-defined SMART impact is the following:
“75% of jail beds will be occupied by high risk offenders by 2013.”
A well-articulated outcome has the same characteristics as an impact. An example of a good outcome statement is the following:
“The number of offenders who successfully complete their sentence or treatment will increase by 75% within one year.”
An easy way to think about the development of a logic model is to think in terms of “if…then…” statements. For example, if we want to achieve these harm reduction impacts (e.g., reduced costs), then we will need to accomplish these outcomes (e.g., cost-saving measures). If we want to achieve these outcomes, then we will need to accomplish these outputs (e.g., number of low risk offenders diverted from the system). If we want to produce these outputs, then we will need to implement a specific activity or set of activities (e.g., pretrial risk assessment tool). And finally, if we want to implement this activity, then we will need to draw on these types of inputs/resources (e.g., funding to purchase a risk assessment instrument).
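The “if…then…” chain above can be sketched as a linked structure whose elements are read right to left when planning and left to right when executing; the content strings are the examples from the paragraph, and the representation itself is just one possible sketch.

```python
# The five linked components, ordered from inputs (left) to impacts (right).
chain = [
    ("input",    "funding to purchase a risk assessment instrument"),
    ("activity", "implement a pretrial risk assessment tool"),
    ("output",   "number of low risk offenders diverted from the system"),
    ("outcome",  "cost-saving measures"),
    ("impact",   "reduced costs"),
]

# Reading the chain as "if...then..." statements: each element
# depends on the one immediately before it.
for (_, needed), (kind, wanted) in zip(chain, chain[1:]):
    print(f"to achieve the {kind} '{wanted}', we need '{needed}'")
```

Walking the chain mechanically like this is a quick check for the “logic challenge” described below: if any link between adjacent elements is implausible when read aloud, the causal logic needs rework before implementation.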
The following instructions offer step-by-step guidance on the development of logic models.
Step-by-Step Instructions:
- Using the logic model table in Appendix 1, list the intended impacts and outcomes and define them according to the SMART principle:
- Be Specific.
- Make them Measurable (i.e., quantifiable).
- Be Action-oriented.
- Be Realistic.
- Articulate a Time in which the change will occur.1
- Define what short-term accomplishments (outputs) will be needed in order to produce the intended outcomes and impacts. For example, if your jurisdiction expects that a certain number of joint policy decisions will be adopted, then two outputs might be the number/percentage of meetings attended by each policymaker and the number of policy decisions discussed.
- For each output identified, define the activity that will produce it. For example, if the output is to have 100% of probation officers trained in the use of motivational interviewing techniques, then the activity might be to implement a motivational interviewing training program.
- As you define which activities will be implemented, make a list of available resources, including financial, human, and existing materials and policies, that will be used to facilitate implementation of the activities. Make note of resources that might be lacking, and consider adding activities to the model that would either produce the resources or develop the capacity needed.
- Once you complete the logic model table, make a list of the contextual conditions that are external to the justice system but that have an impact on its operation and ability to implement the planned activities or to achieve the desired outcomes and impacts.
- The next step is to transfer the contents of the logic model table to a logic model diagram. Laying out the diagram of the logic model will require additional consideration of how all the defined elements are logically related to each other; it may identify areas where the logic is flawed and additional work is required. The logic model diagram will also help identify any gaps that need to be filled.
- Appendices 2 and 3 illustrate a basic logic model structure, representing the inputs, activities, outputs, outcomes, and impacts of two specific strategies that might be part of a site’s implementation plan.
- Use the checklist in Appendix 4 to assess the quality of the draft logic model. Members of the policy team (or managers/line personnel in an agency) and others not involved in the development of the logic model should complete the checklist.
- Revise and finalize the logic model as required.
Defining Your Activities: The Logic Challenge
Often there are preconceived ideas (because of funding opportunities, political will, or other reasons) about the specific activities that should be implemented. Be careful and realistic about the extent to which these activities will actually produce the intended outputs, outcomes, and impacts. As an example, consider the jurisdiction that wanted to decrease the amount of drug crime across the city. To do this, it decided to implement a truancy prevention program in one elementary school. Going through the process of linking activities, outputs, outcomes, and impacts would have readily highlighted the disconnect in the causal logic (i.e., what is the likelihood that a program at one elementary school will impact drug crime across the entire city?).
Tips
- Logic models should be built during the planning process of the initiative to maximize their utility as a planning, management, and evaluation tool.
- Horizontal arrows between components represent causal links; vertical arrows within components generally represent temporal order.
- It may be useful to label each piece of information (i.e., each input, each activity, etc.) in the logic model table to make the transfer to the logic model diagram easier. One approach is to assign the first input the number “1,” then assign “1a” to the activity related to that input, “1b” to the output associated with that activity, and so forth. If two or more elements flow from the same predecessor, number them in a way that depicts their temporal order.
- Logic models are not static. The logic model that you are preparing represents what you think and want to happen, not what will happen. The logic model should be thought of as a working model that you will periodically revisit and update as you move toward implementation.
- If the logic model includes both activities that are already in place in support of the identified impacts and activities that are being planned as part of the initiative, it may be useful to color code the planned activities so the action items for moving forward are clear (e.g., use black for components in place and red for proposed or new components).
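The labeling scheme described in the tips above can be sketched in a few lines of code. This is purely illustrative: the element names and the single training chain below are hypothetical, but the sketch shows how labels such as “1,” “1a,” and “1b” let you reconstruct each causal chain when transferring the table to a diagram.

```python
# Illustrative only: hypothetical logic model elements keyed by their labels.
# The label encodes the causal chain: input "1" -> activity "1a" ->
# output "1b" -> outcome "1c".
elements = {
    "1":  ("input",    "Trainers and curriculum for motivational interviewing"),
    "1a": ("activity", "Deliver a motivational interviewing training program"),
    "1b": ("output",   "100% of probation officers trained in MI techniques"),
    "1c": ("outcome",  "Officers apply MI techniques in supervision sessions"),
}

# Walk the labels in order to print the chain, indented by position in the
# chain, as it would appear when laid out in the diagram.
for label in sorted(elements):
    kind, text = elements[label]
    indent = "  " * (len(label) - 1)
    print(f"{indent}{label} [{kind}]: {text}")
```

A table built this way can be checked mechanically: every activity label should trace back to an input, and every output to an activity, before the diagram is drawn.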
1 See also 6a: Measuring Your Performance and 6b: Developing a Systemwide Scorecard for more information on developing SMART goals and objectives.
Example
Yamhill County, Oregon Logic Model
Additional Resources and Readings
W. K. Kellogg Foundation. (2004). Logic model development guide. Retrieved from http://ww2.wkkf.org/DesktopModules/WKF.00_DmaSupport/ViewDoc.aspx?fld=PDFFile&CID=281&ListID=28&ItemID=2813669&LanguageID=0
OJJDP. (n.d.). Performance measures: Logic model. Retrieved from http://www.ojjdp.gov/grantees/pm/logic_models.html
The Pell Institute. (2011). Using a logic model. Retrieved from http://toolkit.pellinstitute.org/evaluation-guide/plan-budget/using-a-logic-model/
CEPP. (2009). Measuring the impact of reentry efforts. Retrieved from http://cepp.com/wp-content/uploads/2015/12/Measuring-the-Impact-of-Reenty-Efforts.pdf
Appendix 1
Logic Model Development Template
Appendix 2
Sample of Partial System-Level Logic Model for Pretrial Risk Assessment
Appendix 3
Sample of Partial System-Level Logic Model for Using Risk Assessments to Inform Plea Negotiations
Appendix 4
PDF/Printer Friendly Version of Section
Activity 5: Develop logic models
EBDM Starter Kit
The development of a logic model is the critical next step in building a clear and specific understanding of how your system of evidence-based decision making (EBDM) will operate in the future. It is built upon—and in service of—a vision for the justice system, and it is informed by a careful analysis of current policy and practice in the context of evidence-based research. It reflects both the current inputs (i.e., resources) and activities that are supportive of the desired outcomes and reflective of the areas of advancement the policy team has identified. The logic model, therefore, describes currently available resources, activities that will be retained and those that will be changed or added, the outputs that these activities and changes will produce, and their intended long-term impacts.
The result of building a logic model is a picture that describes your theory of change—a local roadmap of the steps that need to be taken in order to produce your jurisdiction’s risk and harm reduction goals. A logic model provides a tool for managing the implementation and evaluation of EBDM activities.
Elements of an EBDM justice system include sound and testable system-level logic models.
4c: Becoming a Better Consumer of Research
EBDM Starter Kit
Navigating the Roadmap
Activity 4: Understand and have the capacity to implement evidence-based practices.
Introduction
The EBDM Initiative seeks to help local policy teams find and understand evidence-based knowledge about effective justice practices and to design more effective responses to defendants and offenders.1 Many stakeholders already know how to find and use research; others will appreciate these tips regarding how to quickly access reliable research and how to review and understand the findings and their applications. The evidence or empirical studies will be drawn from many fields: evidence-based practices in criminal justice, behavioral health interventions, organizational development, leadership and management, effective collaboration processes, and cost–benefit analyses.
Purpose
Broadly speaking, the goal of this document is to increase policy officials’ and practitioners’ skills in finding the research that matters and in understanding and translating empirical findings for their use in improving policy and practice. Specifically, this document offers
Defining EBP
“Evidence-based practice is the use of direct, current empirical evidence to guide and inform effective and efficient decision making and supervision practices.”
- tips for finding research relevant to critical questions about evidence-based practice;
- a list of searchable databases on criminal justice topics; and
- advice on how to review and assess the quality of the findings in academic articles and the research literature.
Participants
This document was developed for EBDM policy teams, their work groups, and agency practitioners to enhance their ability to find and understand the best available research that may be applied to criminal justice problems and proposed solutions.
Instructions
Step 1: Look in the Right Places to Find the Evidence that Matters
Where should the discerning consumer begin the search for evidence-based policies and programs and answers to specific research questions? The answer is three-fold: the Web, written literature, and experienced colleagues from your local and state criminal justice systems and from national networks of professionals.
Websites that Filter the Information for You: Evidence-Based Program Databases
Websites designed specifically to summarize research in one or more criminal justice practice areas are an excellent place to begin the search for information on effective programs and policies. A growing number of government agencies, academic institutions, and professional groups maintain these databases as a service to criminal justice professionals and the public. These organizations
- formulate evaluation criteria for assessing the strength of research findings;
- employ experts to review multiple studies of research on programs in a single area; and
- indicate which programs are shown to be effective (and at what level of rigor or confidence).
Some of these websites specialize in “systematic reviews” (also called meta-analytic reviews) of the literature regarding specific research questions and program areas. As the Center for Evidence-Based Crime Policy at George Mason University explains, systematic reviews “summarize the best available evidence on a specific topic using transparent, comprehensive search strategies to find a broad range of published and unpublished research, explicit criteria for including comparable studies, systematic coding and analysis, and often quantitative methods for producing an overall indicator of effectiveness.”2
A partial list of evidence-based program databases in criminal justice follows:3
A Website Caution
The consumer of website research summaries should be careful to not take the information at face value. Definitions as to what constitutes evidence, methodological soundness, and robust findings can vary significantly.
Furthermore, researchers do not always agree on what can be concluded from a research study. While some website authors make transparent attempts to give the user an accurate description of research findings, it is up to the user to exercise judgment. It is recommended that the user seek corroborating information to increase confidence in the relative strength of the research and its implications.
- The Campbell Collaboration’s Crime and Justice Coordinating Group (CCJG) is an international network of researchers that prepares and disseminates systematic reviews of high-quality research on methods to reduce crime and delinquency and to improve the quality of justice. www.campbellcollaboration.org/crime_and_justice/
- The Center for the Study of the Prevention of Violence, University of Colorado, maintains a website, Blueprints for Violence Prevention, on evaluated programs to prevent adolescent violence, aggression, and delinquency. www.colorado.edu/cspv/blueprints/
- George Mason University’s Center for Evidence-based Crime Policy offers a number of services, including systematic reviews, research on crime and place, and a summary (matrix) of evidence-based policing practices. http://gunston.gmu.edu/cebcp/
- The Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Evidence-based Programs and Practices (NREPP) provides a database of more than 190 interventions supporting mental health promotion, substance abuse prevention, and mental health and substance abuse treatment. www.nrepp.samhsa.gov
- U.S. Department of Justice, Office of Justice Programs’ Crime Solutions’ website provides research on program effectiveness; easily understandable ratings (effective, promising, and no effects) that indicate whether a program achieves its goal; and key program information and research findings. www.crimesolutions.gov
Websites that Provide Bibliographic Databases
These websites, which provide a listing of hundreds of studies, are often maintained by government agencies and universities. Prominent among these in the criminal justice field are the following:
- The National Criminal Justice Reference Service (NCJRS), supported by the U.S. Department of Justice, Office of Justice Programs. https://www.ncjrs.gov/
- The National Institute of Corrections Information Center. www.nicic.org
- Correctional Services of Canada. http://www.csc-scc.gc.ca/text/rsrch-eng.shtml
Websites that Provide Summaries of Research and Practical Guidance
Some universities, state criminal justice agencies, and professional organizations also run websites that summarize the research on effective criminal justice practice and/or provide guidance to users. While not as extensive as bibliographic databases, these websites focus their publications on the critical issues of most concern to policymakers and practitioners. A partial list follows:
- Center for Evidence-Based Crime Policy. http://gemini.gmu.edu/cebcp
- Correctional Treatment Evaluations, Texas Christian University, Institute for Behavioral Research. This national research center for addiction treatment studies in community and correctional settings provides access to over 700 resources on its website. http://www.ibr.tcu.edu/
- National Implementation Research Network. This website contains research on the successful implementation of new processes within organizations and systems. http://nirn.fpg.unc.edu/
- Stanford University, Evidence-Based Management. This website specializes in evidence directly related to the management of agencies. http://www.evidence-basedmanagement.com/
- University of Cincinnati School of Criminal Justice. This university-based site contains a number of research studies regarding the use of evidence in correctional interventions. http://www.uc.edu/ccjr/reports.html
- Washington State Institute for Public Policy. This website contains a number of helpful studies on what is or is not an effective intervention for reducing recidivism and costs. It is perhaps best known for its cost–benefit studies. http://www.wsipp.wa.gov/
Your Colleagues
An efficient way to verify the results of web-based and library searches is often to ask experienced colleagues in your state and local jurisdiction and in national networks for recommendations regarding the latest and most reliable research. This strategy helps triangulate, or home in on, the best studies.
Further, when identifying a journal article that appears useful but for which a subscription is required, contact colleagues at nearby colleges and universities and inquire about their ability to access the article from their library and provide a single copy for your review. (Be careful to not copy, distribute, or otherwise violate copyright laws.)
An increasing number of states support websites that summarize evidence-based research and practical guidance that is directly relevant to their criminal justice constituents and agencies. The websites may be hosted by a state criminal justice agency or university. Your colleagues will know how to access these sites.
Step 2: Evaluate Research Quality
What criteria should be used to decide if program evidence has been collected and analyzed according to high-quality research standards? As Hess and Zavadsky (2009) emphasize in their article “Evaluating Research for Policy Development,” all evidence is not created equally. Familiarity with a few key concepts can help policymakers wade through the growing body of information and make better-informed decisions about what is reliable. Following are a few tips about how to read the research literature and evaluate its quality:4
The Implementation Challenge
Successfully replicating a well-researched, effective intervention in your jurisdiction depends on following its prescriptions precisely and implementing them correctly at scale. This is not the time to add your own “unique stamp” to the approach; if you do, evaluate the modified version to determine whether the change improved results.
- Understand the target population of the study and consider its relationship to the target population under consideration in your jurisdiction. Pay attention to sample size and sample selection. In general, larger samples provide more reliable data; however, there is no one hard and fast rule about sample size. The sample size may vary according to the purpose of the study, overall population, sampling error, and so forth.
- Consider the context. What works in one place or for one population may not work for another (e.g., a study completed in a small, rural state with unique characteristics may not be applicable to a large, densely populated state with a different offender profile and justice system challenges). In addition, the context of one study cannot necessarily be transferred to other settings. An often-quoted study examined successful program results and found that 15% of the outcome was derived from the intervention itself (e.g., cognitive program, didactic intervention, or therapeutic community) and 30% from the working alliance with the individual providing the service.5 However, the study was not carried out with correctional clients. The results could be valid across populations but until that hypothesis is tested, caution must be exercised about its applicability to the correctional population.
- Be cautious about assertions of causality. Correlation does not mean causation; an intervention may be related to a certain outcome but may not be responsible for that outcome. For example, a significant portion of many communities’ offender population includes individuals with mental illness. A common assumption is that mental health treatment will reduce the likelihood of reoffense among this population. However, while a mental health condition should be treated, studies have shown that mental health treatment alone is unlikely to reduce recidivism.
- Recognize that changes in implementation can change the outcomes of an intervention. For instance, an effective probation intervention that relies on officers proficient in motivational interviewing, case planning, and problem solving with clients may not work as well if delivered by staff who do not possess these skills.
- Be sure the conclusions follow logically from the reported findings. The summaries or conclusions of some studies can be deceptive or take license in explaining the implications of findings. Consumers should look for research that “measures the impact of particular interventions on identifiable populations under controlled circumstances.”6 These studies offer prescriptive guidance about actions that can be consistently replicated elsewhere.
All Research is Not Created Equal
“The golden rule here is to recognize that everything promoted as ‘research’ is not equally reliable or useful.”
– Hess & Zavadsky, 2009
- The issue of confidence in results is important. The research consumer needs to know if the results of the intervention are “statistically significant.” This refers to the likelihood that a result is caused by something other than mere chance. In general, a p-value of 5% (p ≤ .05) or lower is considered statistically significant. While policymakers may not want to dig through the statistical results section in great detail, it is useful to check whether the article reports that the findings are statistically significant. Other issues, such as whether the person(s) conducting the research study has a vested interest in the outcome of the study and whether the study has been replicated elsewhere, should also be considered.7
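To make the p-value idea above concrete, here is a minimal sketch of a standard two-proportion z-test. The numbers are hypothetical (invented for illustration, not drawn from any study cited here): 40 of 200 program participants reoffend versus 60 of 200 in a comparison group, and we ask whether that difference clears the conventional p ≤ .05 threshold.

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions.

    x1/n1 and x2/n2 are successes/observations in each group.
    Returns the z statistic and the two-sided p-value.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided tail area
    return z, p_value

# Hypothetical data: 40/200 (20%) reoffend with the program,
# 60/200 (30%) without it.
z, p = two_proportion_z_test(40, 200, 60, 200)
print(f"z = {z:.2f}, p = {p:.4f}, significant at .05: {p < 0.05}")
```

Note what the test does and does not say: a small p-value indicates the difference is unlikely to be chance alone, but it says nothing about whether the program caused the difference or whether the effect is large enough to matter.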
1 In Appendix 3 of the Framework for Evidence-Based Decision Making in Local Criminal Justice Systems, the Initiative provides a matrix of research findings on reducing pretrial misbehavior and offender recidivism. EBDM policy teams are encouraged to review this resource; however, the EBDM Research Matrix can only provide a snapshot of the research at one point in time, as new research is continually conducted. Therefore, this Starter Kit document is intended to provide EBDM policy teams with additional guidance on how to keep current with the research on EBDM.
2 Center for Evidence-Based Crime Policy, http://cebcp.org/systematic-reviews/
3 Adapted from Fink, 2008.
4 Adapted from Hess & Zavadsky, 2009.
5 Wampold, 2001.
6 Hess & Zavadsky, 2009.
7 See Hess & Zavadsky (2009) for more information on how to be a good consumer of research.
Additional Resources and Readings
Hess, F. M., & Zavadsky, H. (2009). Evaluating research for policy development. Institute for Public School Initiatives, The University of Texas at Austin.
Fink, A. (2008). The research consumer as detective: Investigating program and bibliographic databases. Practicing Research: Discovering Evidence that Matters (pp. 33–64). Retrieved from http://www.sagepub.com/upm-data/19270_Chapter_2.pdf
Wampold, B. E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, NJ: Lawrence Erlbaum Associates.
PDF/Printer Friendly Version of Section