EBDM Starter Kit

6a: Measuring Your Performance

Navigating the Roadmap

Activity 6: Establish performance measures/outcomes/system scorecard.

Introduction

Performance measures are tools for managing the performance of an agency, organization, or even an entire system. They provide benchmarks for assessing whether optimum performance by the criminal justice system (and the entities within it) is being realized and, more importantly, whether the system is achieving what it intends to achieve under the evidence-based decision making (EBDM) framework. The use of performance measures provides a way to understand, quantitatively, the business processes, products, and services in the justice system. In a nutshell, performance measures help inform the decision making process by ensuring that decisions are based on clearly articulated and objective indicators. Moreover, undertaking and institutionalizing performance measurement throughout the criminal justice system allows policy discussions and decisions to be “data-driven,” which in turn helps build the foundation for additional local evidence about what works.

In general, performance measures for the justice system fall into four categories:

  • Effectiveness: the extent to which the intended outcomes are being produced
  • Efficiency: whether maximum outcomes are being produced at minimum cost
  • Satisfaction and quality: whether the right processes are being used and the degree to which there is “satisfaction” with the processes1
  • Timeliness: the extent to which activities or processes occur within predefined time limits

Performance measurement is often confused with program evaluation because both attempt to capture quantitative information about desired goals and outcomes. Some key differences should be noted. First, program evaluation involves the use of specific research methodologies to answer select questions about the impact of a program. Performance measurement, on the other hand, is simply the articulation of performance targets and the collection/analysis of data related to these targets. Second, program evaluation is designed to establish causal relationships between activities and observed changes while taking into account other factors that may have contributed to or caused the changes. On the other hand, performance measurement simply provides a description of a change, but cannot be used to demonstrate causality. Third, program evaluations are usually one-time studies of activities and outcomes in a set period of time, whereas performance measurement is an ongoing process.

As you begin the process of defining performance measures, there are seven rules that need to be kept in mind. Performance measures should be

  • Logical and related to goals
  • Easy to understand
  • Monitored regularly
  • Readily accessible
  • Based on specific benchmarks
  • Quantified and measurable
  • Defined with specific performance targets

Purpose

This starter kit is designed to help jurisdictions understand performance measures and to provide a guide for the development and implementation of performance measures systemwide. Information about the key steps in performance measurement is provided in addition to sample performance measures. It is important to note, however, that performance measures should be locally defined and driven; as such, the sample measures may or may not be relevant in a specific jurisdiction, depending on the focus of the local initiative. Finally, tips are offered for the implementation and use of performance measures.

Using Information Dashboards To Make Law Enforcement Decisions

Law enforcement has long understood the importance of routine performance measurement. By using the “dashboard” approach—that is, putting a spotlight on key information on a routine basis—law enforcement agencies around the country are using data to assess performance and adjust activities based on key outcome measures.

Police Chief Bence Hoyle, of Cornelius, North Carolina, states that such dashboards should

  • identify and disseminate information about criminal activity to facilitate rapid intervention;
  • identify and disseminate information about crime to assist in long- and short-term strategic solutions;
  • allow agencies to research the key incident data patterns, such as modus operandi, repeat offender locations, or other related information, such as traffic stops near the scene, so suspects can quickly be identified;
  • provide data on the effectiveness of specific tactics, in near real-time, through focused views; and
  • support the analysis of workload distribution by shift and geographic area.

For more information, see “Dashboards Help Lift the ‘Fog of Crime’” at http://www.theomegagroup.com/press/articles/dashboards_help_lift_the_fog_of_crime.pdf.

Participants

Development of performance measures should involve a variety of stakeholders. At a minimum, the leadership of the various components of the justice system, along with some line-level representatives, should be part of the process. The leadership can provide the broad systemic perspective about how the system should be performing under an EBDM initiative and how each agency/entity within the justice system contributes to overall system performance. The inclusion of line personnel, however, provides a different level of detail and, to some extent, a reality check about how the system is currently performing and what the capacity for performance is. Participants should also include representation from groups that have an interest in the justice system—city/county government budget officers and managers, health/mental health treatment providers, etc. The community and the media can also be important stakeholders to include as, ultimately, it is through these groups that performance is communicated and legitimacy is established. The point is that for performance measures to have validity (not necessarily in the statistical sense), they must be meaningful for others who judge the performance of the system.

Jurisdictions may wish to consider engaging an outside facilitator with experience in performance measurement to provide guidance and assistance through the process. Local universities are an excellent resource for finding this kind of assistance.

Instructions

To develop and implement performance measures, the stakeholders identified above should undertake four key steps:

  • Identify the goals and objectives of the system under the EBDM framework.
  • Determine what the key indicators of output and outcomes are and what types of data collection will be required.
  • Begin the collection and analysis of the performance measures.
  • Implement a reporting mechanism for communicating performance to stakeholders.

Detailed guidance for each of these steps is provided below.

Step 1: Identify the goals and objectives of the system under the EBDM framework

The first step in articulating performance measures is to define what is meant by “optimum performance,” i.e., to establish harm reduction goals and objectives for the criminal justice system. Several questions can help focus the discussion on what the jurisdiction hopes to accomplish:

  • How will the jurisdiction benefit as a whole (i.e., what are the intended harm reduction outcomes)?
  • How will the criminal justice system benefit from the movement to an EBDM-based system?
  • What is an EBDM system intended to achieve or produce?
  • What significant changes does the jurisdiction expect from the implementation of EBDM in terms of system operation?
    • How will the costs to operate the system change?
    • How will case processing change at point of entry into the system, during the adjudication process, while in corrections, and/or at point of release?
    • How will those in the system (victims, witnesses, and defendants) view the process?
  • How will EBDM impact those working in the system?
  • What types of information will convince you and others (including the public and funders) that the system is operating at an optimum level?
  • What types of information will convince you and others that the system is achieving what it is intended to achieve?

The answers to these questions must then be articulated in terms of quantifiable goals and objectives. It is important to understand that goals and objectives are not synonymous. Goals represent the desired end result of the system. Objectives define the short-term indicators that demonstrate progress toward goal attainment and that describe who or what will change, by how much, and over what period of time. For example, broadly stated, one goal might be that the recidivism rate be no higher than 20%. An objective might be a 5% annual decrease in the percentage of offenders who commit new offenses in a three-year period.
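To make the goal/objective distinction concrete, the following is a minimal sketch in Python that checks hypothetical annual recidivism figures against both the example goal (a rate no higher than 20%) and the example objective (a 5% annual decrease). All figures are invented for illustration; a jurisdiction would substitute its own baseline and annual data.

```python
# Minimal sketch: checking a goal and its objective against annual data.
# The figures below are hypothetical illustrations, not real jurisdiction data.

GOAL_MAX_RATE = 0.20          # goal: recidivism rate no higher than 20%
OBJECTIVE_ANNUAL_DROP = 0.05  # objective: 5% annual decrease in the rate

# Hypothetical share of offenders committing new offenses within three years,
# observed at the end of each performance year.
annual_rates = {2021: 0.31, 2022: 0.29, 2023: 0.27}

years = sorted(annual_rates)
for prev, curr in zip(years, years[1:]):
    # Percent change relative to the prior year's rate.
    change = (annual_rates[curr] - annual_rates[prev]) / annual_rates[prev]
    met = change <= -OBJECTIVE_ANNUAL_DROP
    print(f"{prev}->{curr}: {change:+.1%} change; objective met: {met}")

latest = annual_rates[years[-1]]
print(f"Goal (rate <= {GOAL_MAX_RATE:.0%}) met: {latest <= GOAL_MAX_RATE}")
```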

Another important consideration in defining goals and objectives is adherence to the SMART principle:

  • Be Specific.
  • Make them Measurable (i.e., quantifiable).
  • Be Action-oriented.
  • Be Realistic.
  • Articulate a Time in which the change will occur.

Once goals and objectives have been defined, the stakeholders should compare them to the impacts and outcomes identified in the system-level logic model. Each goal and objective should align with the intended impacts and outcomes articulated in the logic model. Although there does not need to be complete overlap, there should be no contradictions.

Step 2: Determine what the key indicators of output and outcomes are and what types of data collection will be required

The second step in defining performance measures encompasses a number of activities:

  • determining what the key indicator data are for each goal and objective;
  • identifying where, or if, the data exist and, if not, whether the capacity exists for capturing the data;
  • refining the list of performance measures to represent a set of key indicators; and
  • establishing performance targets.

Well-articulated goals and objectives should lend themselves nicely to the identification of key indicator data. Using the worksheet in Appendix 1, jurisdictions will need to “break down” the goals and objectives into specific types of data that can be collected. Using the example from Step 1 above, the table below shows the goal, the objective, and the types of indicator data that are needed to measure performance:

Goal: Our jurisdiction will have a recidivism rate of less than 20%.

Objective: 5% annual decrease in the percentage of offenders who commit new offenses in a three-year period.

Indicator Data:
  • Total number of offenders committing new offenses within three years
  • Total number of offenders released

As indicator data are being identified, jurisdictions should note if the data already exist; if so, they should identify who “owns” the data and, if not, they should determine whether the capacity for obtaining the data exists. To the extent that data is not already being collected or the capacity to collect the data does not exist, consideration should be given to the relative importance of the indicator. This next step in the process will help refine the list of performance measures.
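One way to keep track of these determinations is a simple indicator inventory. The sketch below is a minimal illustration in Python; the field names and entries are hypothetical and are not drawn from the Appendix 1 worksheet.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    """One row of a simple indicator inventory (fields are illustrative)."""
    name: str
    data_exists: bool          # is the data already being collected somewhere?
    owner: Optional[str]       # agency/person who "owns" the data, if known
    collection_feasible: bool  # if not yet collected, could it feasibly be?

# Hypothetical entries; a jurisdiction would list its own candidate indicators.
inventory = [
    Indicator("Offenders committing new offenses within three years",
              data_exists=True, owner="Corrections research unit",
              collection_feasible=True),
    Indicator("Victim satisfaction survey responses",
              data_exists=False, owner=None, collection_feasible=False),
]

# Flag candidates whose relative importance should be weighed before adoption.
for ind in inventory:
    if not ind.data_exists and not ind.collection_feasible:
        print(f"Weigh relative importance before adopting: {ind.name}")
```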

An ideal performance measurement system must be manageable; as such, the number of performance measures for each goal and objective should be limited. Generally, there should be no more than three or four measures per goal or objective and, in fact, there may be fewer. Jurisdictions should aim to select those measures that are the strongest indicators of performance for which data already exist or for which the capacity for the data to be collected is in place. In refining the list, it is important to consider the following seven questions:

  • Is the indicator logical and directly related to goals?
  • Is the indicator easy to understand (i.e., would a reasonable person agree that the indicator accurately represents what it is intended to measure)?
  • Can the indicator be monitored regularly?
  • Is the data necessary for measurement readily available?
  • Can the indicator be measured against a specific benchmark (i.e., is there a baseline against which performance can be assessed)?
  • Is the performance indicator quantified and measurable?
  • Can specific performance targets be set for the indicator in question?

The question of performance targets is a particularly important one and requires more than a simple “yes/no” answer. As the list of measures is refined, jurisdictions should begin thinking in terms of what the specific performance targets should be. In other words, what is the “magic number” that demonstrates optimum performance? For example, if the intent is to implement pretrial risk assessments in order to decrease jail operating costs, the performance target might be that 90% of release decisions are consistent with assessment results. The logic model may provide some guidance in answering this question.
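As a minimal illustration of a target check, the sketch below compares a hypothetical count of release decisions consistent with assessment results against the 90% target mentioned above. The counts are invented for illustration.

```python
# Hypothetical counts for one reporting period.
release_decisions = 480          # total release decisions made
consistent_with_assessment = 401 # decisions consistent with assessment results

TARGET = 0.90  # performance target: 90% of decisions consistent

actual = consistent_with_assessment / release_decisions
print(f"Actual: {actual:.1%} vs. target {TARGET:.0%}; met: {actual >= TARGET}")
```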

Step 3: Begin the collection and analysis of the performance measures

Because performance measurement is an ongoing process, it is important to have a well-defined data collection plan in place prior to the actual collection of data. As shown in Appendix 2, the data collection plan should include the following:

  • data source: the name of the agency/person responsible for collecting the data and, if the data is already being collected, the name of the report or system from which the data is drawn; and
  • frequency of data collection: how often the data will be collected.

Once the data collection plan has been agreed upon by the key stakeholders and the agencies/persons that will be responsible for collecting the data, the jurisdiction should collect baseline data for each performance measure against which progress can later be measured.

It is rare that the data in raw form will be sufficient for assessing performance; quantitative analysis of the data is generally needed. The quantitative analysis will require basic statistical calculations such as ratios, percentages, percent change, and averages (mean, median, or mode). In some instances, depending on the measures selected, more complex statistics will be necessary and may require the involvement of persons with statistical analysis experience. Employees in the city/county manager’s office, or analysts within criminal justice agencies that have analysis units, may be resources. Local universities are also good resources for statistical analyses.
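The basic calculations are straightforward. The sketch below uses Python’s standard statistics module to compute the mean, median, and mode for a hypothetical set of case-processing times, along with a percent change between two performance periods; all figures are invented for illustration.

```python
import statistics

# Hypothetical days-to-disposition for a sample of cases in one period.
days_to_disposition = [41, 38, 55, 47, 38, 62, 44]

mean = statistics.mean(days_to_disposition)
median = statistics.median(days_to_disposition)
mode = statistics.mode(days_to_disposition)

# Percent change between two performance periods (prior vs. current average).
prior_avg = 52.0  # hypothetical average from the previous period
pct_change = (mean - prior_avg) / prior_avg * 100

print(f"mean={mean:.1f}, median={median}, mode={mode}")
print(f"percent change vs. prior period: {pct_change:+.1f}%")
```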

Step 4: Implement a reporting mechanism for communicating performance to stakeholders

Once the performance data is collected and analyzed, it should be reported to stakeholders in a clear and easily understood manner. Although there is no wrong or right way to report data, the following list of reporting formats should be considered:

  • Whenever possible, use graphic displays such as tables, bar charts, pie charts, or line charts (see the sketch following this list).
  • In graphic displays, provide legends and labels to clearly identify the information.
  • Take care not to present too much information in a single graphic display.
  • Use short narrative descriptions to help the audience interpret the data.
  • Present both the performance measure (the target) (e.g., risk assessments provided to judges in 90% of cases) and the actual score (e.g., risk assessments provided to judges in 75% of cases).
  • Provide context for the interpretation that might include discussion of why performance targets were or were not met, how the current performance period compares to previous performance periods, or what recommendations for performance improvement can be made.
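As one example of such a graphic display, the sketch below uses Python with the matplotlib library to chart targets against actual scores for two measures. The first pair draws on the target/actual example above (90% vs. 75%); the second measure and all values are hypothetical.

```python
import matplotlib.pyplot as plt

# Hypothetical target vs. actual values for two performance measures.
measures = ["Risk assessments\nprovided to judges",
            "Release decisions\nconsistent with assessment"]
targets = [90, 90]
actuals = [75, 84]

x = range(len(measures))
width = 0.35  # width of each bar

fig, ax = plt.subplots()
ax.bar([i - width / 2 for i in x], targets, width, label="Target (%)")
ax.bar([i + width / 2 for i in x], actuals, width, label="Actual (%)")
ax.set_xticks(list(x))
ax.set_xticklabels(measures)
ax.set_ylabel("Percent of cases")
ax.set_title("Performance: target vs. actual")
ax.legend()
plt.tight_layout()
plt.show()
```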

Jurisdictions should also establish a regular mechanism for communicating and discussing performance that includes target dates for the release of information. Possible mechanisms include

  • publication of a “scorecard,” “report card,” or “dashboard”;2
  • monthly, quarterly, or annual reports; and/or
  • performance meetings with stakeholders.

Sample Measures

The actual performance measures selected by the jurisdiction should be reflective of the goals and objectives that the stakeholders have identified as part of the EBDM initiative. The following list of possible performance measures is provided for illustrative purposes only:

  • XX% of low-risk arrestees cited and released
  • XX% of defendants screened with a pretrial risk assessment tool
  • No more than XX% of cases resulting in deviations from risk assessment results for pretrial release
  • XX% of jail beds occupied by low-risk defendants awaiting adjudication
  • XX% of defendants/offenders with low-risk assessment scores placed in diversion programs
  • Risk assessment information provided to judges in XX% of cases
  • XX% of cases in which sentencing conditions align with assessed criminogenic needs
  • XX% of offenders placed in interventions specifically addressing assessed criminogenic needs
  • XX% of offenders who commit new offenses in a three-year period
  • XX% of victims who report satisfaction with the handling of their cases

Tips

  • In deciding on the final list of performance indicators, make sure they are the best indicators of performance related to the specific goal or objective. Don’t “settle” on the easy indicators; instead, work toward a set of indicators that will provide the most compelling evidence of performance.
  • Make sure that indicators are clearly defined (e.g., how is recidivism being defined, or what constitutes a case—a defendant, a charge, or a case number?) and that there are specific guidelines in place for their collection. Refer to Appendix 3 for a list of definitions that you might draw from or at least use as a starting place for the development of your own definitions. It does not matter whether you use the provided definitions or definitions of your own. What matters is that your team agrees that these are the right terms and agrees on their meanings.
  • Consider “pilot testing” the performance measures by doing a preliminary data collection, analysis, and reporting to ensure that the data is interpreted consistently and that the performance measures actually measure what they are supposed to.
  • When data is being collected from multiple sources, consider the use of Memoranda of Understanding (MOUs) or some other form of agreement to ensure that it will be collected and reported in the manner specified and within the established time frames.
  • Use the performance measures to inform decision making. Where performance is lacking, dig deeper to understand why optimum performance is not being met and then make the appropriate adjustments.

1 Satisfaction can be measured on different levels but generally represents the satisfaction of justice system “consumers” such as victims, witnesses, and defendants. However, in certain instances, it may be desirable and important to measure satisfaction among those working in the justice system.

2 For more information on developing a scorecard, see 6b: Developing a Systemwide Scorecard.


Examples

Milwaukee County, Wisconsin, EBDM Initiative Monthly Project Dashboard (A Work in Progress)

Milwaukee County, Wisconsin, Harm Reduction Goals and Objectives

Additional Resources and Readings

Boone, H. N., Jr., & Fulton, B. (1996). Implementing performance-based measures in community corrections (NCJ 158836). National Institute of Justice Research in Brief. Retrieved from http://www.ncjrs.gov/pdffiles/perform.pdf

Boone, H. N., Jr., Fulton, B., Crowe, A. H., & Markley, G. (1995). Results-driven management: Implementing performance-based measures in community corrections. Lexington, KY: American Probation and Parole Association.

Bureau of Justice Statistics. (1993). Performance measures for the criminal justice system. Retrieved from http://www.bjs.gov/content/pub/pdf/pmcjs.pdf

Dillingham, S., Nugent, M. E., & Whitcomb, D. (2004). Prosecution in the 21st century: Goals, objectives, and performance measures. Alexandria, VA: American Prosecutors Research Institute.

Hatry, H. P. (2007). Performance measurement: Getting results. Washington, DC: Urban Institute Press.

Hoyle, B. (2011). Dashboards help lift the ‘fog of crime.’ Retrieved from http://www.theomegagroup.com/press/articles/dashboards_help_lift_the_fog_of_crime.pdf

National Center for State Courts. CourTools. Retrieved from http://www.courtools.org/

National Research Council. (2003). Measurement problems in criminal justice research. Washington, DC: National Academies Press.

Pennsylvania Commission on Crime and Delinquency: Office of Criminal Justice Improvements. Criminal justice performance measures literature review calendar years: 2000 to 2010. Retrieved from http://www.pccd.pa.gov/Pages/Default.aspx#.VrC7bmYo7cs

Rossman, S. B., & Winterfield, L. (2009). Measuring the impact of reentry efforts. Retrieved from http://cepp.com/wp-content/uploads/2015/12/Measuring-the-Impact-of-Reenty-Efforts.pdf

Appendix 1

Performance Indicator Worksheet

Appendix 2

Data Collection Plan Worksheet

Appendix 3

Sample Glossary of Criminal Justice Terms
