Regularly collecting and analyzing information about services can yield important insights for community mediation centers. This information is valuable internally to inform the center about how to improve existing services and design new services. It is also valuable externally to make the case to stakeholders that they should support and use community mediation.
Collection and analysis of information can take the form of ongoing tracking of services, or it can entail a more in-depth, time-limited evaluation. Tracking mediation services helps centers to have an understanding of how their services are being used and facilitates the management of the volunteer mediators. It involves collecting and reviewing data such as numbers and types of cases referred and mediated, results of mediation, party assessment of mediators, and party satisfaction.
Evaluation digs a little deeper into the quality of a center’s services and what can be done to improve them. It generally answers more complex questions about the health and effect of program processes and about the client experience. Further, evaluation gives the center the data needed to convince funders, referral sources and other stakeholders of the value of the services provided. If there are issues with those services, evaluation can demonstrate that those issues are being competently addressed.
Tracking and Evaluating Services
Tracking is generally done in-house, although an outside evaluator can help to set up the data collection system needed to do so. Evaluation can be done either in-house or by an independent evaluator. In-house evaluations can be sufficient when the questions and needed analysis are not complex. An independent evaluator should be consulted for more complex evaluations or when funders require an independent analysis of services.
What to Track
Ongoing tracking allows community mediation centers to answer important questions about their services, such as:
- How many cases referred by the court ended up being mediated? How many referred cases used our services?
- What were the outcomes of mediations and other processes? Are those outcomes meeting the needs of our referrers?
- What do participants think about our services? How do they feel about their mediator? Are there any mediators who require intervention?
- Who are we serving? Do we need to reach out to specific communities to get more individuals from them to use our services?
- Are trainees getting what we want them to get out of our training? Do we need to address any deficiencies?
Answering these questions requires the following basic data to be tracked (a short sketch of how such data might be summarized follows the list):
- Number of referrals, including self-referrals. These should be categorized by the source and/or program involved (such as small claims court, prosecutor’s office, police, community or social service agency, etc.).
- Percentage of those referred who ultimately decide to use the center’s services. This is particularly important for those who call in to get more information about mediation or another process before deciding whether to move forward, as a low percentage may indicate a need to change how mediation is presented. (See “Overcoming Barriers to Mediation in Intake Calls to Services: Research-Based Strategies for Mediators” for more on this.) This, too, should be tracked by referral source and/or program.
- Number of mediations and other processes conducted, tracked by program.
- Outcomes of mediation or other processes, tracked by program. Outcomes include whether the process ended with an agreement, and may include a juvenile’s completion of an agreement for restorative justice programs.
- Participant experience. Surveying participants after an ADR process is about more than satisfaction. It also ensures that participants are being provided the benefits of the process and can help monitor mediator competence and safety. These surveys are difficult to do well. See the RSI/ABA Model Surveys for more information on how to use surveys effectively.
- Who the center is serving. Funders often want to know what percentage of clients are members of minority groups and/or are low-income.
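As a rough illustration of how this basic data can be summarized once it is in a spreadsheet or exported to a file, the sketch below uses Python with the pandas library. The file name and column names (referral_source, program, mediated, outcome) are placeholders, not a prescribed format; a center would substitute whatever fields its own tracking system uses.

```python
# Minimal sketch, assuming the center's case log is exported as a CSV with
# hypothetical columns: referral_source, program, mediated (True/False), outcome.
import pandas as pd

cases = pd.read_csv("case_log.csv")  # hypothetical export of the center's case log

# Number of referrals by source and program
referrals = cases.groupby(["referral_source", "program"]).size().rename("referrals")

# Percentage of referred cases that went on to mediation, by referral source
proceed_rate = (
    cases.groupby("referral_source")["mediated"]
    .mean()          # True/False values average to a proportion
    .mul(100)
    .round(1)
    .rename("pct_mediated")
)

# Outcomes of mediated cases, by program (e.g., agreement reached or not)
outcomes = (
    cases[cases["mediated"].astype(bool)]
    .groupby(["program", "outcome"])
    .size()
    .unstack(fill_value=0)
)

print(referrals, proceed_rate, outcomes, sep="\n\n")
```

Even a simple summary like this, run monthly or quarterly, can answer most of the tracking questions listed above and can be shared with referral sources and funders.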
For training and coaching, tracked data includes:
- Knowledge and/or skills gains made during training. For example, did participants believe they learned more about overcoming impasse? If the training includes simulations observed by trained coaches, the coaches can assess whether the trainees have gained the desired skills. If it does not, post-training surveys can be helpful, giving the participants the opportunity to indicate how much they learned.
- Participant assessment of the training and trainers.
What to Evaluate Over a Limited Time
The above items should always be collected. The following are factors that can be useful for improving services and managing the center. They can answer questions such as: Do we need to change how we conduct intake? Is the court referring all appropriate cases to the center? Are our mediators following our protocols? Do our agreements include terms that address underlying needs and interests? Are those agreements sustainable? Do our costs for a program warrant continuing the service? A brief sketch of how two of these measures might be calculated follows the list.
- How well center processes are working for clients, stakeholders and staff. These include the court’s referral procedures, intake, reasons referred cases aren’t mediated, pre-mediation interviews, reporting to the court or other agency, mediator debriefing, etc.
- The percentage of possible cases/disputes that are referred to the center, such as the percentage of eligible cases referred by the court.
- Reasons mediation or other processes don’t end with a positive outcome, such as agreement or positive participant experience.
- Agreement terms, particularly to examine the creativity of agreements reached in mediation.
- Percentage of cases that have further disputes/are re-litigated, including cases that return to court for violation of agreement terms, further calls to the police in neighbor disputes, etc.
- Costs and time spent per case.
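To illustrate, two of the measures above, cost and time per case and the percentage of cases with further disputes, might be calculated from tracked data along these lines. This is a sketch only; the hourly rate, file name and column names (staff_hours, direct_cost, status, further_dispute) are assumptions standing in for the center's own records.

```python
# Illustrative sketch only: file name, column names and the hourly rate are
# assumptions, not a standard format for community mediation data.
import pandas as pd

HOURLY_RATE = 30.0  # assumed fully loaded staff cost per hour; use the center's real figure

cases = pd.read_csv("evaluation_sample.csv")  # hypothetical per-case data pulled for the evaluation

# Average staff time and cost per case (direct costs plus staff time valued at the hourly rate)
avg_hours = cases["staff_hours"].mean()
avg_cost = (cases["direct_cost"] + cases["staff_hours"] * HOURLY_RATE).mean()

# Percentage of closed cases with a known further dispute (back to court, new police call, etc.)
closed = cases[cases["status"] == "closed"]
re_dispute_pct = 100 * closed["further_dispute"].mean()  # assumes a True/False column

print(f"Average staff hours per case: {avg_hours:.1f}")
print(f"Average cost per case: ${avg_cost:,.2f}")
print(f"Closed cases with further disputes: {re_dispute_pct:.1f}%")
```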
Overcoming Obstacles to Evaluation
The three main obstacles community mediation centers face in conducting quality evaluations are often the same obstacles these centers face more generally: lack of time, money and specialized expertise. The following tips can help centers collect reliable information and analyze it despite those constraints.
Ways to Keep Costs and Time Down
Start with Existing Systems
The first step any center should take is to assess what is already being collected, then figure out what else needs to be added to the list. The next step is to determine what the current data collection system, whether a spreadsheet, a database or a full-blown case management system, can track and how much can be automated through that system, for example by creating pull-down lists in Excel.
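As one example of that kind of automation, the sketch below uses the openpyxl Python library to add a pull-down list to an Excel tracking sheet so that outcomes are always entered from a fixed set of values. The workbook layout and outcome labels are illustrative assumptions; the same result can be achieved by hand through Excel's Data Validation menu.

```python
# Sketch: adding a pull-down (data validation) list to a tracking spreadsheet with openpyxl.
# The workbook name, sheet layout and outcome values are illustrative only.
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = Workbook()
ws = wb.active
ws.append(["Case ID", "Referral Source", "Outcome"])  # header row

# Limit the Outcome column to a fixed list so entries stay consistent and easy to count
outcome_list = DataValidation(
    type="list",
    formula1='"Agreement,No agreement,Not mediated,Withdrawn"',
    allow_blank=True,
)
ws.add_data_validation(outcome_list)
outcome_list.add("C2:C1000")  # apply to the Outcome column for the first ~1000 rows

wb.save("case_tracking.xlsx")
```

Consistent, pre-defined values are what make later counting and reporting straightforward, whether the list is created this way or directly in Excel.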
Prioritize
It’s easy to go down a rabbit hole and want to collect every available piece of data or answer every question you have. This will, at best, lead the center to put off improving data collection and, at worst, lead it to throw in the towel. It’s better to prioritize the information that is most important. Is there a sense that a particular process isn’t working well? That mediators aren’t properly obtaining consent? Start with data that will answer those questions. Does a funder need to know whether agreements are being sustained? Focus on that question.
Don't Reinvent the Wheel
Centers don’t need to start from scratch when developing data collection tools. Model surveys and samples of other centers’ tools are available online. These not only provide templates to work from, but may also have been tested to be sure they work.
Use Technology
For those whose data collection system can’t do what they need it to do, there are low-cost or no-cost options. The more automated the system, the better. Google Forms can be used by mediators to enter information about their mediations, such as the outcome, who participated and the time it took; however, care should be taken to ensure that the data remains secure. Google Forms or a low-cost online survey application can also be used for post-mediation participant surveys. If online surveys aren’t possible, optical mark recognition (OMR) software allows surveys to be completed on paper and then read automatically into a database via a scanner.
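For centers that collect survey responses through Google Forms or a similar tool, the responses (for Google Forms, via its linked Google Sheet) can usually be downloaded as a CSV and summarized with a short script. The sketch below assumes hypothetical question columns (mediator, felt_heard, satisfaction) rated on a 1-to-5 scale; actual column names would match the center's own survey.

```python
# Sketch: summarizing post-mediation survey responses exported as a CSV.
# Column names and the 1-5 rating scale are assumptions for illustration.
import pandas as pd

responses = pd.read_csv("post_mediation_survey.csv")  # exported survey responses

# Overall participant experience, assuming 1-5 ratings for each question
print(responses[["felt_heard", "satisfaction"]].mean().round(2))

# Average rating and response count per mediator, flagging anyone who may need follow-up
by_mediator = responses.groupby("mediator")["satisfaction"].agg(["count", "mean"]).round(2)
print(by_mediator[by_mediator["mean"] < 3.5])  # the 3.5 threshold is an arbitrary example
```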
Look to Your Volunteer Mediators
Center volunteers often have hidden talents. Centers routinely uncover them by sending a mass email to their volunteers; they should do the same when they need assistance with data collection or evaluation.
Get an Intern to Help
A local college or law school can identify an intern to help with tasks such as:
- Making follow-up calls to determine if agreements have been completed
- Turning your paper and pencil surveys into online surveys
- Figuring out how to run reports from SurveyMonkey or the ODR software
- Entering data
Limit Consultant Costs
Evaluators often provide unbundled services, such as adapting surveys, designing the evaluation system or analyzing data already collected. They also provide capacity-building or coaching services that can help centers conduct evaluations internally.
Some IT professionals provide pro bono or reduced-cost services; they are often available locally and can be found online.
Look to Other Available Resources
Local resources, such as the court or another community mediation center, can provide expertise. The court may have ideas and/or IT personnel who can help. Another center may have already come up with solutions to address data collection and analysis. National resources can help as well, such as NAFCM’s Clearinghouse (available only to NAFCM members) and RSI’s Court ADR Resource Center.
Seek Special Funding
A regular funder might make an additional special gift to enable the hiring of an evaluator. There may be funders who are especially interested in making capacity-building grants, which would allow for the hiring of an evaluator to help design systems and train staff to properly analyze data and report on findings.
A Note About Research
Researchers and academics often look at broader questions about ADR services in general. Their focus is on what in general leads to better outcomes, such as whether caucusing before mediation is more effective than not caucusing, whether a particular screening tool is better at identifying intimate partner violence than another, or when the best time is for courts to refer cases. The primary focus of research is on generating new knowledge, not in determining whether a particular program is achieving its goals or needs to address issues with implementation in order to be more effective.
Community mediation centers can use research to determine the most effective design for a program or what to train mediators to do. They can also use research to convince others of the effectiveness of current or proposed services. But generally, centers will not undertake research on their own. They can, however, participate in research or partner with researchers who need access to cases and information.