Outcomes Taxonomy for Justice Support Networks (Cross-Org Results Funders Trust)

In the demanding world of the criminal justice system, a quarterly report is due, the intake queue is exploding, and someone asks the question that always lands hard: “So… how many people did we actually help, and what changed for them?”

In a justice support network, every partner has a real story. The problem is the stories don’t add up. One program reports “cases closed,” another reports “referrals made,” another reports “forms completed.” Each is a win for access to justice, but the numbers don’t match, and funders can’t compare results across the network.

That’s where outcomes taxonomy for justice support networks comes in. In plain language, an outcomes taxonomy is a shared framework that serves as a dictionary for results: what outcomes mean, what counts as proof, and how partners report them without extra staff burden. Example cross-org outcome: a client avoids eviction after coordinated legal help, court navigation, and a community referral. This post covers what funders trust, how to build a cross-org taxonomy, and how to roll it out with privacy in mind.

Key takeaways: outcomes taxonomy for justice support networks

  • Good looks like shared definitions that every partner, including those offering peer support services, can apply the same way, even with different tools.
  • Evidence rules matter (what counts as “verified,” what’s self-report, what’s excluded).
  • Comparability beats perfection; funders reward steady, consistent reporting over time.
  • Avoid vanity metrics that feel busy but don’t show change for justice-involved individuals.
  • Don’t over-collect; collect once, use many times, and keep sensitive data minimal.
  • Set governance early, prioritizing equity and fairness (an owner, a review rhythm, and a change log).
  • Next step: pick 6 to 10 network-level outcomes, pilot with 2 to 3 partners, then expand.

What funders actually trust in cross-org outcomes reporting

Funders know justice work is complex, especially within the criminal justice system. They don’t expect a single number to explain the full mission, which can span everything from eviction defense to coordinated help for opioid use disorder. What they do expect is a reporting system that doesn’t change its meaning every time a new partner joins.

The trust problem usually comes from three places:

First, different definitions. “Resolved” might mean “advice given” at one hotline, “negotiated agreement” at legal aid, and “court order issued” at a clinic.

Second, different workflows and tools. A navigator program might track outcomes in a spreadsheet, a court self-help center might use a ticketing system, legal aid might use case management software, and community-based support might rely on custom apps. The same client journey shows up as four separate stories.

Third, double counting. If two partners both count “eviction prevented,” funders start wondering whether the network is reporting progress, or just counting handoffs.

What tends to earn trust:

  • Consistency over time: the definition stays stable for a full reporting year.
  • Comparability across partners: the same outcome means the same thing across the network.
  • Clear attribution rules: who can claim credit, and when.
  • Anti-duplication controls: evidence that network-level results aren’t inflated by repeat counting.
  • Lived experience narratives: qualitative data that complements the taxonomy.

It helps to borrow the spirit of tiered measurement used in broader justice data efforts, where a small “Tier 1” set is feasible for most partners and “Tier 2” adds detail like treatment outcomes for those who can support it. The Justice Counts metrics approach is a useful reference point for this style of disciplined simplicity.

The difference between outputs, outcomes, and impact (with justice examples)

Think of a client journey like a relay race. Outputs are the handoffs, outcomes are the position gained, impact is where you finish.

Outputs are what you did. Examples: forms completed, calls answered, referrals sent.

Outcomes are what changed for the client in the near term. Examples: hearing attended, payment plan agreed, protection order filed and accepted.

Impact is the lasting change, often measured later. Examples: housing stabilized for 6 months, benefits maintained, repeat harm reduced.

Funders often accept outputs as operational proof. But outcomes drive renewals and scaling decisions because they show real change, not just activity.

What “cross-org” means in practice: one client journey, many handoffs

A typical path might start with a hotline call, move to court self-help for forms, then to legal aid for representation, with a community partner handling emergency funds and linkage to services for opioid use disorder. It’s one client story, but many systems touch it.

Cross-org measurement fails when each partner counts the same step differently. Example: a navigator counts “eviction avoided” when the hearing is continued, while legal aid counts it only after a signed stipulation. Both are defensible, but together they create confusion and inflate totals.

A cross-org taxonomy fixes this by using shared outcome statements with boundaries: who counts it, when it counts, and what proof counts. Clear attribution rules cover who can claim credit for linkage to services, and when.

How outcomes taxonomy builds shared definitions that hold up to audits

The goal isn’t to turn your network into a research project. The goal is to make your reporting defensible, repeatable, and light enough that staff will actually do it.

A practical consulting approach usually looks like this:

Discovery: Interview a few partners, map the client journey (including reentry touchpoints), and review the reports that keep causing fire drills. This is often tied to broader systems realities like fragmented tools and shadow spreadsheets (common across the sector, as described in technology challenges faced by legal nonprofits).

Draft taxonomy: Write outcome statements in plain language for outcome evaluation, with minimum required data and clear evidence rules. Start from existing task frameworks (like Legal Help Task Taxonomy concepts and task-based framing) so you aren’t inventing language from scratch.

Validation: Bring partners together to pressure-test definitions against real scenarios and real workflow constraints. This is where ambiguity dies.

Pilot: Run one reporting cycle with a small set of partners, then adjust.

Governance: Assign decision rights and set a rhythm so the taxonomy stays stable year to year.

This fits well inside a broader, staged plan like CTO Input’s technology roadmap process, because outcomes definitions only stick when workflows and data collection match reality.

Design the taxonomy: outcome statements, levels, and allowed evidence

A usable taxonomy includes:

  • Outcome name and plain definition
  • Time window (for example, “within 90 days of service”)
  • Eligibility rules (who counts, which case types count)
  • Evidence types (document, system event, verified self-report)
  • Exclusions (what does not count)

Tier it:

Level 1 (network outcomes): a small set every partner can report.
Level 2 (optional detail): extra outcomes for programs with capacity, such as addressing health-related needs or securing transitional housing.

Example outcome statements for civil justice support networks:

  • Eviction prevented (verified agreement, dismissal, or continued tenancy at 90 days)
  • Protection order obtained (temporary or final order entered)
  • Public benefits secured (approval notice or confirmed enrollment)
  • Debt issue resolved (settlement, dismissal, or verified payment plan in place)
  • Safe contact plan completed (documented plan and resource connection)
  • Immigration filing submitted (receipt notice or confirmed submission)
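Putting these pieces together, here is a minimal Python sketch of how one Level 1 outcome definition could be encoded so every partner applies it the same way. The class name, field names, and version label are illustrative assumptions, not a prescribed schema:

from dataclasses import dataclass, field

# Minimal sketch of one taxonomy entry as a plain data structure.
# Field names and values are illustrative, not a required schema.
@dataclass
class OutcomeDefinition:
    outcome_id: str             # stable ID used in crosswalks and reports
    name: str                   # plain-language outcome name
    definition: str             # what the outcome means
    tier: int                   # 1 = network outcome, 2 = optional detail
    time_window_days: int       # for example, counted within 90 days of service
    eligibility: list[str]      # who counts, which case types count
    evidence_types: list[str]   # document, system event, verified self-report
    exclusions: list[str] = field(default_factory=list)  # what does not count
    version: str = "v1"         # versioned so year-over-year reports stay comparable

eviction_prevented = OutcomeDefinition(
    outcome_id="EVIC_PREVENTED_L1",
    name="Eviction prevented",
    definition="Verified agreement, dismissal, or continued tenancy at 90 days",
    tier=1,
    time_window_days=90,
    eligibility=["eviction and other housing cases"],
    evidence_types=["court docket entry", "signed stipulation", "verified tenancy check"],
    exclusions=["hearing continued with no resolution"],
)

The point isn’t the code; it’s that every boundary the definition needs lives in one documented place instead of in each partner’s head.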

For a broader view of how shared outcome language can work across nonprofits, the Urban Institute’s work, summarized at BetterEvaluation, is a strong reference: The nonprofit taxonomy of outcomes.

Governance that keeps definitions stable (and still lets you learn)

Keep governance light, but real:

  • Outcomes owner: one accountable lead who manages versioning.
  • Quarterly review: approve small fixes, not constant rewrites.
  • Change log: what changed, when, and why.

Decision rights should be explicit: who can propose a definition change, who approves it, and when it goes live. Versioning matters because funders want year-over-year comparability, not shifting goalposts.
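As a concrete illustration, the change log can be as simple as a list of structured entries plus one rule about who approves them. This is a hedged sketch; the roles and fields are placeholders, not a required governance model:

# Sketch of a change-log entry and a decision-rights check.
# Roles and field names are illustrative placeholders.
CHANGE_LOG = [
    {
        "version": "next-reporting-year",
        "outcome_id": "EVIC_PREVENTED_L1",
        "change": "Clarified that a continued hearing alone does not count",
        "reason": "Partners were counting continuances inconsistently",
        "proposed_by": "legal_aid",
        "approved_by": "outcomes_owner",
    },
]

def can_go_live(entry: dict) -> bool:
    """A change takes effect only once the outcomes owner has approved it."""
    return entry.get("approved_by") == "outcomes_owner"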

Implementation without extra drag: data mapping, privacy, and partner adoption

Across the network, implementation should feel like alignment, not a rebuild. The aim is “collect once, use many times,” so staff aren’t re-entering the same facts in three places.

A realistic pilot timeline is 6 to 10 weeks: weeks 1 to 2 discovery, week 3 draft, week 4 validation, weeks 5 to 8 pilot setup and training, weeks 9 to 10 first reporting run and fixes.

Success after the first cycle looks like this: partners submit on time, definitions don’t spark debate, and the network can explain results without backtracking, which is what enables data-driven decisions.

Crosswalks, not rebuilds: aligning different case systems to one outcomes language

Consulting work avoids ripping out systems. Instead, you build a crosswalk from each partner’s fields to shared outcome IDs, so data lines up across diverse tools without forcing anyone onto new software.

A simple example (plain text):

Partner field: “Disposition code = Dismissed”; maps to Outcome ID: “EVIC_PREVENTED_L1”; evidence: “court docket entry.”
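In code, a crosswalk can be little more than a lookup table from each partner’s native fields to the shared outcome IDs. A minimal Python sketch; the partner names, field names, and disposition codes below are illustrative assumptions:

# Sketch of a crosswalk: (partner, field, value) -> (outcome_id, evidence type).
# Partner names, fields, and codes are illustrative, not real system values.
CROSSWALK = {
    ("legal_aid", "disposition_code", "Dismissed"): ("EVIC_PREVENTED_L1", "court docket entry"),
    ("legal_aid", "disposition_code", "Stipulation signed"): ("EVIC_PREVENTED_L1", "signed stipulation"),
    ("benefits_navigator", "status", "Approved"): ("BENEFITS_SECURED_L1", "approval notice"),
}

def map_record(partner: str, field_name: str, value: str):
    """Translate one partner's field/value pair into a shared outcome, if any."""
    return CROSSWALK.get((partner, field_name, value))  # None means no network outcome is claimed

print(map_record("legal_aid", "disposition_code", "Dismissed"))
# -> ('EVIC_PREVENTED_L1', 'court docket entry')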

These crosswalks accommodate the varied systems touching the client journey, from peer navigators to behavioral health supports. For duplicates, use a network-level referral or client identifier strategy that respects privacy. Often that means separating identity from outcomes wherever possible and only sharing what’s needed to prevent double counting; a sketch of one such approach follows.
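One common pattern is to derive a hashed network token from a few identity fields and share only that token alongside outcome records. This is a simplified sketch, assuming a shared salt managed by the network; real matching rules, salt storage, and consent handling need more care:

import hashlib

# Sketch of a privacy-aware dedup key: outcomes travel with a hashed token
# instead of a name, so partners can spot double counting without sharing identities.
NETWORK_SALT = "rotate-and-store-securely"  # placeholder; never hard-code a real salt

def network_token(first_name: str, last_name: str, dob: str) -> str:
    """Derive a stable, non-reversible token from minimal identity fields."""
    raw = f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}|{NETWORK_SALT}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

# Two partners reporting the same (illustrative) client produce the same token,
# so "eviction prevented" is counted once at the network level.
assert network_token("Ana", "Reyes", "1990-04-02") == network_token("ana ", "REYES", "1990-04-02")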

If you want this kind of work packaged as a service, it often sits inside offerings like network alignment and reporting resets (outlined in CTO Input’s products and services).

Protecting clients while reporting: simple privacy rules funders respect

Keep privacy non-negotiable, and practical:

  • Collect only what you need to support the outcome definition.
  • Limit free-text notes in shared reporting fields.
  • Define role-based access, including for partner staff.
  • Set retention windows, then follow them.
  • Use clear consent language when sharing across partners.
  • Separate identity from outcomes when you can.
  • Maintain a written incident response plan for partners and vendors.

Stop doing this: don’t build a shared spreadsheet that contains names plus sensitive outcomes. It’s fast in the moment and expensive later.
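What the shared layer can hold instead is a narrow outcome record keyed by the hashed token, with identity and case narratives staying in each partner’s own system. A hedged sketch with illustrative field names and sample values:

from dataclasses import dataclass
from datetime import date

# Sketch of a shared reporting row: no names, no free-text notes.
# Identity stays with the originating partner; only the token travels.
@dataclass
class SharedOutcomeRow:
    network_token: str   # hashed identifier (see the dedup sketch above), never a name
    outcome_id: str      # for example, "EVIC_PREVENTED_L1"
    partner: str         # which partner is reporting the contribution
    achieved_on: date    # when the outcome was met
    evidence_type: str   # document, system event, or verified self-report

row = SharedOutcomeRow(
    network_token="a3f9c1e2b4d6f801",
    outcome_id="EVIC_PREVENTED_L1",
    partner="legal_aid",
    achieved_on=date(2025, 3, 14),
    evidence_type="court docket entry",
)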

FAQs about outcomes taxonomy for justice support networks

How many outcomes should we start with?
Start with 6 to 10 Level 1 outcomes. More than that slows adoption.

How do we avoid double counting across partners?
Set attribution rules and use a network identifier strategy, even if it’s simple.

What if partners can’t collect the same data?
Partners such as peer support programs or peer navigators may have different data capacity. Use minimum required fields for Level 1, then allow optional Level 2 detail for partners who can collect more.

How do we balance qualitative and quantitative data, including insights from lived experience?
The taxonomy supports both: quantitative Level 1 metrics form the core, while Level 2 fields capture qualitative stories from lived experience mapped to outcomes.

How do we track long-term impacts such as recidivism reduction?
Link Level 2 indicators to Level 1 outcomes with scheduled follow-ups. That keeps long-term measures, such as recidivism reduction, consistent across the network.

How do we show credit without unfair attribution?
Separate “network outcome achieved” from “partner contribution,” then report both.
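As a rough sketch of what that looks like with the kind of rows described earlier (tokens and partner names here are illustrative), network outcomes are counted once per client while partner contributions are tallied separately:

from collections import defaultdict

# Illustrative rows: (network_token, outcome_id, partner)
rows = [
    ("a3f9c1e2b4d6f801", "EVIC_PREVENTED_L1", "legal_aid"),
    ("a3f9c1e2b4d6f801", "EVIC_PREVENTED_L1", "navigator_program"),  # same client, two partners
    ("77b2d0aa91c3e455", "EVIC_PREVENTED_L1", "legal_aid"),
]

# Network outcome achieved: one count per client per outcome.
network_outcomes = len({(token, outcome) for token, outcome, _ in rows})  # 2

# Partner contribution: each partner's touch still gets credit.
contributions = defaultdict(int)
for _, outcome, partner in rows:
    contributions[(outcome, partner)] += 1

print(network_outcomes)     # 2
print(dict(contributions))  # {('EVIC_PREVENTED_L1', 'legal_aid'): 2, ('EVIC_PREVENTED_L1', 'navigator_program'): 1}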

How long does a pilot take?
Most networks can pilot in 6 to 10 weeks, including a review of current practices, if decision rights are clear.

What if a funder asks for different metrics?
Map their request to your taxonomy, whether the ask is about housing stability, benefits, or treatment-linked outcomes. Don’t rewrite your definitions every time.

Do we need new software?
Not at first. Crosswalks and reporting routines usually get you most of the way.

Conclusion

Funders don’t trust fancy dashboards. They trust clear definitions, shared evidence rules, and steady governance that keeps reporting comparable from year to year, all in service of access to justice.

If your network’s numbers don’t reconcile, it’s not a failure of effort. It’s a failure of shared language. Start small: define a network-level outcomes set, pilot with a few partners, and protect clients by collecting less, not more.

If intake, handoffs, and reporting feel like a weekly scramble, book a clarity call: https://ctoinput.com/schedule-a-call. Which single chokepoint, if fixed, would unlock the most capacity and trust for your clients next quarter?
