
Evidence Graphs Explained: How Intervention-Outcome Data Powers Health Intelligence

Unfair Team • March 8, 2026

The supplement industry has a data structure problem. Most supplement databases are flat: a list of compounds, each with a description, a dose range, and maybe a set of tags. This structure works for catalog browsing. It fails completely for the questions that health-tech products actually need to answer.

Questions like: "Which supplements have the strongest evidence for improving sleep latency?" Or: "If I am already taking magnesium for sleep, what other interventions target the same outcome with independent mechanisms?" Or: "How does the evidence for L-theanine compare across anxiety, sleep, and cognitive performance outcomes?"

These are graph questions. They require traversing relationships between entities, not just looking up properties of a single entity. The data structure that answers them is an evidence graph.

What is an evidence graph?

An evidence graph is a data model that represents the relationships between interventions (supplements, compounds, treatment approaches) and outcomes (health conditions, biomarkers, symptoms, performance metrics) as typed, graded edges.

The graph has three core entity types:

Interventions are the nodes on one side. An intervention is a specific treatment approach: a supplement at a given dose range, a dietary pattern, or a behavioral protocol. Each intervention has properties: its category, its mechanism of action, and its aggregate evidence score across all linked outcomes.

Health outcomes are the nodes on the other side. An outcome is the measurable result: sleep latency, anxiety severity, LDL cholesterol level, grip strength, cognitive reaction time. Each outcome has properties: its category (disease, biomarker, symptom, performance), its source type, and the number of interventions linked to it.

Evidence edges are the connections between them. Each edge represents a specific claim: "Intervention X has effect Y on Outcome Z, supported by evidence of quality Q." The edge carries metadata: the evidence grade, the direction of effect (increase, decrease, no significant change), the population studied, the study type, and the source citation.

This is fundamentally different from a flat supplement database. In a flat database, a supplement "has" properties. In an evidence graph, a supplement "has evidence for" outcomes, and that evidence has its own properties.
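The three entity types can be sketched as a minimal data model. The class and field names below are illustrative, not the Unfair Library's actual schema; the point is that the edge is a first-class record with its own properties.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intervention:
    id: str           # canonical identifier, e.g. "magnesium-glycinate"
    category: str     # e.g. "mineral"
    mechanism: str    # mechanism of action

@dataclass(frozen=True)
class Outcome:
    id: str           # canonical identifier, e.g. "sleep-latency"
    category: str     # "disease" | "biomarker" | "symptom" | "performance"

@dataclass(frozen=True)
class EvidenceEdge:
    intervention_id: str
    outcome_id: str
    grade: str        # "A" through "D"
    direction: str    # "increase" | "decrease" | "no_change" | "mixed"
    population: str   # e.g. "adults 18-65, otherwise healthy"
    study_type: str   # "RCT" | "meta-analysis" | "observational" | "case-series"
    citation: str     # PubMed ID or DOI

# An edge is a claim about evidence, carrying its own metadata:
edge = EvidenceEdge("magnesium-glycinate", "sleep-latency", "B", "decrease",
                    "adults 18-65", "RCT", "pmid:0000000")  # placeholder citation
```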

Why flat databases fail

Consider a flat database entry for magnesium glycinate:

Name: Magnesium Glycinate
Category: Mineral
Dose: 200-400 mg/day
Benefits: Sleep, muscle relaxation, anxiety
Evidence: B

This entry answers basic questions. But it cannot answer which of the three listed benefits the grade B actually applies to, what direction each effect runs, which populations were studied, or how the sleep evidence compares to the anxiety evidence. One grade is stretched across three distinct claims.

The flat database encodes conclusions. The evidence graph encodes the evidence behind those conclusions. This distinction matters enormously when you are building products that need to explain their reasoning.

Anatomy of an evidence edge

The evidence edge is the most important element in the graph. Each edge encodes a structured claim with the following properties:

Intervention ID. The specific supplement or treatment approach. This is not a free-text name — it is a canonical identifier that resolves to a specific entity with its own properties.

Outcome ID. The specific health outcome. Again, a canonical identifier, not a tag.

Grade. A standardized assessment of evidence quality, typically on an A-through-D scale, where A reflects consistent results from multiple high-quality trials or meta-analyses and D reflects preliminary or anecdotal support. The grade applies to one specific claim, not to the intervention as a whole.

Effect direction. What the evidence shows: increase, decrease, no significant change, or mixed. "Magnesium decreases sleep latency" is a different claim from "magnesium increases sleep quality," even though both involve the same intervention and the same general domain.

Population. The characteristics of the studied population: age range, sex, health status, existing conditions. Evidence for creatine's cognitive effects in elderly populations does not automatically generalize to young athletes.

Source metadata. The study type (RCT, meta-analysis, observational, case series), the citation (PubMed ID, DOI), and the publication date. This allows downstream applications to show users exactly where a claim comes from.

Graph queries that power products

The value of an evidence graph is in the queries it enables. Here are the four query patterns that most health-tech products need.

Pattern 1: Outcome-first discovery

Query: Given a health outcome, rank all interventions by evidence strength.

Example: "Sleep latency" → melatonin (A), L-theanine (B), magnesium glycinate (B), valerian (C), tart cherry (C)

This is the most common entry point for recommendation engines and discovery interfaces. The user starts with a goal ("I want to sleep better") and the system returns the strongest-evidence interventions for that goal. The graph makes this a single traversal: find the outcome node, follow all incoming edges, sort by grade.
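The traversal described above can be sketched with a toy in-memory edge list standing in for the real store. A convenient property of letter grades is that they sort correctly as plain strings.

```python
# Toy edge list: (intervention_id, outcome_id, grade). A real deployment
# would query a graph database or an indexed table, not a Python list.
EDGES = [
    ("l-theanine", "sleep-latency", "B"),
    ("magnesium-glycinate", "sleep-latency", "B"),
    ("melatonin", "sleep-latency", "A"),
    ("valerian", "sleep-latency", "C"),
    ("tart-cherry", "sleep-latency", "C"),
]

def rank_interventions(outcome_id: str, edges) -> list[tuple[str, str]]:
    """Follow all edges into the outcome node and sort by grade (A first)."""
    hits = [(i, g) for (i, o, g) in edges if o == outcome_id]
    return sorted(hits, key=lambda pair: pair[1])  # "A" < "B" < "C" < "D"

print(rank_interventions("sleep-latency", EDGES))
# [('melatonin', 'A'), ('l-theanine', 'B'), ('magnesium-glycinate', 'B'),
#  ('valerian', 'C'), ('tart-cherry', 'C')]
```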

Pattern 2: Intervention profiling

Query: Given an intervention, show all outcomes it has evidence for, grouped and graded.

Example: "Ashwagandha" → Anxiety reduction (B), cortisol reduction (B), sleep quality (C), muscle strength (C), testosterone (C)

This powers supplement detail pages. Instead of a paragraph describing what ashwagandha "does," the product page shows a structured evidence profile: which outcomes have strong evidence, which have moderate evidence, and which are preliminary. The user can make an informed decision based on the strength of evidence for their specific goal.
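A structured evidence profile is a single grouping pass over the intervention's outgoing edges. A minimal sketch, again over a toy edge list:

```python
from collections import defaultdict

# Toy edges: (intervention_id, outcome_id, grade).
EDGES = [
    ("ashwagandha", "anxiety-severity", "B"),
    ("ashwagandha", "cortisol-level", "B"),
    ("ashwagandha", "sleep-quality", "C"),
    ("ashwagandha", "muscle-strength", "C"),
]

def evidence_profile(intervention_id: str, edges) -> dict[str, list[str]]:
    """Group an intervention's linked outcomes by evidence grade."""
    profile: dict[str, list[str]] = defaultdict(list)
    for i, o, g in edges:
        if i == intervention_id:
            profile[g].append(o)
    return dict(sorted(profile.items()))  # strongest grade first

print(evidence_profile("ashwagandha", EDGES))
# {'B': ['anxiety-severity', 'cortisol-level'],
#  'C': ['sleep-quality', 'muscle-strength']}
```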

Pattern 3: Comparative analysis

Query: Given two interventions, compare their evidence profiles across shared outcomes.

Example: "Alpha-GPC vs Citicoline" → Cognitive performance: Alpha-GPC (B) / Citicoline (B). Neuroprotection: Alpha-GPC (C) / Citicoline (B). Power output: Alpha-GPC (B) / Citicoline (D).

This powers comparison tools and recommendation logic. When a user is choosing between two supplements, the graph provides a structured basis for comparison rather than a subjective editorial opinion.
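Comparison reduces to intersecting the two interventions' outcome sets and reading off the grade on each shared edge. A sketch, mirroring the Alpha-GPC vs citicoline example:

```python
def compare(a: str, b: str, edges) -> dict[str, tuple[str, str]]:
    """Compare two interventions' grades on the outcomes they share."""
    grade = {(i, o): g for i, o, g in edges}
    shared = {o for i, o in grade if i == a} & {o for i, o in grade if i == b}
    return {o: (grade[a, o], grade[b, o]) for o in sorted(shared)}

EDGES = [
    ("alpha-gpc", "cognitive-performance", "B"),
    ("citicoline", "cognitive-performance", "B"),
    ("alpha-gpc", "neuroprotection", "C"),
    ("citicoline", "neuroprotection", "B"),
    ("alpha-gpc", "power-output", "B"),
    ("citicoline", "power-output", "D"),
]

print(compare("alpha-gpc", "citicoline", EDGES))
# {'cognitive-performance': ('B', 'B'), 'neuroprotection': ('C', 'B'),
#  'power-output': ('B', 'D')}
```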

Pattern 4: Reverse evidence lookup

Query: Given an outcome, which interventions have evidence, and from the other direction, given an intervention, which outcomes does it share with another intervention?

This is the graph query that flat databases cannot support natively. It enables features like "People who took this supplement also benefited from..." recommendations, evidence-based cross-sell logic, and stack optimization tools that identify redundant or synergistic interventions.
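As a sketch, the two-hop traversal (intervention → its outcomes → other interventions on those outcomes) looks like this; the grade threshold and toy data are illustrative:

```python
# Toy edges: (intervention_id, outcome_id, grade).
EDGES = [
    ("magnesium-glycinate", "sleep-latency", "B"),
    ("magnesium-glycinate", "anxiety-severity", "B"),
    ("l-theanine", "sleep-latency", "B"),
    ("l-theanine", "anxiety-severity", "B"),
    ("melatonin", "sleep-latency", "A"),
    ("creatine", "muscle-strength", "A"),
]

def co_targeting(intervention_id: str, edges, max_grade: str = "C") -> list[str]:
    """Other interventions with evidence at or above max_grade on the
    same outcomes this intervention targets."""
    targets = {o for i, o, g in edges if i == intervention_id}
    return sorted({i for i, o, g in edges
                   if o in targets and i != intervention_id and g <= max_grade})

print(co_targeting("magnesium-glycinate", EDGES))
# ['l-theanine', 'melatonin']
```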

Building on evidence graphs

For product teams considering an evidence graph integration, the key architectural decisions are:

Query granularity. Do you need to query individual edges (for building custom scoring models) or aggregated intervention profiles (for display)? Most products need both, which means your data access layer should support edge-level queries and materialized views.

Freshness requirements. Evidence graphs are not static. New studies change grades. New interventions are added. New outcomes are linked. Your integration should account for dataset versioning and change detection.

Population filtering. If your product serves specific populations (athletes, elderly, pregnant women), you need to filter evidence edges by population metadata. An evidence statement validated in young healthy males may not apply to postmenopausal women.
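In edge-level terms, population filtering is a predicate applied before any ranking or aggregation. A minimal sketch with dict-shaped edges (field names and grades are illustrative):

```python
# Toy edges with population metadata attached.
EDGES = [
    {"intervention": "creatine", "outcome": "cognitive-reaction-time",
     "grade": "B", "population": "elderly"},
    {"intervention": "creatine", "outcome": "cognitive-reaction-time",
     "grade": "C", "population": "young-athletes"},
]

def edges_for_population(edges, population: str) -> list[dict]:
    """Keep only evidence validated in the population the product serves."""
    return [e for e in edges if e["population"] == population]

print(edges_for_population(EDGES, "elderly"))
# [{'intervention': 'creatine', 'outcome': 'cognitive-reaction-time',
#   'grade': 'B', 'population': 'elderly'}]
```

Filtering first, then ranking, means a population mismatch can never silently inflate a recommendation's grade.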

Explainability. The evidence graph's greatest product advantage is that it can explain its reasoning. "We recommend L-theanine for sleep because it has grade B evidence from 3 RCTs for reducing sleep latency." This explanation is not generated by an LLM — it is derived structurally from the graph. This makes it auditable, consistent, and defensible.
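The explanation in the example above can be derived mechanically from the matching edges. A sketch, assuming dict-shaped edges with grade and study-type fields:

```python
def explain(intervention: str, outcome: str, edges) -> str:
    """Derive a recommendation rationale structurally from the edges,
    with no language model involved."""
    hits = [e for e in edges
            if e["intervention"] == intervention and e["outcome"] == outcome]
    best_grade = min(e["grade"] for e in hits)           # "A" is strongest
    n_rcts = sum(1 for e in hits if e["study_type"] == "RCT")
    return (f"We recommend {intervention} for {outcome} because it has "
            f"grade {best_grade} evidence from {n_rcts} RCTs.")

EDGES = [
    {"intervention": "l-theanine", "outcome": "reducing sleep latency",
     "grade": "B", "study_type": "RCT"},
    {"intervention": "l-theanine", "outcome": "reducing sleep latency",
     "grade": "B", "study_type": "RCT"},
    {"intervention": "l-theanine", "outcome": "reducing sleep latency",
     "grade": "B", "study_type": "RCT"},
]

print(explain("l-theanine", "reducing sleep latency", EDGES))
```

Because every number and grade in the string is read from an edge, the same inputs always produce the same explanation, which is what makes it auditable.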

The evidence graph as competitive infrastructure

Supplement recommendation engines, clinical decision support tools, and health content platforms all need the same underlying capability: the ability to answer evidence questions about interventions and outcomes. The evidence graph is the data structure that makes those answers possible.

Flat databases served the previous generation of supplement products — simple catalogs with search and filter. The next generation requires traversal, comparison, and explanation, and those are graph capabilities.

The Unfair Library API provides a queryable evidence graph with 780+ health outcomes, 182 interventions, and 3,900+ graded evidence edges. Endpoints support outcome-first discovery, intervention profiling, reverse lookups, and edge-level queries with population and grade filters. Explore the API docs or contact us to integrate evidence graph data into your product.

Related

Evidence-First Supplement Prioritization

How to Build a Supplement Safety Layer Into Your Health App

Structured Supplement Data for E-Commerce: Beyond Marketing Copy