
AI USE CASE

Drug Repurposing via Knowledge Graphs

Identify existing approved drugs for new therapeutic indications using ML and biomedical knowledge graphs.

Typical budget
€80K–€400K
Time to value
20 weeks
Effort
16–52 weeks
Monthly ongoing
€5K–€20K
Minimum data maturity
Advanced
Technical prerequisite
ML team
Industries
Healthcare
AI type
Knowledge graph ML

What it is

This use case applies knowledge graph reasoning and machine learning to map relationships between existing approved compounds, disease mechanisms, protein targets, and clinical evidence — surfacing non-obvious repurposing candidates. By reducing early-stage discovery cycles, pharma and biotech R&D teams can shorten time-to-candidate by 30–50% compared to de novo discovery. Repurposed drugs also carry reduced safety uncertainty, lowering late-stage attrition risk. Teams typically see a ranked shortlist of viable candidates within weeks of deploying the pipeline against curated biomedical databases.

Data you need

Curated biomedical knowledge bases (e.g. UniProt, DrugBank, OMIM), internal compound assay data, clinical trial outcomes, and published literature in structured or extractable form.
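Before any modeling, these heterogeneous sources have to be normalized into one triple store with per-edge provenance. A minimal sketch, assuming simplified record shapes (the identifiers below are illustrative and do not reflect actual DrugBank, UniProt, or OMIM schemas):

```python
# Hypothetical pre-extracted triples from two sources; real ingestion would
# parse the sources' own formats and resolve identifiers across vocabularies.
drugbank_like = [("DB00945", "targets", "P23219")]          # drug -> protein
omim_like = [("P23219", "associated_with", "D010003")]       # protein -> disease

# Merge into one edge list, keeping provenance on every edge.
graph = []
for source, triples in [("drugbank", drugbank_like), ("omim", omim_like)]:
    for head, rel, tail in triples:
        graph.append((head, rel, tail, source))

# Two-hop traversal: drug -> target -> disease yields a repurposing hypothesis.
hypotheses = [
    (drug, disease)
    for drug, r1, protein, _ in graph if r1 == "targets"
    for p2, r2, disease, _ in graph if r2 == "associated_with" and p2 == protein
]
print(hypotheses)
```

Provenance on each edge matters later: it lets the triage step weight hypotheses by source reliability and lets reviewers trace every candidate back to the evidence that produced it.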

Required systems

  • Data warehouse

Why it works

  • Pair the computational biology team with domain-expert medicinal chemists who can sanity-check graph-derived hypotheses.
  • Use well-maintained public knowledge bases (DrugBank, ChEMBL, OpenTargets) as the foundation before layering proprietary data.
  • Define a clear triage protocol specifying how many candidates proceed to experimental validation and at what evidence threshold.
  • Implement an iterative feedback loop where experimental results update the graph embeddings and model weights continuously.
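The triage protocol in the third bullet can be as simple as a threshold plus a capacity cap. A minimal sketch, with made-up scores and assumed policy values for the threshold and batch size:

```python
# Model-confidence scores per candidate (illustrative values).
scored = {"drug_a": 0.92, "drug_b": 0.71, "drug_c": 0.40}

EVIDENCE_THRESHOLD = 0.6  # minimum confidence to advance (assumed policy value)
VALIDATION_BATCH = 2      # wet-lab capacity per cycle (assumed)

# Candidates above the threshold advance, best-first, up to lab capacity.
shortlist = sorted(
    (drug for drug, score in scored.items() if score >= EVIDENCE_THRESHOLD),
    key=scored.get,
    reverse=True,
)[:VALIDATION_BATCH]
print(shortlist)  # ['drug_a', 'drug_b']
```

Writing the threshold and batch size down as explicit, agreed numbers is the point: it prevents the pipeline from emitting an ever-growing ranked list that nobody is committed to testing.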

How this goes wrong

  • Knowledge graph built from poorly curated or outdated biomedical sources, leading to spurious associations and low-quality candidates.
  • Insufficient in-house biology and ML expertise to validate and interpret model outputs, causing candidates to be dismissed or missed.
  • No wet-lab or clinical validation loop integrated into the pipeline, so computational predictions are never tested experimentally.
  • Scope creep into full de novo generative chemistry, inflating cost and delaying delivery of repurposing insights.

When NOT to do this

Do not pursue this use case if your organization lacks wet-lab capacity or CRO partnerships to validate computational predictions — without experimental follow-through, the output is an expensive ranked list that drives no decisions.


This use case is part of a larger Data & AI catalog built from 50+ enterprise transformation programs. Take the free diagnostic to see how it ranks against your specific context.