AI GOVERNANCE AND ASSURANCE 

AI can accelerate insight, but the risks scale with it.

Organisations across private capital, philanthropy, international development and humanitarian response are deploying AI faster than their governance frameworks can keep up. We help you close that gap – with practical controls, documented risk management, and an evidence trail that stands up to scrutiny.

PRIMARY AUDIENCE

High-accountability AI adopters:

IMPACT INVESTORS & DEVELOPMENT FINANCE

Using AI to assess portfolio performance, ESG outcomes, or development impact — where LP accountability and regulatory exposure mean governance failures carry real consequences.

FOUNDATIONS & GRANT-MAKERS

Evaluating AI-funded programmes or governing AI use in your own research and evaluation workflows — and needing independent assurance that findings are defensible to boards and donors.

FAMILY OFFICES & PRIVATE WEALTH

Deploying AI in investment research, client reporting, or back-office workflows — with sensitive data, high reputational stakes, and no internal governance capacity to match the pace of adoption.

OUR SIX-STAGE PROCESS

AI you can open up and show anyone

We help organisations deploy AI with clear safeguards — defined responsibilities, documented risk controls, and an evidence trail you can defend. We do this through data protection, clear guardrails, and documentation aligned with the EU AI Act, NIST AI Risk Management Framework and GDPR.

1

Scope & risk triage

Clarify the AI system, context of use, stakeholders, and what harm would look like in practice.


Scope note · risk hypotheses · priority use-case list

2

Map the system & data

Document end-to-end: data sources, models, prompts, vendors, integrations, and human review points.


System map · data flow diagram · RACI draft

3

Define controls & guardrails

Translate risks into concrete controls: policies, process checks, human oversight, access controls, logging.


Controls checklist · governance operating model

4

Test & stress-test

Test against realistic scenarios: bias and fairness concerns, hallucination and error modes, privacy exposure, misuse risk, failure handling.


Test plan · results summary · remediation backlog

5

Document for defensibility

Produce the evidence trail that makes responsible use auditable and explainable, both internally and externally.


Assurance memo · compliance mapping · decision log

6

Embed & improve

Support rollout and continuous assurance: training, KPI monitoring, periodic reviews, supplier management.


Training pack · monitoring dashboard · review cadence

+

Optional: independent audit mode

Where you need clear separation between implementation and oversight, we deliver this as an independent review with documented findings and recommendations. We make no changes to your systems.

FREE RESOURCES

Start informed. Everything you need before your first conversation with us.

BEFORE YOU USE AI

Before you run evaluation data through an LLM, know your risk level and what safeguards to put in place. This free checklist covers the full evaluation workflow: transcription, translation, analysis and drafting.

AI IN RESEARCH & EVALUATION:
124 PRACTITIONERS ON RISK, USE AND GOVERNANCE

In January 2026, we surveyed 124 M&E practitioners. Most use AI. Most are unaware of the frameworks that govern it. Download the full results.

Ready to make your AI use defensible?

Book a 15-minute call to see how we can strengthen your AI governance and reduce risk in real-world use.