Modern statistics & ML
Causal inference, hierarchical models, time-series, and supervised/unsupervised learning — selected for the question, not for the headline.
How we frame, run, and document a piece of work — from first conversation to delivery and the audit log that follows it.
Standard across consulting, research, and AI engagements. Stages 2 and 3 are the ones most consultancies skip; we don't.
| Stage | What we do | What you receive |
|---|---|---|
| 01 · FRAME | Translate the question into a researchable problem. Surface assumptions, scope, and stakeholders. | Brief + decision-mapping memo |
| 02 · DESIGN | Pick the methods — quantitative, qualitative, or mixed — and pre-register the analysis. | Methodology memo + analysis plan |
| 03 · COLLECT | Gather data ethically: surveys, administrative records, scraped sources, partner data. | Versioned datasets + provenance log |
| 04 · ANALYZE | Run the analysis. Stress-test alternative specifications. Bound results by uncertainty. | Reproducible code + model artefacts |
| 05 · INTERPRET | Translate findings into scenarios and decisions, separating evidence from advocacy. | Insight report + scenario tree |
| 06 · DELIVER | Hand off the work and transfer capacity to your operators. We document what we did, not just what we found. | Documented system + audit trail |
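Stage 04 in practice: here is a minimal sketch of what "stress-test alternative specifications, bound results by uncertainty" can look like, using synthetic data and a percentile bootstrap. The variable names (`x` for a treatment, `z` for a confounder) and the two specifications are illustrative, not taken from any particular engagement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: outcome y depends on treatment x and a confounder z.
n = 500
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
y = 2.0 * x + 1.5 * z + rng.normal(size=n)

def fit_effect(X, y):
    """OLS coefficient on the first column (the treatment)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1]

def bootstrap_ci(X, y, n_boot=1000, alpha=0.05):
    """Percentile bootstrap interval for the treatment coefficient."""
    idx = rng.integers(0, len(y), size=(n_boot, len(y)))
    est = np.array([fit_effect(X[i], y[i]) for i in idx])
    return np.quantile(est, [alpha / 2, 1 - alpha / 2])

# Specification 1 ("naive") omits the confounder; specification 2 adjusts for it.
# Reporting both, with intervals, is the point: the result is bounded, not asserted.
for name, X in [("naive", x[:, None]), ("adjusted", np.column_stack([x, z]))]:
    lo, hi = bootstrap_ci(X, y)
    print(f"{name}: effect = {fit_effect(X, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Here the naive specification overstates the effect because it absorbs the confounder; the adjusted one recovers it. The deliverable is the comparison itself, with uncertainty attached, not a single headline number.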
Borrowed from academic and applied research traditions. Not optional.
Notebooks, scripts, and configs versioned in source control. A second analyst can re-run any deliverable.
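One way to make "a second analyst can re-run any deliverable" checkable: record a checksum for every versioned input in the provenance log, and verify it before re-running. This is a sketch; the function names (`checksum`, `verify_inputs`) and the manifest shape are illustrative assumptions, not a fixed tooling choice.

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: str) -> str:
    """SHA-256 of a versioned dataset, as recorded in the provenance log."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_inputs(manifest: dict) -> None:
    """Fail loudly if any input differs from what the log says was analyzed."""
    for path, expected in manifest.items():
        actual = checksum(path)
        if actual != expected:
            raise RuntimeError(f"{path}: expected {expected}, got {actual}")

# Illustrative run: write a dataset, log its checksum, verify before re-analysis.
p = Path(tempfile.mkdtemp()) / "survey_wave1.csv"
p.write_text("id,response\n1,4\n")
manifest = {str(p): checksum(str(p))}
verify_inputs(manifest)  # passes silently when inputs match the log
```

If the file is later edited, `verify_inputs` raises instead of silently reproducing a different result, which is what makes the audit trail worth keeping.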
Consent, anonymization, and minimal-data principles, reviewed against the regulations that apply in each jurisdiction.
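A minimal sketch of the anonymization principle, assuming keyed pseudonymization is acceptable for the use case: direct identifiers are replaced with a keyed hash before analysis, and the key never travels with the dataset. The function name and record fields are hypothetical.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, key: bytes) -> str:
    """Keyed hash (HMAC-SHA256) so raw identifiers never enter the analysis
    dataset. The key stays with the data controller; analysts see only the
    digest, and without the key the mapping cannot be reconstructed."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Minimal-data in action: keep only the fields the analysis needs,
# and swap the direct identifier for its pseudonym.
record = {"email": "jane@example.org", "age_band": "30-39", "response": 4}
safe = {**record, "email": pseudonymize(record["email"], key=b"rotate-me")}
```

Using an HMAC rather than a bare hash matters: the same identifier maps to the same pseudonym (so records can still be linked), but an outsider cannot confirm a guessed identity without the key.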
Browse case studies, or send us a brief and we’ll scope it the same way.