Trust Charter / Transparency report
Report the numbers, not just the intentions.
Every quarter, we publish a short report. It covers how the Trust Charter is doing in practice — who is approving what, how often AI drafts are edited before they ship, what models are running in production, and what incidents happened.
This page is the committed baseline. The first report will land in the next cycle.
What we will publish
Every quarter. Same metrics. No cherry-picking.
Named-approver coverage (target: 100%)
Share of shipped compliance artefacts (policies, evidence items, questionnaire answers, vendor change requests) that have a named human approver on record.
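For illustration only, a minimal sketch of how this share could be computed, assuming a hypothetical `Artifact` record with an `approver` field; it is not the platform's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    """Hypothetical shipped compliance artefact: a policy, evidence item,
    questionnaire answer, or vendor change request."""
    kind: str
    approver: Optional[str]  # named human approver on record, or None

def named_approver_coverage(shipped: list[Artifact]) -> float:
    """Share of shipped artefacts carrying a named human approver (target: 100%)."""
    if not shipped:
        return 1.0  # nothing shipped means nothing lacks an approver
    return sum(1 for a in shipped if a.approver) / len(shipped)
```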
AI-draft edit rate (reported as measured)
Share of AI-drafted artefacts that were edited by a human before approval. A directional signal against templated boilerplate.
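As a rough sketch, assuming each AI-drafted artefact keeps both its original draft text and its approved text (hypothetical field names), the edit rate is simply the share of records where the two differ.

```python
from typing import Optional

def ai_draft_edit_rate(ai_drafted: list[dict]) -> Optional[float]:
    """Share of AI-drafted artefacts edited by a human before approval.

    Assumes hypothetical "draft_text" and "approved_text" fields; any
    difference between them counts as a human edit. Returns None when no
    AI-drafted artefacts shipped in the quarter.
    """
    if not ai_drafted:
        return None
    edited = sum(1 for a in ai_drafted if a["approved_text"] != a["draft_text"])
    return edited / len(ai_drafted)
```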
Median time to human review (reported as measured)
Median elapsed time from AI-draft creation to human approval across shipped artefacts.
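A minimal sketch of the calculation, assuming we hold (draft created, approved) timestamp pairs for every shipped artefact in the quarter; reporting in hours is an assumption for illustration.

```python
from datetime import datetime
from statistics import median

def median_time_to_review(events: list[tuple[datetime, datetime]]) -> float:
    """Median hours from AI-draft creation to human approval.

    `events` holds hypothetical (draft_created_at, approved_at) pairs for
    the quarter's shipped artefacts.
    """
    hours = [(approved - created).total_seconds() / 3600
             for created, approved in events]
    return median(hours)
```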
Cross-customer similarity score (reported as measured)
Platform-wide content-similarity score for approved policies, summarised across customers. Published as a range, never per-customer.
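One way such a score could be built, sketched here with simple token-overlap (Jaccard) similarity and summarised as an aggregate range; the similarity measure and function names are assumptions, not the platform's actual method.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two approved policy texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def similarity_range(policies_by_customer: dict[str, str]) -> tuple[float, float]:
    """Pairwise similarity across customers, summarised as a (min, max) range.

    Only the platform-wide range is returned; no per-customer score leaves
    this function.
    """
    scores = [jaccard(x, y)
              for x, y in combinations(policies_by_customer.values(), 2)]
    return (min(scores), max(scores)) if scores else (0.0, 0.0)
```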
Expired evidence flagged (reported as measured)
Number of evidence items that passed their expiry date and were surfaced for re-review by the platform.
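The underlying check is a date comparison; a minimal sketch, assuming each evidence item carries a hypothetical `expires_on` date.

```python
from datetime import date

def expired_evidence_count(evidence: list[dict], today: date) -> int:
    """Number of evidence items past expiry, to be surfaced for re-review.

    Assumes a hypothetical "expires_on" date field on each item.
    """
    return sum(1 for item in evidence if item["expires_on"] < today)
```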
Observation-window violations caught (reported as measured)
Number of control-attestation windows the platform flagged for evidence gaps before they reached an auditor.
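How the platform detects a gap is not specified here; as one hedged sketch, a window could be flagged when the evidence dates inside it leave a stretch longer than an assumed cadence uncovered. The published number would then be the count of flagged windows.

```python
from datetime import date, timedelta

def has_evidence_gap(window_start: date, window_end: date,
                     evidence_dates: list[date],
                     max_gap: timedelta = timedelta(days=90)) -> bool:
    """Flag a control-attestation window whose evidence leaves a gap longer
    than `max_gap` (an assumed quarterly cadence, for illustration only)."""
    inside = sorted(d for d in evidence_dates if window_start <= d <= window_end)
    checkpoints = [window_start, *inside, window_end]
    return any(later - earlier > max_gap
               for earlier, later in zip(checkpoints, checkpoints[1:]))
```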
AI models in production (disclosed)
List of AI models in customer-serving production paths during the quarter, with versions.
Security + privacy incidents (disclosed)
Count and summary of any security or privacy incidents affecting customer data during the quarter.
What the report isn't
The rules we set for ourselves.
- We will publish the report even when the numbers are unflattering.
- We will keep the same metric definitions across reports. If we need to change one, we explain why and publish the old definition alongside.
- We will not publish per-customer numbers. Only platform-wide aggregates.
- We will disclose every customer-serving AI model in production, by name and version.
- We will disclose every security and privacy incident. Silence is not a metric we value.