Last updated: 24 February 2026
This document provides a general overview and does not constitute legal advice.
1. Overview
Stableridge uses automation-augmented workflows as decision support within defined engineering and advisory operating models. Automation is used to improve delivery speed, consistency, and traceability, while accountability remains with authorised Stableridge personnel and governance owners.
We do not position AI outputs as final operational, legal, or compliance determinations. Human judgement and review gates remain primary for production, security, and client-facing deliverables.
2. Where AI Is Used
AI tooling may be used in controlled internal workflows, including:
- code acceleration and implementation assistance;
- documentation drafting and structured editorial support;
- diagram generation for architecture communication;
- test scaffolding and coverage suggestions; and
- content drafting for marketing, insights, and internal knowledge artefacts.
Outputs from these workflows are reviewed by responsible team members before deployment, publication, or issue to clients.
3. Where AI Is NOT Used
- No autonomous production deployment without human approval.
- No unsupervised access-control or entitlement decisions.
- No automated legal determinations.
- No automated compliance sign-off.
Critical decisions remain under controlled approval paths aligned to role-based authority.
4. Human Review Gates
Stableridge applies layered review controls before release or operational use, including:
- peer code review and change approval;
- security testing and verification checks;
- dependency scanning and vulnerability triage; and
- manual release approval before production deployment.
5. Data Handling in AI Workflows
We do not intentionally submit client-confidential data to public model endpoints without safeguards and approval. Where automation workflows are used, data minimisation and sanitisation controls are applied according to use-case risk.
Vendor use is managed through contractual and operational controls where applicable, including service terms, access boundaries, and governance expectations.
6. Risk Controls
- logging and traceability across change and review workflows;
- version control discipline for code and policy artefacts; and
- separation of development, staging, and production environments.
These controls are designed to support consistent governance, incident analysis, and audit readiness.
7. Client Responsibilities
Clients remain responsible for:
- reviewing deliverables against their governance obligations;
- maintaining policy and approval ownership within their organisation; and
- ensuring lawful use of systems and outputs in their operating context.
8. Limitations
AI-assisted outputs can contain inaccuracies, incomplete context, or unsuitable suggestions. Professional judgement, testing, and documented review remain mandatory.
Except where expressly set out in contract, no representation or warranty is given that automation-assisted artefacts are complete, error-free, or fit for all purposes.
9. Changes to This Disclosure
We may update this disclosure as our operating model, tooling, and governance controls evolve. Updates are published on this page with a revised “Last updated” date.
10. Contact
For questions about this disclosure, please contact us via /contact.