# 1. Maturity Levels
## 1. Which Approach Fits Your Team?
Not every organisation is equally far along in its AI journey. This model helps you determine which approach and which Collaboration Modes fit your current situation.
Maturity is not a fixed organisational label; it can differ per use case. The chosen approach and degree of oversight follow from the risk and impact of the application, not from the organisation's general maturity level.
## 2. The Explorer
Organisations in the Explorer phase have just started with AI. There is enthusiasm and a desire to experiment, but little structure.
### Characteristics
- Many pilots, little production: Multiple experiments are running, but few reach real users.
- Ad-hoc approach: Each team does things its own way; there are no shared standards.
- Low AI maturity: Limited knowledge of MLOps, governance and risk management.
- Opportunistic: Projects start based on enthusiasm, not strategy.
### Challenges
- No clear ROI: Difficult to demonstrate the value of AI.
- Lack of reusability: Every project starts from scratch.
- Risk of "AI theatre": Lots of talking, little doing.
### Recommended Collaboration Modes
- Mode 1 (Instrumental): Start with simple tools (ChatGPT, Copilot).
- Mode 2 (Advisory): Let AI make suggestions, human approves.
### Next Steps
- Choose 1-2 use cases with high impact and low complexity
- Perform Data Evaluation (Access, Quality, Relevance)
- Build a Validation Pilot within 30 days
- Document what you learn in a simple Blueprint
### Growth Guide for the Explorer
#### Entry criteria
Score yourself: four or more "yes" answers put you in this profile.
- Fewer than 3 AI projects in production
- No formal AI governance process established
- AI decisions are made ad hoc without fixed criteria
- No designated AI PM or Guardian
- Most AI applications are SaaS tools without customisation (Copilot, ChatGPT)
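The checklist above can be turned into a quick self-assessment script. A minimal sketch in Python: the criterion texts and the threshold of four "yes" answers come from this section, while `matches_explorer_profile` and the shape of the `answers` dictionary are illustrative assumptions, not part of the Blueprint.

```python
# The five entry criteria for the Explorer profile, as listed above.
EXPLORER_CRITERIA = [
    "Fewer than 3 AI projects in production",
    "No formal AI governance process established",
    "AI decisions are made ad hoc without fixed criteria",
    "No designated AI PM or Guardian",
    "Most AI applications are SaaS tools without customisation",
]

def matches_explorer_profile(answers: dict[str, bool], threshold: int = 4) -> bool:
    """Return True when at least `threshold` criteria are answered 'yes'."""
    yes_count = sum(1 for criterion in EXPLORER_CRITERIA if answers.get(criterion, False))
    return yes_count >= threshold
```

The same pattern works for the Builder and Visionary entry criteria; only the list of statements changes.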
#### Exit criteria (ready for Builder level)
- At least 2 use cases fully documented (Goal card + Validation report)
- Designated AI PM (even part-time)
- Guardian or compliance officer appointed
- Hard Boundaries established for all active systems
- At least 1 Gate Review completed in accordance with the Blueprint
#### Top-5 Actions for the Explorer
- Start with the Explorer Kit — 30-day plan with minimal overhead
- Appoint an AI PM — even part-time; creates ownership
- Document 1 existing use case using the Goal card
- Conduct a Risk Pre-Scan for every active AI system
- Establish Hard Boundaries for your most-used AI tool
#### Metrics
| KPI | Target |
|---|---|
| % use cases with Goal card | > 50% of active use cases |
| Number of Gate Reviews | ≥ 1 |
| Incidents without documented response | 0 |
| % employees with basic AI training | > 25% |
## 3. The Builder
Organisations in the Builder phase have proven that AI works, but struggle with the transition to stable production.
### Characteristics
- Successful pilots: There are use cases that deliver value.
- The Production Gap: Difficult to move from experiment to production.
- Inconsistent quality: Some days it works perfectly, on other days it does not.
- Unclear ownership: Who is responsible when something goes wrong?
### Challenges
- Technical debt: Quick prototypes become production systems without refactoring.
- Lack of monitoring: No insight into Performance Degradation (drift).
- Scalability: What works for 10 users does not work for 1000.
- Governance vacuum: Unclear who decides on ethics and risks.
### Recommended Collaboration Modes
- Mode 3 (Collaborative): Human and AI work together as partners.
- Preparation for Mode 4 (Delegated): Start with automated monitoring.
### Next Steps
- Implement Specification-First Method (test-driven development)
- Set up Performance Degradation monitoring
- Formalise governance: define Hard Boundaries
- Invest in MLOps training for the team
- Document System Prompts (prompts, context) in version control
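Drift monitoring, as called for in the steps above, often starts with a simple comparison between a baseline distribution and live data. A minimal sketch using the Population Stability Index (PSI), a widely used drift metric; the function name, binning strategy, and the ~0.2 alert threshold are common conventions, not prescriptions from the Blueprint.

```python
import math

def population_stability_index(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """PSI between a baseline distribution (expected) and live data (actual).
    Values above roughly 0.2 are commonly treated as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bucket_shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # a small floor avoids log(0) for empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice the baseline would be a model's validation-time score distribution and the actual values a recent production window, with the PSI value logged on a schedule and alerted on.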
### Growth Guide for the Builder
#### Entry criteria
- 3–10 AI projects in production
- AI PM and at least one Guardian appointed
- Gate Reviews are conducted but not always consistently
- Validation reports exist for most systems
- Basic drift monitoring process in place
#### Exit criteria (ready for Visionary level)
- All active AI systems have a complete documentation set (Charter, Goal card, Hard Boundaries, Validation report)
- Gate Reviews mandatory and always completed before go-live
- Formal incident response process tested
- Collaboration Mode recorded for each system
- AI governance committee or equivalent decision-making body active
#### Top-5 Actions for the Builder
- Standardise the 90-day roadmap as a mandatory starting point for every project
- Implement continuous drift monitoring for all Mode 3+ systems
- Train all AI PMs and Tech Leads in the Blueprint methodology
- Conduct a portfolio review — stop zombie projects
- Establish an AI governance committee with Sponsor, Guardian and AI PM
#### Metrics
| KPI | Target |
|---|---|
| % use cases with complete documentation set | > 80% |
| Average time Gate 1 → production | \< 13 weeks (Limited Risk) |
| Drift incidents without prior warning | \< 10% |
| % Mode 4+ systems with active monitoring | 100% |
## 4. The Visionary
Organisations in the Visionary phase have fully integrated AI into their strategy and operations. AI is business-as-usual.
### Characteristics
- AI at scale: Multiple production systems running stably.
- Strategic integration: AI is part of the long-term vision.
- Mature governance: Clear roles, responsibilities and policy.
- Continuous optimisation: Focus on efficiency, cost and impact.
### Challenges
- Complexity: Management of a fleet of AI systems.
- Ethical oversight at scale: How do you guarantee responsible AI with 100+ use cases?
- Cost control: Cloud and API costs can escalate quickly.
- Talent: Retaining specialised AI expertise.
### Recommended Collaboration Modes
- Mode 4 (Delegated): AI executes independently, human manages exceptions.
- Mode 5 (Autonomous): For specific processes where full autonomy is acceptable.
### Next Steps
- Implement automated compliance monitoring (EU AI Act)
- Establish AI Board or Ethics Committee
- Optimise costs: review cloud spending, model compression
- Develop reusable accelerators and templates
- Invest in energy efficiency (ESG goals)
- Build an AI Center of Excellence
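Before investing in a full compliance platform, several of these steps can be approximated with lightweight checks over a system inventory. A minimal sketch, assuming a hypothetical inventory whose field names (`risk_class`, `mode`, `monitoring_active`, `hard_boundaries`, `external_audit`) are illustrative rather than a prescribed schema; the rules mirror targets stated elsewhere in this module.

```python
def compliance_findings(systems: list[dict]) -> list[str]:
    """Flag inventory entries that violate basic governance rules:
    High Risk systems need an external audit, Mode 4+ systems need
    active monitoring, and every system needs documented Hard Boundaries."""
    findings = []
    for s in systems:
        if s.get("risk_class") == "High Risk" and not s.get("external_audit"):
            findings.append(f"{s['name']}: High Risk without external audit")
        if s.get("mode", 0) >= 4 and not s.get("monitoring_active"):
            findings.append(f"{s['name']}: Mode 4+ without active monitoring")
        if not s.get("hard_boundaries"):
            findings.append(f"{s['name']}: no Hard Boundaries documented")
    return findings
```

Run on a schedule and routed to the governance committee, a report like this turns compliance monitoring from an annual exercise into a continuous signal.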
### Growth Guide for the Visionary
#### Entry criteria
- More than 10 AI systems in production
- Full AI governance committee active
- AI PM recognised as a formal discipline
- Standardised documentation for all systems
- AI integrated into core strategy
#### Exit criteria (mature AI organisation)
- AI governance is a boardroom topic with formal mandate
- External audits conducted annually (compliance, fairness)
- Organisation actively contributes to sector standards or policy
- AI risk management integrated into enterprise risk management (ERM)
- External knowledge sharing (publications, conferences, open source)
#### Top-5 Actions for the Visionary
- Build an AI platform — shared infrastructure for monitoring and governance
- Integrate AI risks into ERM — AI incidents are a boardroom KPI
- Launch an internal AI centre of excellence with a permanent Guardian role
- Participate in sector standards (e.g. ISO/IEC 42001, NIST AI RMF)
- Publish lessons learned — strengthens reputation and ecosystem
#### Metrics
| KPI | Target |
|---|---|
| % High Risk systems with external audit | 100% |
| Average MTTR for AI incident | \< 4 hours |
| AI ROI reported to board | Quarterly |
| External knowledge-sharing contributions | ≥ 2 per year |
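The MTTR figure in the table above is simply the average time from detection to resolution. A minimal sketch, assuming incidents are recorded as (detected, resolved) timestamp pairs; the function name is illustrative.

```python
from datetime import datetime, timedelta

def mean_time_to_resolve(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """MTTR: the average of (resolved - detected) over closed incidents."""
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)
```

Comparing the result against the 4-hour target gives a direct pass/fail signal for the board report.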
## 5. Related Modules
- AI Collaboration Modes
- Quick Start: AI Project in 90 Days
- Accelerators
- Discovery & Strategy
- Development
- Management & Optimisation
- Compliance Hub
- Governance Model