The fastest-rising RFP requirement in the UAE financial services sector right now is not Open Banking compliance, not AML automation, not cloud migration — it's ISO/IEC 42001. The standard has gone from "interesting" in late 2024 to "expected by Q4 2026" in major bank procurement cycles, and CBUAE-supervised institutions are getting ahead of what increasingly looks like a sectoral overlay.
This post is the practical readiness checklist we use internally at Codenovai and the one we ship to financial services clients on our ISO 42001 Readiness offer. It's designed for the head of compliance, head of risk, or CTO who needs to scope the work before talking to a consultancy or assigning internal resources.
Why this matters more in financial services
The standard is sector-agnostic by design. The risk classification under it is not.
In financial services, the bar is higher because the actions agents take — approving credit, freezing accounts, flagging transactions, recommending products — are higher-consequence by definition. The EU AI Act framing (mirrored by Dubai's mandate guidance) treats most financial AI systems as "high-risk" by default, which triggers the heaviest evidentiary requirements: training data documentation, validation studies, fairness assessments, post-market monitoring at decision-level granularity.
CBUAE has been signalling for two cycles that AI-specific supervisory expectations are coming. The Sovereign Financial Cloud announcement (February 2026) made the data-residency dimension explicit. ISO 42001 is the cleanest framework that satisfies both internal governance needs and the sectoral overlays banks are now navigating.
The readiness checklist
Use this as a self-assessment before a kickoff call. If you can answer "yes, documented" to 60%+ of these, you're closer than you think. If you're under 30%, you're a green-field project and need to scope 12 weeks minimum.
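The thresholds above can be encoded as a trivial scoring helper — a sketch only, with the tier labels paraphrased from this post rather than taken from the standard:

```python
def readiness(answers: list[bool]) -> str:
    """Map 'yes, documented' answers to the readiness tiers described above.

    answers: one boolean per checklist item (True = 'yes, documented').
    """
    score = sum(answers) / len(answers)
    if score >= 0.6:
        return "closer than you think"
    if score < 0.3:
        return "green-field: scope 12+ weeks"
    return "partial: targeted gap closure"
```

Run it against your own checklist answers before the kickoff call; the point is to force a binary "documented or not" judgment per item rather than a vague sense of progress.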
1. AI Inventory (Annex A.6.2 territory)
- List every AI system in production — internal, customer-facing, embedded in vendor products
- Per system: purpose, owner, data classes processed, model used, deployment location
- Per system: business criticality and customer impact in case of failure
- Inventory is updated when new systems are deployed (not just point-in-time)
Most banks discover 4–10× more AI surface than they thought at this stage. The CRM has AI scoring. The fraud system has multiple models. The marketing platform has automated personalisation. Vendor-embedded AI is typically 60% of the inventory.
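A minimal sketch of what one inventory record could look like — the field names and example values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in the AI inventory. Field names are illustrative."""
    name: str
    purpose: str
    owner: str                  # an accountable individual, not a team alias
    data_classes: list[str]     # e.g. ["PII", "transaction"]
    model: str                  # e.g. a foundation model or in-house model name
    deployment_location: str    # region / sovereign-cloud zone
    vendor_embedded: bool       # True for AI shipped inside a SaaS product
    criticality: str            # business criticality if the system fails
    customer_impact: str        # what a failure means for the customer
    last_reviewed: date = field(default_factory=date.today)

# Hypothetical entry for a fraud model, for illustration only
inventory = [
    AISystemRecord(
        name="fraud-scoring-v3",
        purpose="Real-time transaction fraud scoring",
        owner="head-of-fraud-ops",
        data_classes=["PII", "transaction"],
        model="in-house gradient-boosted ensemble",
        deployment_location="uae-sovereign-zone",
        vendor_embedded=False,
        criticality="high",
        customer_impact="false positives freeze legitimate payments",
    ),
]
```

Whatever form the inventory takes (spreadsheet, GRC tool, repo), the `vendor_embedded` flag matters: it is the dimension most banks under-count, and it drives the vendor-management checklist in section 7.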
2. Risk Classification
- Risk methodology documented — what makes a system high vs limited vs minimal risk
- Each inventoried system classified
- High-risk systems have enhanced controls (validation, monitoring, human oversight)
- Classification is reviewed when system changes (model swap, scope expansion)
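The classification rule itself can be very simple as long as it is written down. A toy sketch, assuming the "high-risk by default" framing from the section above — the decision categories and tier names are illustrative, not regulatory definitions:

```python
def classify_risk(system: dict) -> str:
    """Toy classification rule: a system that takes or strongly shapes a
    consequential customer decision is high-risk by default."""
    consequential = {"credit-approval", "account-freeze", "transaction-flag"}
    if system["decision_types"] & consequential:
        return "high"
    if system["customer_facing"]:
        return "limited"
    return "minimal"

# A credit-decisioning system lands in the high tier regardless of channel
classify_risk({"decision_types": {"credit-approval"}, "customer_facing": False})
```

The value of codifying the rule is that re-classification after a model swap or scope expansion becomes a re-run, not a committee meeting.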
3. Training Data Governance
- Training data sources documented for any models you've trained or fine-tuned
- Data quality assessment performed pre-training
- Bias and fairness assessments documented for high-risk systems
- Data lineage traceable from production output back to training source
For banks using third-party foundation models (Claude, GPT, Gemini), this section applies to your prompt corpus and any RAG ground truth — not the foundation model itself, which is the vendor's responsibility.
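Lineage traceability is easiest to demonstrate to an auditor as a walkable record. A sketch under the assumption that each production decision carries identifiers back to its training artefacts — all names here are hypothetical:

```python
# Hypothetical lineage record linking one production decision to its sources
lineage = {
    "output_id": "decision-2026-02-14-001",
    "model_version": "credit-scorer-2.3",
    "training_dataset": "retail-credit-2025q3",
    "sources": ["core-banking-export-2025-09", "bureau-feed-2025-09"],
    "quality_assessment": "dq-report-2025-10-02",   # pre-training data-quality check
    "fairness_assessment": "fa-report-2025-10-05",  # required for high-risk systems
}

def trace(record: dict) -> list[str]:
    """Walk a production decision back to model, dataset, and raw sources."""
    return [record["model_version"], record["training_dataset"], *record["sources"]]
```

If `trace()` can't be answered for a high-risk system — whatever your actual tooling — that's the gap to log.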
4. Human Oversight
- For each high-risk system: who can override, how, with what authority
- Override actions are logged with reason
- Human reviewers are trained and named, not just defined as a role
- Escalation procedures exist for ambiguous cases the system can't decide
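The logging requirement in the second bullet is the one auditors test first. A minimal sketch of an override record — field names are illustrative, and a real system would write to an append-only store rather than stdout:

```python
import json
from datetime import datetime, timezone

def log_override(system: str, reviewer: str,
                 model_decision: str, human_decision: str, reason: str) -> dict:
    """Record a human override of an AI decision: who, what, and why."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "reviewer": reviewer,              # a named person, not a role
        "model_decision": model_decision,
        "human_decision": human_decision,
        "reason": reason,                  # mandatory free-text justification
    }
    # Stand-in for an append-only audit store
    print(json.dumps(entry))
    return entry
```

Making `reason` mandatory at the schema level is the cheap way to guarantee the "logged with reason" control holds under time pressure.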
5. Monitoring and Eval
- Eval harness for each high-risk system with golden cases and adversarial cases
- Performance metrics tracked over time (not just at deploy)
- Drift detection — alerts when behaviour shifts beyond tolerance
- Cost telemetry per system (this isn't strictly ISO 42001 but auditors love it)
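Drift detection doesn't have to start sophisticated. A sketch of the simplest useful version — a rolling-mean tolerance check against a baseline metric, with the 5% tolerance as an illustrative default, not a recommended threshold:

```python
def drift_alert(baseline: float, recent: list[float], tolerance: float = 0.05) -> bool:
    """Alert when the rolling mean of a metric moves beyond a relative
    tolerance of its baseline value."""
    rolling = sum(recent) / len(recent)
    return abs(rolling - baseline) > tolerance * baseline

# Accuracy baseline 0.92; recent window has slid to ~0.80 → alert fires
drift_alert(0.92, [0.80, 0.79, 0.81])
```

The point of the sketch: "tolerance" must be a number someone signed off on, per system, so that "behaviour shifts beyond tolerance" is testable rather than aspirational.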
6. Incident Response
- AI-specific incident response procedure (different from generic IT incident response)
- Categories: model failure, drift, fairness incident, security breach affecting AI
- Escalation paths defined including notifying regulators where required
- Post-incident review template includes lessons-learned for the AIMS
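The incident categories above map naturally to an escalation matrix. A sketch only — the owners and regulator-notification flags are illustrative placeholders, not CBUAE guidance:

```python
from enum import Enum

class AIIncident(Enum):
    MODEL_FAILURE = "model_failure"
    DRIFT = "drift"
    FAIRNESS = "fairness_incident"
    SECURITY = "security_breach_affecting_ai"

# Illustrative escalation matrix; real values come from your own procedures
ESCALATION = {
    AIIncident.MODEL_FAILURE: {"owner": "platform-oncall", "notify_regulator": False},
    AIIncident.DRIFT:         {"owner": "model-risk",      "notify_regulator": False},
    AIIncident.FAIRNESS:      {"owner": "compliance",      "notify_regulator": True},
    AIIncident.SECURITY:      {"owner": "ciso",            "notify_regulator": True},
}
```

Having the matrix as data rather than prose means the incident procedure and the on-call tooling can share a single source of truth.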
7. Vendor Management
- AI-aware vendor due-diligence checklist (different from standard procurement)
- Contracts include AI-specific clauses (data use, model changes, audit rights)
- Periodic re-assessment cadence for embedded vendor AI
- Exit plans documented for AI-dependent systems
8. CBUAE Sovereign Cloud Alignment
- Workloads requiring sovereign deployment identified and mapped
- Data flows documented for any cross-border AI inference
- Documentation aligns ISO 42001 controls with CBUAE expectations
- Sovereign deployment path established for in-scope workloads
This last section is what makes UAE financial services materially different from EU or US banks pursuing ISO 42001. The CBUAE overlay is real and not optional.
The realistic timeline
For a mid-sized UAE bank or fintech (~50–500 employees, AI footprint ~5–15 systems):
| Phase | Duration | Output |
|---|---|---|
| Discovery & gap analysis | 2 weeks | Inventory, risk classification, gap report |
| Policy authoring | 4–6 weeks | AIMS policy framework, all required procedures |
| Controls implementation | 2–4 weeks | Eval harness, monitoring, audit trails for top-priority systems |
| Internal audit dry-run | 2 weeks | Findings register, remediation tracked |
| External audit | 4–8 weeks | Auditor-led, with our team supporting |
| Certificate issued | — | Total: ~6–9 months from kickoff to certificate |
The bottleneck is rarely the policy work. It's the engineering changes — making eval-gated deploys real, instrumenting monitoring, backfilling audit trails for systems built without them.
Where most banks under-scope
Three places we consistently see underestimation:
- Vendor-embedded AI inventory. The CRM, the marketing automation, the call analytics, the fraud platform — every major SaaS in your stack has AI features now. Inventorying these takes longer than inventorying your in-house systems.
- Eval harness retrofitting. AI systems built before evals were a discipline (i.e., most things shipped before late 2024) need eval coverage built around them post-hoc. This is engineering work, not policy work, and it can run 4–8 weeks for an established system.
- Auditor selection and prep. ISO 42001 is new enough that auditor capacity in the UAE is constrained. Engaging an audit body 60+ days before the planned audit window is now standard practice.
What to do this quarter
If you've read this far, you're in one of three states.
- State 1: Active interest, no scoping yet. Run the self-assessment. Identify the gap. Get a fixed-scope proposal — ours or any consultancy's — to size the work properly.
- State 2: Mid-engagement. Use this checklist as a sanity check on coverage. The gaps tend to be in vendor-embedded inventory and eval harness depth.
- State 3: Already certified. The next cycle (annual surveillance audit + scope expansion as you deploy new agents) is where most organisations slip. Maintain the rhythm; don't let it become a one-off project.
Where Codenovai fits
We deliver this work for UAE financial services clients on a fixed-price 8–12 week engagement starting from AED 110,000. The policy templates, AI inventory schema, and monitoring framework are structured to match what auditors actually look for — adaptable to your existing ISMS and sectoral overlays.
Book a scoping call or read the full ISO 42001 Readiness offer.
