RFP · 7 min read · May 8, 2026

RFP Template: Sovereign AI + RAG for GCC Procurement

A working RFP template for UAE and GCC organisations procuring sovereign AI + RAG capabilities. Section-by-section requirements covering compliance, architecture, deployment, evaluation, and ongoing operations.

Technova Team

Expert Insights


If you're at a UAE or GCC organisation procuring sovereign AI capabilities — bank, insurer, family office, government-adjacent service, regulated healthcare, large enterprise — you're likely working from a procurement template that wasn't built for this category. Generic IT services RFPs miss data residency specifics. Generic AI RFPs miss compliance overlay. Generic regulated-procurement RFPs miss the model-and-architecture specifics that decide implementation success.

This post is a working RFP template, structured by section, that we'd recommend adapting for sovereign AI + RAG procurement. It draws from the Sovereign tier deployments we ship as part of our Sovereign AI + RAG offer and from procurement engagements where we've responded to RFPs that did and didn't get this right.

Use it directly. Edit it. Combine it with your existing procurement template. The goal is to make the procurement process find the right vendor, not to make our particular proposal win.

Section 1: Background and Objectives

State plainly:

  • The organisation, its sector, and its regulatory context (CBUAE-supervised, DHA-licensed, DIFC entity, ADGM, mainland, freezone)
  • The use case the AI system supports — be specific (customer service, document Q&A, analyst augmentation, fraud detection, etc.)
  • Why sovereignty matters for this workload — regulatory mandate, contractual obligation, internal policy, or all three
  • The data classes the system processes (transaction data, customer financial data, medical records, legal documents, etc.) with sensitivity levels
  • Expected user count and concurrency
  • Latency tolerance for responses

This section anchors all subsequent vendor decisions. Vague objectives produce vague proposals.

Section 2: Compliance and Regulatory Requirements

List the specific regulatory frameworks the engagement must satisfy:

  • UAE PDPL (Federal Decree-Law No. 45 of 2021)
  • CBUAE Sovereign Financial Cloud (if banking)
  • DHA AI Guidelines (if healthcare)
  • DIFC Data Protection Law / ADGM Data Protection Regulations (if applicable)
  • KSA PDPL or NDMO requirements (if KSA touchpoint)
  • ISO/IEC 42001:2023 alignment
  • EU AI Act (if EU customer or operating exposure)
  • Sector-specific overlays (CBUAE, SAMA, etc.)

Per framework, ask vendors to:

  • Describe their experience with the framework
  • Provide named reference engagements where they navigated it
  • Outline how their proposed architecture satisfies the relevant controls

Section 3: Architecture Requirements

Specify the constraints, not the implementation. Strong constraint-based requirements:

  • Sovereignty boundary: all in-scope data must remain within UAE jurisdiction throughout processing — including embedding, retrieval, inference, and telemetry layers
  • Model licensing: open-weight models acceptable; proprietary cloud-hosted models acceptable only for explicitly out-of-scope workloads
  • Inference deployment: on-premise OR sovereign-cloud-region (e.g., AWS me-central-1 in the UAE) acceptable; specify the constraint, let the vendor recommend
  • Language requirements: Arabic (MSA + specific Gulf dialects relevant to your customer base) and English; specify any other languages
  • Performance benchmarks: latency target (e.g., median under 2s, p95 under 4s); accuracy targets against representative golden set
  • Integration points: which existing systems must integrate (CRM, document store, IAM, audit logging)

Weak (avoid) requirements: "must use Llama 3.3 70B" (over-specifies), "AI must be fast" (under-specifies).
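The latency benchmark above is verifiable at acceptance. As a minimal sketch (the function name and millisecond thresholds are illustrative, not part of the template), a procurement team could check a vendor's benchmark samples against the stated targets like this:

```python
import statistics

def check_latency_targets(samples_ms, median_target_ms=2000, p95_target_ms=4000):
    """Check benchmark latency samples against RFP targets.

    samples_ms: list of response latencies in milliseconds.
    Defaults mirror the Section 3 example (median under 2s, p95 under 4s).
    """
    samples = sorted(samples_ms)
    median = statistics.median(samples)
    # Nearest-rank p95: the value at the 95th-percentile position.
    p95 = samples[max(0, int(0.95 * len(samples)) - 1)]
    return {
        "median_ms": median,
        "p95_ms": p95,
        "pass": median < median_target_ms and p95 < p95_target_ms,
    }
```

Running this against the vendor's demonstration logs, rather than trusting a quoted average, is what makes the requirement a constraint rather than a slogan.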

Section 4: Security and Operational Requirements

Specify:

  • Authentication: SSO / OIDC integration to your existing IAM; service identity boundaries
  • Authorisation: per-user permission inheritance; tool-level permission policies (especially for agentic workloads)
  • Audit trail: every consequential action logged immutably with retention period (per regulatory requirement)
  • Encryption: at-rest and in-transit, with key management satisfying your KMS standards
  • Backup and recovery: RTO/RPO requirements
  • Incident response: AI-specific incident response procedures expected; ask vendors to provide their template
  • Penetration testing: acceptance gated on third-party security review; specify your preferred testers if applicable
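"Logged immutably" is testable, not just a phrase. One common way to make an audit trail tamper-evident is hash chaining, where each entry embeds the hash of the previous one; the sketch below (class and field names are our own, assuming SHA-256 and JSON serialisation) shows the idea a vendor's audit design should be able to demonstrate:

```python
import hashlib
import json
import time

class HashChainedAuditLog:
    """Tamper-evident append-only log: each entry embeds the hash of the
    previous entry, so any retroactive edit breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value before any entries

    def append(self, actor, action, detail):
        entry = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        # Hash the entry body deterministically, then attach the hash.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Recompute every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In production the chain head would be anchored to write-once storage; the point for the RFP is that vendors should be able to explain an equivalent mechanism, not just promise retention.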

Section 5: Evaluation and Monitoring

Specify what evidence the vendor must produce:

  • Eval harness: golden set of representative cases, adversarial set of edge cases, scoring rubric
  • Pre-deploy evaluation: required pass rate against golden set before production cutover
  • Continuous monitoring: drift detection, score-distribution alerting, performance metrics
  • Reporting cadence: monthly operations review, quarterly business review, annual audit-readiness review

Ask vendors to describe how they would build the eval harness for your specific workload, not just describe their generic methodology.
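The pre-deploy gate above reduces to a simple computation once the golden set and rubric exist. A minimal sketch (function name, score scale, and the 0.7 per-case cutoff are illustrative assumptions, not prescribed values):

```python
def eval_gate(results, required_pass_rate=0.90):
    """Gate a production cutover on golden-set eval results.

    results: list of dicts like {"case_id": ..., "score": float in [0, 1]},
    where score comes from the rubric (exact match, LLM-judge, etc.).
    A case passes when its score meets the per-case threshold.
    """
    per_case_threshold = 0.7  # illustrative rubric cutoff
    passed = sum(1 for r in results if r["score"] >= per_case_threshold)
    rate = passed / len(results) if results else 0.0
    return {"pass_rate": rate, "gate_open": rate >= required_pass_rate}
```

What the RFP should require is that the vendor commits to a specific `required_pass_rate` for your workload and shows the golden set that produces it, so cutover is a measured decision rather than a judgment call.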

Section 6: Vendor Qualifications

Specific to GCC sovereign AI:

  • Local entity: Dubai FZCO, mainland UAE entity, DIFC, ADGM, or equivalent regional registration
  • Reference engagements: at least 2 production deployments of sovereign AI or Arabic-LLM workloads in regulated industries
  • Hardware capability: demonstrated experience deploying NVIDIA H100/A100 inference clusters or equivalent
  • Compliance pedigree: ISO 27001 certified or actively pursuing ISO 42001 readiness
  • Partner credentials: AWS AI Competency, NVIDIA Inception, Anthropic Partner Network — useful signals where applicable

For nascent areas (Arabic-LLM specifically), accept adjacent evidence — vendors who have shipped open-weight LLM workloads in production but not specifically Arabic-LLM may still be strong fits if the adjacent capability is solid.

Section 7: Commercial Structure

Outline:

  • Engagement model (fixed-price project, retainer, hybrid)
  • Payment milestones tied to deliverables
  • Hardware procurement responsibility (vendor purchases or client purchases)
  • Ongoing operations (vendor-operated, client-operated, hybrid)
  • Exit terms — particularly important for sovereign infrastructure where switching cost is high
  • IP ownership — all custom code, prompts, and configurations should belong to the client; the vendor's IP is the methodology

Section 8: Evaluation Criteria

We'd recommend the following weights for most GCC sovereign AI RFPs:

Criterion                                    Weight
Technical capability                         35%
Compliance and security posture              25%
Regional and language fit                    15%
Commercial terms                             15%
References and operational track record      10%

Adjust to your priorities. Two warnings: don't over-weight commercial terms (incentivises underbidding that surfaces as scope creep); don't under-weight references (under-tests for operational risk).
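The weights above combine mechanically into a single comparable score per vendor. As a sketch (criterion keys and the 0–10 scale are our own convention), scoring a shortlist looks like this:

```python
# Weights from Section 8; adjust to your priorities, keeping the total at 100%.
WEIGHTS = {
    "technical": 0.35,
    "compliance_security": 0.25,
    "regional_language_fit": 0.15,
    "commercial": 0.15,
    "references": 0.10,
}

def weighted_score(scores, weights=WEIGHTS):
    """Combine per-criterion scores (0-10 scale) into one weighted total."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[criterion] * w for criterion, w in weights.items())
```

Publishing the formula in the RFP itself helps vendors calibrate their responses and makes the final selection defensible to internal audit.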

Section 9: Submission Requirements

Specify what vendors must submit:

  • Technical proposal addressing each architecture and security requirement
  • Compliance mapping showing how the proposed architecture satisfies each regulatory framework
  • Reference architecture diagram with explicit data flows
  • Eval methodology description with sample eval set
  • Named team with relevant credentials (cert holders, partner badges, prior project lead history)
  • 2–3 customer references with contact details (you'll call them)
  • Commercial proposal with milestone-tied payment structure
  • Sample SoW for the engagement structure

Specify page limits — sovereign AI RFPs often produce 80-page responses where 25 would be substantive. Force the vendor to be specific.

Section 10: Timeline

Set realistic dates:

  • RFP issue date
  • Q&A submission deadline (1 week after issue)
  • Q&A responses published (3 days after deadline)
  • Vendor response deadline (3–4 weeks after Q&A)
  • Shortlist notification (1 week after responses)
  • Shortlist demonstrations (2 weeks)
  • Reference checks (1 week)
  • Selection and contracting (2–3 weeks)

Total: 10–12 weeks from issue to contract. Compress at your peril.

What to do with this template

Adapt and use it. The structure above is what we'd want to receive from a sophisticated GCC procurement team — clear constraints, specific evidence requirements, sensible weighting, realistic timeline. RFPs structured this way produce proposals you can compare meaningfully.

If you're issuing an RFP and want a sounding board on structure or evaluation criteria — or if you want a vendor that responds well to RFPs structured this way — book a scoping call.

Where Codenovai fits

We respond to GCC sovereign AI RFPs as part of our standard sales motion. Our Sovereign AI + RAG offer maps directly to the architecture requirements in Section 3. Our ISO 42001 Readiness offer addresses Section 2. Our internal team includes engineers with the regional credentials Section 6 expects.

If you're shortlisting vendors for sovereign AI procurement, we welcome the chance to respond.

Frequently asked questions

Why doesn't a generic Western RFP template work for GCC sovereign AI?

Three reasons. (1) Data residency requirements are more specific — UAE PDPL, CBUAE Sovereign Cloud, KSA NDMO, DIFC and ADGM regulatory overlays each have specific evidentiary expectations. (2) Language requirements include Arabic with dialectal coverage, which Western templates don't address. (3) Vendor evaluation criteria should weight regional licensing (FZCO, mainland UAE entity, KSA presence) and on-premise deployment capability higher than typical Western RFPs. A generic Western template misses these.

How long does a sovereign AI RFP process realistically take?

8 to 14 weeks is the realistic range. Issue and Q&A: 2 weeks. Vendor responses: 3–4 weeks. Shortlist evaluation including technical demonstrations: 2–3 weeks. Reference checks and contracting: 2–3 weeks. Faster timelines (4–6 weeks total) are possible for procurement teams who have done sovereign AI RFPs before; slower (16+ weeks) is common for first-time procurement teams that need internal alignment cycles.

Should the RFP specify which model the vendor must use?

Let vendors propose, with constraints. Specify the constraints — open-weight requirement, sovereignty boundary, Arabic language coverage, latency budget, performance benchmarks — and let vendors propose the model that meets those constraints. Specifying the model creates problems: you might not pick the right one, the model landscape changes during procurement, and you reduce vendor differentiation. The constraint-based approach gets you a stronger field of responses.

How do we evaluate vendors when few have a pure sovereign AI track record?

Ask for adjacent evidence. Have they shipped Arabic-LLM workloads? Have they deployed on-premise inference at production scale? Have they operated AI workloads in regulated industries? Have they handled UAE PDPL or CBUAE compliance on other engagements? Sovereign AI is new enough that pure-pedigree vendors are rare; the right evaluation looks at component capabilities. A vendor who has done 80% of the components on adjacent work is often a better choice than one with thin pure-sovereign pedigree.

What evaluation weights do you recommend?

For most GCC sovereign AI RFPs we'd recommend: technical capability 35%, compliance and security posture 25%, regional and language fit 15%, commercial terms 15%, references and operational track record 10%. Weighting commercial terms above 20% tends to incentivise underbidding that surfaces as scope creep mid-engagement. Weighting references below 10% tends to under-test for the operational risk that bites in year two.
