
Solicitation tips for AI

Who: Procurement, project, and technology teams
What: Best practices for solicitations for AI that go beyond off-the-shelf purchases

These tips can help if you are seeking to customize or build an AI solution.

  • As you develop your approach…
  • As you develop your solicitation documents and requirements…
  • As you draft your RFI/RFP documents, include…

In line with good procurement practices, keep your solicitation open long enough for vendors to prepare thoughtful responses, and invest time and budget in publicizing the opportunity to the vendors you want to attract.

Resources for drafting solicitations

Solicitation tips: Key questions

Procurement and organizational objectives

  • Decide on your procurement procedure
  • Draft your solicitation

The key questions below are grouped by team.

Procurement

Project needs

  • How might we reflect the project team’s needs in an RFP?
  • Can the vendor demonstrate a roadmap and flexibility that aligns with our evolving needs?
  • What level of vendor support or service guarantees will we need if something breaks or goes wrong?
  • Will the vendor support integration with other AI tools or models we may adopt in the future?

Operational considerations

  • What kind of selection criteria should be used to rank solicitation responses?
  • What commercial or pricing models are appropriate?
  • Are the right people in the room for this procurement?

Risk and compliance

  • How will we ensure compliance with current and evolving AI-related legislation, procurement rules, and standards?
  • How will we manage and track AI-specific costs (e.g., compute, storage, API usage)?
  • How much risk does vendor lock-in pose in our context, and what mitigations are practical?

Agency or department buyer

AI solution

  • Do we know what “good enough” model response performance looks like for our specific use case?
  • What metrics will we use to measure AI success and ROI (qualitative and quantitative)?
  • What are our fallback strategies if the model gives low-quality or inaccurate responses? (One fallback pattern is sketched after this list.)
  • How will users interact with the AI solution? If there is a platform or dashboard, who is responsible for developing it?
  • Can we retrieve or export our data and models if we switch vendors?
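
On fallback strategies (flagged above): one common pattern is to gate model answers on a quality signal and escalate to a backup model or a human queue when the signal is weak. The sketch below is a minimal illustration, not a vendor API: the `ModelResponse` shape, the confidence score, and `route_to_human_review` are all assumptions, since real vendor APIs expose quality signals differently, if at all.

```python
# A minimal sketch of a confidence-gated fallback, assuming a hypothetical
# vendor API that reports a confidence score; all names are illustrative.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ModelResponse:
    text: str
    confidence: float  # 0.0-1.0, hypothetical vendor-reported signal

CONFIDENCE_FLOOR = 0.7  # "good enough" is use-case specific; tune deliberately

def route_to_human_review(question: str) -> str:
    # Placeholder: in practice, create a ticket or queue item instead.
    return f"Escalated for human review: {question!r}"

def answer_with_fallback(
    question: str,
    primary: Callable[[str], ModelResponse],
    secondary: Optional[Callable[[str], ModelResponse]] = None,
) -> str:
    """Serve the primary model's answer only when confidence clears the
    floor; otherwise try a backup model, then escalate to a human."""
    response = primary(question)
    if response.confidence >= CONFIDENCE_FLOOR:
        return response.text
    if secondary is not None:
        backup = secondary(question)
        if backup.confidence >= CONFIDENCE_FLOOR:
            return backup.text
    return route_to_human_review(question)

# Example with stub models:
weak = lambda q: ModelResponse("Shaky answer.", 0.4)
strong = lambda q: ModelResponse("Confident answer.", 0.9)
print(answer_with_fallback("What permits do I need?", weak, strong))
```

A useful test in a solicitation is whether the vendor exposes any usable quality signal at all; if not, fallback logic has to rely on independent checks such as a second model or sampling for human review.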

Data strategy and management

  • Do we and our vendor partners understand the quality, completeness, and relevance of our data?
  • What role will the vendor play in helping us prepare or transform data for effective use?
  • How will we manage data ownership, licensing, and usage rights?
  • What safeguards will the vendor provide to ensure that sensitive, personal, or classified data is handled in line with legal and policy requirements?
  • Will the AI system retain or learn from our data inputs? Under what terms, and with what controls?
  • Is the algorithm specialized for our organization? If so, will we have input into how the algorithm is fine-tuned or retrained using our data? What additional data sources do we need to get the best results?

Operational considerations

  • How might we approach AI implementation in stages, such as through a pilot or sandbox?
  • How will we measure and monitor model performance over time?
  • What policies will govern human-in-the-loop oversight for critical decision-making?
  • What mechanisms will we put in place for continuous performance monitoring and system health checks?
  • What incident response mechanisms will we put in place? What processes will we follow to learn from incidents and improve prevention and the management of associated risks and harms?
  • If we are involved in retraining or fine-tuning the model, what internal processes will we establish for prompt engineering and optimization?
  • Who owns operational responsibility for uptime, error handling, and incident response?
  • What logging and audit mechanisms will be used for AI inputs and outputs? (A minimal logging sketch follows this list.)
  • How will we ensure traceability and version control for prompts and model configurations?
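
To make the logging and traceability questions concrete: one minimal pattern is an append-only, structured log that records a versioned prompt identifier with every input/output pair, so any answer can be traced to the exact prompt template and model that produced it. The JSON-lines file, field names, and `PROMPT_VERSION` tag below are illustrative assumptions; a production system would use a managed log store with retention controls.

```python
# A minimal sketch of structured audit logging for AI calls; the file-based
# JSON-lines store and the field names are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

PROMPT_VERSION = "intake-summary/v3"  # version every deployed prompt template
AUDIT_LOG = "ai_audit.jsonl"

def log_ai_call(user_input: str, model_output: str, model_id: str) -> None:
    """Append one audit record per model call. The input hash supports
    deduplication and traceability even if the raw text is later redacted."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_version": PROMPT_VERSION,
        "model_id": model_id,
        "input_sha256": hashlib.sha256(user_input.encode("utf-8")).hexdigest(),
        "input": user_input,   # store, redact, or omit per data policy
        "output": model_output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_call("Summarize permit application 123.",
            "The applicant requests a zoning variance.",
            "vendor-model-x")
```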

Risk and compliance

  • What processes will we establish for bias detection, mitigation, and fairness audits? (One simple audit check is sketched after this list.)
  • What controls will be in place for data privacy, consent, and security (at rest and in transit)?
  • How will we meet model explainability requirements, especially for high-impact or regulated use cases?
  • How will we manage IP and copyright issues relating to AI-generated content?
  • How will we assess and audit third-party AI providers for security, ethical alignment, and accountability?
  • What safeguards will we implement against hallucination, misuse, or reputational risk in public-facing services?
  • How will we test for and strengthen adversarial robustness?
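
As a concrete starting point for the fairness-audit question: one common first check compares selection rates across groups. The data, the 0.8 threshold (an "80% rule" heuristic), and the assumption that decisions can be joined to a group attribute for audit purposes are all illustrative; real audits need policy and legal input.

```python
# A minimal sketch of a selection-rate disparity check; the data and the
# threshold are illustrative only.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> per-group approval rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparity_flags(rates, min_ratio=0.8):
    """Flag any group whose rate falls below `min_ratio` times the highest rate."""
    top = max(rates.values())
    return {g: (r / top) < min_ratio for g, r in rates.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
print(rates)                   # approx {'A': 0.67, 'B': 0.33}
print(disparity_flags(rates))  # {'A': False, 'B': True} -> group B flagged
```
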
IT/Data analytics

Project needs

  • What are our requirements for model transparency and explainability?
  • How will we measure and monitor model performance over time?
  • What are our requirements for model updates and maintenance?
  • What level of customization do we require?
  • What are the latency, reliability, and availability requirements for our applications?
  • Are there specific regulatory or cybersecurity requirements, such as data residency or geographic deployment constraints?

Integration

  • How will the AI solution integrate with our existing systems?
  • Does the vendor support open standards and interoperability (e.g., prompt formats, model interchange)?
  • How do open-source and proprietary models compare in terms of cost, performance, and vendor support and accountability?
  • Are there clear exit strategies or migration paths if we outgrow a particular vendor or need to switch platforms? (A small interface sketch follows this list.)
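
One practical mitigation behind the exit-strategy question: keep all vendor SDK calls behind a thin interface, so that switching providers means writing one new adapter rather than rewriting application code. The vendor adapters below are stubs for illustration, not real SDK calls.

```python
# A minimal sketch of a provider-agnostic interface as a lock-in mitigation;
# the vendor adapters are stubs, not real SDKs.

from abc import ABC, abstractmethod

class TextModel(ABC):
    """The only surface the application is allowed to depend on."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAAdapter(TextModel):
    def complete(self, prompt: str) -> str:
        # A real version would call vendor A's SDK here.
        return f"[vendor A reply to: {prompt}]"

class VendorBAdapter(TextModel):
    def complete(self, prompt: str) -> str:
        # A real version would call vendor B's API here.
        return f"[vendor B reply to: {prompt}]"

def summarize(doc: str, model: TextModel) -> str:
    # Application logic never imports a vendor SDK directly.
    return model.complete(f"Summarize in two sentences:\n{doc}")

print(summarize("The council approved the annual budget.", VendorAAdapter()))
```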

Data strategy and management

  • How will we jointly manage data governance, including storage, audit trails, and compliance with data retention and deletion policies?
  • Are there opportunities to collaborate with vendors on domain-specific dataset creation or fine-tuning?
  • What processes are in place for anonymization or redaction of data before it enters AI systems? (A crude redaction sketch follows this list.)
  • How does the vendor support discoverability and metadata management for both inputs and outputs?
  • How will data flows be documented, monitored, and audited across the lifecycle?
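
On the anonymization question above: a crude but concrete first step is pattern-based scrubbing before any text leaves your environment. The regexes below are illustrative (US-style formats) and are not a substitute for a vetted PII-detection tool plus human review of what the patterns miss.

```python
# A minimal sketch of pre-ingest redaction; patterns are illustrative and
# incomplete by design.

import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a typed placeholder so the
    redaction itself stays auditable."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jo at jo@example.gov or 555-123-4567."))
# -> Contact Jo at [EMAIL REDACTED] or [PHONE REDACTED].
```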