Extensible Responsible AI Technical Controls Evaluator (XRAE)

The Asian Development Bank (ADB) is seeking innovative open‑source solutions to design and build a lightweight, agentic Responsible AI (RAI) evaluation module, the eXtensible Responsible AI Technical Controls Evaluator (XRAE).

The Challenge

XRAE will translate ADB’s Responsible AI Framework (RAIF) into verifiable, automated technical controls, enabling consistent, scalable, and auditable assessments of AI solutions across Build and Procure use cases. The solution will help reduce reliance on manual specialist reviews while strengthening governance, transparency, and risk management across the AI lifecycle.
 
This opportunity is offered under the ADB Digital Sandbox Program, which lists further opportunities alongside this one.
 

Objectives

Participants are invited to design and develop an open‑source, Python‑based agentic module that can:

  1. Translate the eight RAIF principles into automated, technically verifiable evaluators

  2. Execute artefact‑adaptive evaluations of AI systems and documentation

  3. Generate standardized RAIF‑mapped scorecards and reports

  4. Focus specifically on AI‑related risks, excluding generic application security checks already covered by existing tooling
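To make the first three objectives concrete, the module could be organized around small, composable evaluator objects, each mapped to one RAIF principle and restricted to the artefact kinds it can assess. The sketch below is a hypothetical illustration, assuming made-up names (`Evaluator`, `Artefact`, `run_evaluations`); it is not part of the RAIF or any existing XRAE codebase.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Artefact:
    kind: str      # e.g. "model", "dataset", "documentation"
    payload: dict  # artefact content or metadata

@dataclass
class Evaluator:
    principle: str                      # RAIF principle this check maps to
    applies_to: set[str]                # artefact kinds it can evaluate
    check: Callable[[Artefact], float]  # returns a score in [0.0, 1.0]

def run_evaluations(evaluators: list[Evaluator], artefact: Artefact) -> dict:
    """Run only the evaluators that apply to this artefact (artefact-adaptive)."""
    return {
        ev.principle: ev.check(artefact)
        for ev in evaluators
        if artefact.kind in ev.applies_to
    }

# Toy example: a "transparency" evaluator that checks documentation completeness.
transparency = Evaluator(
    principle="transparency",
    applies_to={"documentation"},
    check=lambda a: 1.0 if a.payload.get("model_card") else 0.0,
)
docs = Artefact(kind="documentation", payload={"model_card": "present"})
scores = run_evaluations([transparency], docs)
# scores == {"transparency": 1.0}
```

One advantage of this shape is that each RAIF principle can accumulate several independent checks, and the `applies_to` filter keeps evaluations artefact-adaptive without a central dispatch table.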

Expected Deliverables

  • Open‑source XRAE agentic module with Software Bill of Materials (SBOM)

  • RAIF‑mapped evaluator definitions and control logic

  • JSON scorecard schemas and sample outputs

  • CLI and API interfaces

  • Supporting documentation and knowledge‑transfer materials

  • Reusable AI test suites (e.g., adversarial, fairness, toxicity, groundedness)

  • HTML/PDF reporting templates
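As a rough illustration of the "JSON scorecard schemas and sample outputs" deliverable, a RAIF‑mapped scorecard might look like the following. All field names here (`schema_version`, `principle`, `score`, `evidence`) are assumptions for illustration, not a prescribed schema.

```python
import json

# Hypothetical RAIF-mapped scorecard; field names are illustrative only.
scorecard = {
    "schema_version": "0.1",
    "artefact": "documentation",
    "results": [
        {"principle": "transparency", "score": 1.0,
         "evidence": ["model card present"]},
        {"principle": "fairness", "score": 0.75,
         "evidence": ["demographic parity gap within tolerance"]},
    ],
}

# Serialize for the CLI/API and downstream HTML/PDF report templates.
print(json.dumps(scorecard, indent=2))
```

A machine-readable scorecard like this would let the same evaluation results feed the CLI output, the API responses, and the HTML/PDF reporting templates from a single source of truth.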