We are seeking an AI Risk Management Administrator to support both the establishment and the ongoing operation of the enterprise AI governance and risk management framework. This role plays a key part in defining, implementing, and maturing AI governance structures while ensuring AI systems are developed, deployed, and operated in alignment with regulatory requirements, ethical standards, internal policies, and the organization’s risk appetite. The administrator will partner with business, IT, legal, security, and compliance teams to operationalize governance processes, monitor AI risks, maintain documentation, and enable responsible, scalable AI practices across the enterprise.
Job Responsibilities:
Support the design, implementation, and continuous improvement of the enterprise AI governance structure, including operating models, roles, committees, and decision-making frameworks
Administer and maintain the AI risk management framework, including policies, standards, controls, and procedures, ensuring alignment with evolving governance needs
Bridge governance strategy and execution by supporting the operationalization of AI policies and standards across business and technical teams
Support AI system risk assessments across the AI lifecycle (design, development, deployment, operation, and retirement)
Maintain AI inventories, model documentation, risk registers, and audit artifacts (e.g., model cards, data lineage, controls evidence)
Monitor compliance with internal AI governance standards and external regulations (e.g., privacy, security, ethical AI requirements)
Coordinate governance forums and reviews with legal, compliance, security, and internal audit teams to address AI-related risks and ensure alignment with governance expectations
Track AI risk metrics, incidents, and remediation actions, and prepare reports for leadership, risk committees, and regulators
Support third-party and vendor AI risk reviews, including due diligence and ongoing monitoring
Assist with AI policy training, awareness, and communication to drive adoption of governance practices across the organization
Contribute to the maturation of AI governance capabilities, including tooling, automation, and integration into enterprise risk and technology processes
Requirements:
Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, Information Systems, Risk Management, or a related field
5+ years of experience in AI/ML, data science, model governance, or technology risk management (not just general IT risk)
Demonstrated experience implementing or operationalizing governance frameworks (e.g., AI governance, model risk management, data governance)
Strong understanding of AI risks, including bias, explainability, model drift, data quality, and security vulnerabilities
Experience working with AI regulations and frameworks (e.g., NIST AI RMF, EU AI Act, ISO/IEC AI standards, SR 11-7 model risk guidance)
Ability to translate between technical teams (data science/engineering) and control functions (risk, legal, compliance)
Strong analytical, problem-solving, and stakeholder management skills