We are seeking an experienced and versatile AI Governance Lead to build, operationalize, and continuously improve our enterprise AI Governance program. This role is responsible for defining and implementing the policies, controls, processes, and education necessary to ensure responsible, safe, and compliant use of AI across the organization. The ideal candidate will have a strong background in governance, risk, compliance, technical controls, and enablement, with the ability to collaborate across business, technical, and legal teams.
Job Responsibilities:
Program Leadership: Establish and manage the AI governance framework, operating model, and roadmap. Drive the creation and rollout of Responsible AI policies, standards, and training programs
Policy & Risk Management: Define and maintain policies for Responsible AI, acceptable use, and risk assessment frameworks. Partner with Legal, Privacy, Security, and Data Governance to ensure regulatory alignment (e.g., NIST AI RMF, ISO 42001, SOC 2, EU AI Act)
Oversight & Audit: Maintain an inventory of all AI systems, tools, and vendors. Conduct periodic audits and compliance reviews. Track remediation and risk mitigation activities
Technical Controls: Automate inventory collection, monitoring, and risk scoring. Integrate governance controls into SDLC and engineering workflows. Build and maintain tools for auditing, scanning, and model evaluation
Education & Enablement: Design and deliver AI training programs, including an AI Champions Program. Develop learning content, run workshops, and track adoption and competency metrics
Stakeholder Management: Lead cross-functional committees and manage relationships with key stakeholders. Drive adoption of governance processes and ensure effective change management
Continuous Improvement: Monitor industry trends, regulatory changes, and emerging risks. Continuously enhance governance practices, tools, and education
Requirements:
7+ years of experience in governance, risk, compliance, platform engineering, or technical enablement roles
Strong program management and cross-functional leadership skills
Experience with regulatory frameworks (NIST AI RMF, ISO 42001, SOC 2, EU AI Act) and AI safety/fairness concepts
Technical proficiency in Python, APIs, automation, and integrating controls into engineering pipelines (DevOps/MLOps experience a plus)
Excellent communication, policy writing, and curriculum development skills
Ability to translate complex technical concepts into clear policies and training
Experience building and running training or enablement programs is highly desirable
Nice to have:
Familiarity with LLM architectures and AI model evaluation (bias, fairness, interpretability, drift, hallucinations)
Experience with audit, evidence collection, and compliance testing
Strong analytical, documentation, and dashboarding skills