We are seeking an experienced AI Governance Engineer to design, build, and automate the technical controls and processes that underpin our enterprise AI Governance program. This role will also serve as a key security architect for AI systems, ensuring robust cybersecurity, risk management, and compliance across all AI initiatives. You will collaborate closely with cybersecurity, privacy, and legal teams to safeguard AI assets and data, and to ensure responsible, safe, and compliant use of AI throughout the organization.
Job Responsibilities:
Technical Governance Architecture: Design and implement automated inventory collection, monitoring, and risk scoring for all AI systems, models, and tools
Security Integration: Architect and enforce security controls for AI systems, including access management, data protection, vulnerability scanning, and incident response protocols
Workflow Integration: Build and integrate governance controls, approval workflows, and checkpoints into the SDLC and engineering pipelines
Automation & Tooling: Develop tools for auditing, scanning, and evaluating AI models (e.g., for toxicity, drift, hallucinations). Implement model documentation standards (model cards, evaluation pipelines)
Cyber Risk Management: Conduct risk assessments for AI systems, identify potential threats, and implement mitigation strategies. Collaborate with cybersecurity teams to ensure AI systems comply with organizational and regulatory security standards
Compliance & Oversight: Support evidence collection for audits (SOC2, ISO, internal/external regulators). Ensure technical controls meet regulatory and policy requirements
Collaboration: Work closely with Security, Privacy, Legal, Data Governance, and Engineering teams to embed governance and security into product and platform development
Continuous Improvement: Monitor emerging risks, regulatory changes, and industry best practices. Enhance automation, controls, and technical documentation as needed
Enablement: Support training and enablement initiatives by building toolkits and technical resources for engineers and business users
Requirements:
5+ years in DevOps, MLOps, platform engineering, backend development, or cybersecurity, with a focus on automation and controls
Strong programming skills (Python preferred) and experience with APIs and automation frameworks
Experience integrating governance, risk, or compliance controls into engineering workflows
Deep understanding of cybersecurity principles, threat modeling, vulnerability management, and incident response
Familiarity with regulatory frameworks (NIST AI RMF, ISO 42001, SOC2, EU AI Act) and AI safety/fairness concepts
Experience building tools for model evaluation, monitoring, and documentation
Strong analytical, troubleshooting, and documentation skills
Excellent collaboration and communication skills
Nice to have:
Experience with cloud security, identity and access management, and secure software development practices
Understanding of LLM architectures and AI model evaluation (bias, fairness, interpretability, drift, hallucinations)
Experience with audit, evidence collection, and compliance testing
Familiarity with JIRA, CI/CD tools, and cloud platforms
What we offer:
Flexible working arrangements
Programs and plans for a healthy mind, body, wallet and life