The AI Governance and Test Engineer plays a critical role in ensuring the responsible and effective deployment of AI solutions. This position is central to upholding regulatory compliance, meticulously monitoring the performance and accuracy of AI models, and conducting rigorous performance testing to validate the reliability and efficiency of AI objects throughout their lifecycle. The successful candidate will bridge the gap between AI development, operational excellence, and robust governance frameworks.
Job Responsibilities:
Proactively track and manage critical dates related to AI model lifecycles, including limitation expirations, policy decisions (PDs), and scheduled model updates
Prepare, review, and submit all necessary risk artifacts, ensuring strict adherence to internal policies and external regulatory requirements
Coordinate and facilitate review sessions with Model Risk Management (MRM) and other pertinent stakeholders to ensure AI models consistently meet established governance, risk, and compliance standards
Conduct in-depth data analysis of AI model usage patterns, performance metrics, and operational data to identify trends, anomalies, and areas for continuous improvement
Perform detailed manual Subject Matter Expert (SME) reviews of AI object outputs to rigorously assess and confirm their accuracy, relevance, and alignment with defined business objectives
Oversee and coordinate the implementation of updates to AI objects, aiming to minimize disruption while maximizing effectiveness and efficiency
Execute fine-tuning processes for AI objects to optimize their performance, accuracy, and responsiveness based on ongoing monitoring results, feedback loops, and evolving requirements
Design, develop, and execute comprehensive performance tests for AI objects to evaluate their scalability, responsiveness, stability, and resource utilization under various load and hyperparameter conditions
Analyze detailed test results to pinpoint performance bottlenecks, potential failure points, and areas for optimization within AI systems
Collaborate closely with AI development teams to implement performance improvements, conduct re-testing, and ensure solutions meet or exceed established performance benchmarks and non-functional requirements
Requirements:
Proven experience in AI/Machine Learning model development, operations, or testing
Strong understanding of AI/ML lifecycle, concepts, and deployment challenges
Familiarity with risk management, governance, and compliance frameworks, preferably within a regulated industry
Proficiency in data analysis tools and techniques, with the ability to interpret complex datasets
Experience with performance testing methodologies and tools
Excellent analytical, problem-solving, and critical thinking skills
Strong communication, collaboration, and stakeholder management abilities
Ability to work independently and as part of a cross-functional team in a fast-paced environment
What we offer:
27 days' annual leave (plus bank holidays)
A discretionary annual performance-related bonus
Private Medical Care & Life Insurance
Employee Assistance Program
Pension Plan
Paid Parental Leave
Special discounts for employees, family, and friends
Access to an array of learning and development resources