The Lead AI Safety and Enablement Engineer ensures the safe, reliable, and scalable use of AI and machine learning across Exact Sciences. This role focuses on developing and implementing the systems, tools, and frameworks that embed responsible AI principles into the organization's technology ecosystem. The position combines software engineering expertise with a strong understanding of AI risk management, compliance, and observability.
Job Responsibilities:
Support implementation of enterprise standards for AI safety, transparency, and reliability
Develop and maintain shared AI safety tools such as model catalogs, metadata registries, and monitoring systems for bias, drift, and performance
Build APIs, templates, and SDKs that integrate governance and validation into AI/ML development pipelines
Contribute to the design of observability and telemetry solutions for continuous monitoring of model and data quality
Collaborate with Legal, Privacy, Compliance, and InfoSec teams to translate AI policies into automated controls and technical safeguards
Work with ML and GenAI teams to embed validation and safety checkpoints in AI workflows
Participate in cross-functional reviews and discussions supporting AI governance and responsible AI practices
Promote awareness and adoption of responsible AI principles through tools, documentation, and knowledge sharing
Support and comply with the company’s Quality Management System policies and procedures
Uphold the company's mission and values through accountability, innovation, integrity, quality, and teamwork
Maintain regular and reliable attendance
Act with an inclusive mindset and model these behaviors for the organization
Requirements:
Bachelor’s degree in Computer Science, Software Engineering, Artificial Intelligence, or a related field
8 years of experience in software, ML systems, or platform engineering
Practical experience with AI lifecycle tools (e.g., MLflow, Arize, WhyLabs, Label Studio)
Proficiency in Python, CI/CD processes, and cloud platforms (AWS, Azure, or GCP)
Experience developing scalable and compliant ML systems and tools
Applicants must be currently authorized to work, on a full- or part-time basis, in the country where the work will be performed
Nice to have:
Master’s degree in Computer Science, Data Engineering, or AI
Experience implementing AI assurance, observability, or risk management frameworks
Knowledge of GenAI, LLM evaluation, and prompt safety practices
Familiarity with FDA, HIPAA, or GxP compliance standards
What we offer:
Relocation assistance provided to those not local to the area and willing to relocate
Paid time off (including days for vacation, holidays, volunteering, and personal time)
Paid leave for parents and caregivers
A retirement savings plan
Wellness support
Health benefits including medical, prescription drug, dental, and vision coverage