We're looking for someone to join our Safety team and own key outcomes across policy, automation, and enterprise guardrails. You'll design integrity policies aligned with global regulations and shape how enterprises implement guardrails when building on our APIs. You'll work within an established Safety organization, partnering with Safety Engineers, operations, and investigations while taking full ownership of your verticals. This means driving policy decisions end-to-end, architecting automation that multiplies the team's impact, and defining the safety frameworks our enterprise customers rely on. We're looking for a generalist who thrives on owning outcomes, not someone who only knows one slice of Trust & Safety. You should be comfortable with ambiguity, fluent in the regulatory landscape, and excited to use AI to make safety operations faster, broader, and more robust.
Job Responsibilities:
Design and evolve safety policies for audio AI, image/video AI, and agentic safety, aligned with ISO 42001, the EU AI Act, the DSA, US state laws, and global regulatory developments
Build scalable, AI-powered systems and workflows that dramatically reduce response times and increase policy coverage
Partner with Safety Engineers to translate policy requirements into automated detection, moderation, and enforcement systems
Drive cross-functional safety integration with product, engineering, legal, and operations teams, ensuring safety is embedded into the development lifecycle, not bolted on after
Respond to safety policy escalations: partner closely with moderation and investigations teams to triage, investigate, and resolve complex incidents, ensuring decisive and transparent action when user or platform integrity is at risk
Requirements:
Broad experience across Trust & Safety: policy, operations, investigations, and content moderation, not just one specialty
Track record of owning and delivering safety outcomes end-to-end, ideally in fast-moving, engineering-first environments
Deep familiarity with the global AI regulatory landscape: EU AI Act, DSA, US state laws, and emerging frameworks
Technically conversant: comfortable with dashboards, SQL, and ML concepts; able to collaborate deeply with engineers and to read automation code in Python
Strong risk calibration: you know when to move fast, when to pause, and how to balance user safety with product velocity
Exceptional communicator who can translate complex safety considerations for engineers, product managers, executives, and external stakeholders