
AI Safety Policy & Operations


ElevenLabs


Location:
United Kingdom

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We're looking for someone to join our Safety team and own key outcomes across policy, automation, and enterprise guardrails. You'll design integrity policies aligned with global regulations, and shape how enterprises implement guardrails when building on our APIs. You'll work within an established Safety organization, partnering with Safety Engineers, operations, and investigations while taking full ownership of your verticals. This means driving policy decisions end-to-end, architecting automation that multiplies the team's impact, and defining the safety frameworks our enterprise customers rely on. We're looking for a generalist who thrives owning outcomes, not someone who only knows one slice of Trust & Safety. You should be comfortable with ambiguity, fluent in the regulatory landscape, and excited to use AI to make safety operations faster, broader, and more robust.

Job Responsibility:

  • Design and evolve safety policies for audio AI, image/video AI, and agentic safety, aligned with ISO 42001, the EU AI Act, the DSA, US state laws, and global regulatory developments
  • Build scalable, AI-powered systems and workflows that dramatically reduce response times and increase policy coverage
  • Partner with Safety Engineers to translate policy requirements into automated detection, moderation, and enforcement systems
  • Drive cross-functional safety integration with product, engineering, legal, and operations teams, ensuring safety is embedded into the development lifecycle, not bolted on after
  • Respond to safety policy escalations: partner closely with moderation and investigations teams to triage, investigate, and resolve complex incidents, ensuring decisive and transparent action when user or platform integrity is at risk

Requirements:

  • Broad experience across Trust & Safety: policy, operations, investigations, and content moderation, not just one specialty
  • Track record of owning and delivering safety outcomes end-to-end, ideally in fast-moving, engineering-first environments
  • Deep familiarity with the global AI regulatory landscape: EU AI Act, DSA, US state laws, and emerging frameworks
  • Technically conversant: comfortable with dashboards, SQL, and ML concepts; able to collaborate deeply with engineers and to read automation code in Python
  • Strong risk calibration: you know when to move fast, when to pause, and how to balance user safety with product velocity
  • Exceptional communicator who can translate complex safety considerations for engineers, product managers, executives, and external stakeholders

Nice to have:

  • Audio or voice-specific Trust & Safety experience (voice cloning, synthetic media, audio deepfakes)
  • Experience in engineering-first organizations where safety shipped alongside product
  • Background designing safety frameworks for enterprise customers or API platforms
  • Familiarity with AI/ML pipelines and how to build guardrails into model deployment

What we offer:
  • Innovative culture
  • Growth paths
  • Learning & development: ElevenLabs proactively supports professional development through an annual discretionary stipend
  • Social travel: We also provide an annual discretionary stipend to meet up with colleagues each year, however you choose
  • Annual company offsite
  • Co-working: If you’re not located near one of our main hubs, we offer a monthly co-working stipend

Additional Information:

Job Posted:
February 04, 2026

Employment Type:
Full-time
Work Type:
Remote work


Similar Jobs for AI Safety Policy & Operations

Safety Policy Manager, AI Youth Well-Being & Child Safety

Meta’s Safety Policy team is seeking an experienced policy manager to support ou...
Location: United States, Austin, TX (+1 location)
Salary: 153000.00 - 224000.00 USD / Year
Meta
Expiration Date: Until further notice
Requirements:
  • 8+ years of experience in policy and product counsel related to youth, child safety, or AI
  • Demonstrated experience thinking and moving quickly to produce results under tight deadlines, juggling different projects at once, finding pragmatic and creative solutions to business issues, providing concise and business-focused advice, and making sound judgments
  • Experience in effectively working cross-functionally with numerous teams across public affairs, legal, operations, programs and product
  • Experience with consensus-building
  • Experience consistently working under your own initiative, seeking feedback and input where appropriate
  • Experience working with policy stakeholders and policymakers and representing companies at the ministerial-level
  • Experience working in a rapidly growing and productive environment
  • Demonstrated attention to detail and experience using analytical skills, authoring effective cross-functional and executive updates, communicating complex concepts with internal and external audiences
  • Demonstrated experience drafting policy documents and talking points and communicating with executives both written and orally
  • Proven experience consistently working effectively with others, adapting to changing circumstances, and supporting colleagues when needed
Job Responsibility:
  • Responsible for understanding the external landscape around AI policy and youth online safety and well-being, including regulatory, technology, and industry-wide developments worldwide
  • Responsible for understanding the global landscape of academics, civil society, advocates, organizations and other stakeholders focused on AI and youth
  • Work with internal and external stakeholders, to monitor, respond to and stay ahead of AI related youth online safety and well-being issues - including informing AI content and product policies, tools, operational responses, programs, and resources
  • Develop frameworks to proactively identify AI related online risks for youth and develop mitigation strategies, as well as effectively communicate said strategies to cross-functional partners
  • Advise and support Meta teams working on AI youth safety and well-being public policy, content and product policies, tools, operational responses, programs, and resources
  • Advise AI policy development teams on youth and child safety risks, ensuring systematic consultation and timely feedback on all youth/child-related AI policies
  • Provide input on youth and child safety risks in AI product decisions. Take an active role in product work-streams where Safety Policy expertise is required
  • Advise and help steer the AI policy research agenda, proactively engaging with internal and external experts
  • Draft and advise on talking points, legislative analysis, and advocacy positions for US federal, state, and global youth/AI policy issues. Support briefings and engagement with policymakers and regulators
  • Review and tailor communications for youth stakeholders, support external engagements, and ensure timely sharing of talking points
What we offer:
  • bonus
  • equity
  • benefits
  • Full-time

Product Policy – Policy Manager (Child Safety)

The Product Policy team is responsible for the development, implementation, enfo...
Location: United States, San Francisco
Salary: 261000.00 - 290000.00 USD / Year
OpenAI
Expiration Date: Until further notice
Requirements:
  • 4+ years of experience in child safety, trust & safety, policy, investigations, intelligence, or a related field, with exposure to how technology platforms manage child-related risks
  • Deep expertise in the child safety ecosystem — particularly as it relates to generative AI
  • Strong understanding of the child safety and AI policy landscape
  • Ability to translate expertise into clear, practical guidance for product teams and company leadership
  • Comfort navigating complexity and ambiguity
  • Excellent communication skills with demonstrated ability to communicate with product managers, engineers, researchers, and executives alike
  • Comfortable with ambiguity and enjoys going 0 to 1
  • Creative thinker with an eye for opportunities to leverage data to inform policies
  • Understanding of how government, enterprise, and other stakeholders think about child safety related policy issues
Job Responsibility:
  • Develop and maintain child-safety policy frameworks that govern how OpenAI products are designed, launched, and operated, including safeguards against exploitation, grooming, harmful content, and other child-related risks
  • Partner cross-functionally with Product, User Ops, Legal, Safety, Integrity, Communications, and Global Affairs to align on risk posture, product launches, incident response, and external commitments
  • Translate policy into practice by creating clear implementation standards, enforcement protocols, and escalation paths that engineering, operations, and integrity teams can embed directly into product and trust & safety systems
  • Identify opportunities to leverage data to inform our policy work
What we offer:
  • Offers Equity
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Full-time

Senior Policy Manager

At Bumble, we’re building a world where all relationships are healthy and equita...
Location: United States; United Kingdom (Austin; New York; London)
Salary: 125000.00 - 155000.00 USD / Year
Bumble Inc.
Expiration Date: Until further notice
Requirements:
  • Typically requires 6–8 years of experience, though we welcome candidates with alternative backgrounds that demonstrate equivalent skills
  • Deep experience in Trust & Safety policy, ML governance, AI safety, or related domains within technology platforms
  • Strong understanding of LLM systems, content moderation architectures, labeling frameworks, model evaluation methodologies, and safety intervention mechanisms
  • Experience designing or contributing to Responsible AI governance frameworks, including risk assessment, bias mitigation, and human-in-the-loop systems
  • Demonstrated ability to translate between technical and non-technical stakeholders, bridging policy, engineering, legal, and operational perspectives with clarity and respect
  • Proven ability to operate autonomously in ambiguous, fast-moving environments, taking ownership of complex initiatives and seeing them through from insight to impact
  • High fluency in data analysis and experimentation, with the ability to interpret enforcement metrics and drive data-informed decisions
  • Excellent written and verbal communication skills, including experience representing policy perspectives in cross-functional or regulator-adjacent discussions
  • Embodies Bumble’s values of Courage and Excellence by balancing innovation with responsible risk management, and approaches evolving AI systems with thoughtful curiosity and principled judgment
Job Responsibility:
  • Lead the transition of policy governance from legacy operational models toward an LLM-first enforcement architecture, embedding appropriate guardrails, escalation pathways, and human oversight to minimize risk
  • Own one or more complex policy domains — or drive cross-policy alignment across issue areas — auditing, iterating, and strengthening frameworks to ensure robustness and responsiveness to member needs
  • Partner cross-functionally with Product, Engineering, Data Science, Legal, and Operations to ensure effective policy implementation and consistent enforcement across platforms and regions
  • Design and maintain policy lifecycle governance processes, improving transparency, efficiency, and alignment between enforcement systems and written standards
  • Develop and support Responsible AI frameworks, including model governance principles, labeling standards, safety interventions, and review mechanisms that reflect regulatory and ethical best practices
  • Oversee moderation system performance by defining and monitoring key enforcement metrics, identifying gaps in precision, recall, and consistency at scale
  • Build structured feedback loops with internal teams and external partners to surface emerging risks, sociocultural nuances, and operational friction points — demonstrating Curiosity and collaborative ownership
  • Support programs that maintain compliance with global online safety and platform regulations, ensuring documentation, audit readiness, and defensible policy decisioning
  • Use AI-enabled analytics tools responsibly to evaluate enforcement trends, stress-test policy outcomes, and generate insights that translate into measurable member impact
  • Full-time

Program Manager, Product Policy

We are looking for a program manager to strengthen the operational backbone of t...
Location: United States, San Francisco
Salary: 261000.00 - 335000.00 USD / Year
OpenAI
Expiration Date: Until further notice
Requirements:
  • Experience running operational programs or processes in cross-functional environments
  • Strong organizational skills and attention to detail, with an instinct for improving systems
  • Comfort working with policy, technical, and operational stakeholders
  • Ability to execute independently while escalating issues appropriately
  • Background in policy / trust and safety
Job Responsibility:
  • Own and improve Product Policy’s core operational processes (e.g., policy updates, cross-functional handoffs, tracking and reporting)
  • Build and maintain systems to measure progress, surface risks, and ensure accountability across workstreams
  • Coordinate day-to-day execution with operations, integrity, safety, and legal partners
  • Support priority initiatives by organizing workplans, driving follow-ups, and keeping execution on track
  • Develop lightweight tools, templates, and documentation that make the team more efficient and predictable
  • Translate policy requirements into clear operational steps for partner teams
What we offer:
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Full-time

Safety Response Operations Lead

The Global Safety Response Operations Lead is a hands-on team lead who both mana...
Location: Singapore
Salary: Not provided
OpenAI
Expiration Date: Until further notice
Requirements:
  • 5+ years in Trust & Safety, Risk Operations, Investigations, Fraud, Annotation, or platform integrity
  • 4+ years of people leadership or senior-level operational ownership
  • Strong decision-maker in ambiguous, high-risk environments
  • Communicate complex safety and risk decisions clearly and credibly
  • Translate between frontline operations and strategic stakeholders
  • Skilled at influencing without authority
  • Deeply familiar with content moderation, user safety, fraud, or developer risk frameworks
  • Use data, tooling, and automation to improve quality, efficiency, and scale
  • Comfortable leading in a 24/7, high-pressure operational environment
Job Responsibility:
  • Lead and coach a regional team of Safety Response Analysts
  • Own regional operational outcomes, including utilization, SLA adherence, backlog health, and quality benchmarks
  • Handle and oversee the most complex and high-risk cases
  • Contribute directly to frontline work (20–30%), including investigations, enforcement decisions, and regulatory or legal escalations
  • Partner cross-functionally with Product, Policy, Legal, Investigations, and local market teams
  • Drive operational excellence and continuous improvement
  • Identify emerging risks and trends
  • Full-time

Content Integrity Analyst

We’re hiring experienced Trust & Safety / Content Integrity operators who can in...
Location: United States, San Francisco
Salary: 252000.00 - 280000.00 USD / Year
OpenAI
Expiration Date: Until further notice
Requirements:
  • 5+ years in Trust & Safety, integrity, risk, policy enforcement
  • Experience working with vendors is a plus
  • Experience with high-severity safety domains (for example: CBRN, cyber abuse)
  • Experience building QA programs, calibration loops, and measurable reviewer performance systems
  • Hands-on experience writing requirements for internal tools, piloting automation, or partnering closely with Engineering on safety systems
Job Responsibility:
  • Interpret and apply OpenAI’s usage policies to complex, novel scenarios
  • Provide clear guidance to customers and internal teams
  • Document edge cases and propose policy refinements
  • Triage, assess, and support actions on content and behavior that can drive real-world harm
  • Escalate appropriately and help drive cases to resolution
  • Support incident response and executive-visible escalations by producing clear assessments, recommending next steps, and coordinating with Legal, Compliance, Security, Product, and Engineering as needed
  • Design and operate processes for human-in-the-loop labeling, content/user reporting, appeals, enforcement actions, and continuous QA
  • Identify repeatable patterns, translate them into requirements, and partner with Engineering and Data teams to ship tooling and automation
  • Use quantitative and qualitative analysis to surface emerging abuse patterns, measure policy and tooling performance, and feed insights back into detection systems, product mitigations, and policy updates
  • Define and monitor KPIs, build calibration and QA programs, iterate on reviewer training, and improve guidelines and tooling based on error analysis
What we offer:
  • Offers Equity
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick or safe time (1 hour per 30 hours worked, or more, as required by applicable state or local law)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Full-time

Learning & Development Manager, AI Business

As Learning & Development Manager, AI Business you will own the learning system ...
Location: Mexico, Mexico City
Salary: Not provided
Prolific
Expiration Date: Until further notice
Requirements:
  • 5–7+ years of experience in Learning & Development, Training, or Enablement, ideally within tech, AI, research operations, or trust and safety environments
  • Proven experience building and scaling training programmes for large, distributed operational or service teams across multiple regions
  • Strong instructional design skills for complex, high-judgement work, including policy-heavy, safety-critical, or research-grade workflows
  • Comfort using operational and quality data to prioritise learning needs and measure training effectiveness
  • Experience designing assessments that go beyond basic quizzes, such as work samples, simulations, and calibration-based evaluation
  • A collaborative approach, with a track record of partnering with Operations, Quality, Product, and Research teams to drive measurable improvements
  • Clear written and verbal communication skills, with the ability to translate complex requirements into practical, accessible training
Job Responsibility:
  • Own the end-to-end learning ecosystem for AI Services, defining curricula and learning paths across roles, levels, and workflows
  • Design and scale onboarding, upskilling, and certification programmes that reduce time-to-proficiency and support consistent delivery across regions
  • Translate quality data, error trends, policy updates, and client or researcher feedback into targeted training and rapid-response modules
  • Design assessments and certifications that reflect real task complexity, including simulations, calibration exercises, and scenario-based evaluations
  • Define and track L&D metrics such as ramp curves, post-training quality deltas, and recertification outcomes, using data to continuously improve impact
  • Partner with Quality, Operations, Research, and AI Safety teams to ensure training reflects current standards, expectations, and escalation paths
  • Ensure all training content aligns with Prolific’s commitments to participant welfare, research integrity, and responsible AI
What we offer:
  • competitive salary
  • benefits
  • equity
  • opportunity to earn a cash variable element, such as a bonus or commission
  • hybrid working
  • impactful, mission-driven culture
  • Full-time

Learning & Development Manager, AI Business

Prolific isn’t just enabling AI innovation – we’re redefining it. While foundati...
Location: United Kingdom
Salary: Not provided
Prolific
Expiration Date: Until further notice
Requirements:
  • 5–7+ years of experience in Learning & Development, Training, or Enablement, ideally within tech, AI, research operations, or trust and safety environments
  • Proven experience building and scaling training programmes for large, distributed operational or service teams across multiple regions
  • Strong instructional design skills for complex, high-judgement work, including policy-heavy, safety-critical, or research-grade workflows
  • Comfort using operational and quality data to prioritise learning needs and measure training effectiveness
  • Experience designing assessments that go beyond basic quizzes, such as work samples, simulations, and calibration-based evaluation
  • A collaborative approach, with a track record of partnering with Operations, Quality, Product, and Research teams to drive measurable improvements
  • Clear written and verbal communication skills, with the ability to translate complex requirements into practical, accessible training
Job Responsibility:
  • Own the end-to-end learning ecosystem for AI Services, defining curricula and learning paths across roles, levels, and workflows
  • Design and scale onboarding, upskilling, and certification programmes that reduce time-to-proficiency and support consistent delivery across regions
  • Translate quality data, error trends, policy updates, and client or researcher feedback into targeted training and rapid-response modules
  • Design assessments and certifications that reflect real task complexity, including simulations, calibration exercises, and scenario-based evaluations
  • Define and track L&D metrics such as ramp curves, post-training quality deltas, and recertification outcomes, using data to continuously improve impact
  • Partner with Quality, Operations, Research, and AI Safety teams to ensure training reflects current standards, expectations, and escalation paths
  • Ensure all training content aligns with Prolific’s commitments to participant welfare, research integrity, and responsible AI
What we offer:
  • competitive salary
  • equity
  • benefits
  • hybrid working
  • Full-time