Job Description:
At OpenAI, our Trust, Safety & Risk Operations teams safeguard our products, users, and the company from abuse, fraud, scams, regulatory non-compliance, and other emerging risks. We operate at the intersection of operations, compliance, user trust, and safety, working closely with Legal, Policy, Engineering, Product, Go-To-Market, and external partners to ensure our platforms are safe, compliant, and trusted by a diverse, global user base.

We support users across ChatGPT, our API, enterprise offerings, and developer tools, handling sensitive inbound cases, building detection and enforcement systems, and scaling operational processes to meet the demands of a fast-moving, high-stakes environment.

We are seeking experienced, senior-level analysts who specialize in one or more of the following areas:

Content Integrity & Scaled Enforcement – Detecting, reviewing, and acting on policy violations, harmful content, and emerging abuse patterns at scale.

Emerging Risk Operations – Identifying, triaging, and mitigating new and complex safety, policy, or integrity challenges in a rapidly evolving AI landscape.

In this role, you will own high-sensitivity workflows, act as an incident manager for complex cases, and build scalable operational systems, including tooling, automation, and vendor processes, that reinforce user safety and trust while meeting our legal, ethical, and product obligations.

This role may involve exposure to sensitive content, including material that is sexual, violent, or otherwise disturbing.