Job Description:
The Account & Platform Integrity team protects OpenAI’s ecosystem from fraud, impersonation, abuse, and account-level threats. We ensure that the people and organizations using OpenAI are who they claim to be, that access is used appropriately, and that bad actors are prevented from exploiting the platform. We operate at the intersection of identity, access, compliance, and abuse prevention, working closely with Product, Engineering, Legal, Go-To-Market, and Support teams to stop harmful activity before it impacts users, customers, or the business. Our work directly protects revenue, user trust, and platform safety across ChatGPT, the API, and enterprise products.

We’re hiring a Fraud & Risk Analyst to help safeguard OpenAI by investigating, validating, and monitoring customer accounts and organizations. You will focus on identity, legitimacy, and risk, ensuring accounts are properly verified, access is appropriate, and emerging threats are detected early. You’ll handle sensitive and high-stakes investigations involving fraud, impersonation, sanctions, misuse of access, and coordinated abuse. Your work will directly influence who can use OpenAI’s products and how safely we can scale.

Note: This role may involve reviewing sensitive, confidential, or disturbing content.