Gamma is tackling one of the most critical challenges for any fast-growing AI platform: keeping millions of users safe while maintaining the creative freedom that makes our product magical. As a Trust & Safety Engineer, you'll architect the systems that protect our platform from phishing, abuse, fraud, and malicious content—all while ensuring Gamma stays fast, reliable, and delightful to use. You'll join a small, high-impact team building the foundation for trust at scale. This role combines systems thinking with rapid execution: you'll design infrastructure that serves millions, ship improvements daily, and collaborate across engineering, product, and design to define how Gamma approaches safety long-term.
Job Responsibilities:
Own detection and prevention systems for fraud, abuse, spam, and malicious content across millions of daily users
Design and build scalable trust infrastructure including rate limiting, content scanning, anomaly detection, and account security
Ship trust and safety features daily across our stack
Build tools to empower our internal support teams to investigate and take action on suspicious and malicious activity
Partner with engineering teams and product managers to balance security with user experience
Investigate and resolve complex security incidents with limited context
Shape Gamma's long-term trust and safety strategy and technical roadmap
Requirements:
5+ years as a software engineer with a focus on abuse, spam, or fraud prevention
Strong systems thinking and experience building highly available web APIs
Hands-on experience implementing trust features like rate limiting, content detection, and fraud prevention
Proficiency in at least one programming language and comfort working across a modern web stack
Experience with event streaming systems (Redis, Kafka, or similar)
Passion for security, user protection, and solving problems at scale
Nice to have:
Computer science degree or equivalent experience
Experience with AI prompting and LLMs for content moderation
Familiarity with TypeScript, Prisma, Apollo GraphQL, or AWS