As the advertising ecosystem expands, sophisticated adversarial actors are leveraging generative AI, automation, and distributed infrastructure to bypass safety controls. The Ads Trust and Safety team requires a Principal Applied Scientist to contribute to the research and technical strategy of the Threat Modeling team. We are looking for a security domain expert who can advance the state of the art in threat modeling and adversarial defense. This role involves transitioning trust mechanisms from static verification to dynamic, behavior-based integrity systems. You will architect solutions to detect and neutralize high-complexity fraud vectors (e.g., phishing, payment fraud, cloaking, malware distribution, token misuse, and authentication abuse), ensuring the ads platform remains safe for users, advertisers, and publishers. The primary success metric is the robust identification and mitigation of advanced abuse vectors with minimal added friction for legitimate advertisers and minimal impact on ad-serving latency.
Job Responsibilities:
Strategic Threat Modeling: Develop and maintain comprehensive adversarial frameworks to map the lifecycle of emerging threats, from account takeover (ATO) to malicious payload delivery
Evolution of Advertiser Trust: Advance continuous, signal-based security protocols. Research and implement behavioral biometrics and proof-of-liveness models to detect synthetic identities and coordinated fraud rings
Adversarial Research: Proactively identify 'unknown unknown' vulnerabilities through red-teaming and exploratory data analysis, developing models to predict attacker behavior before widespread exploitation
Technical Leadership: Drive the technical roadmap for integrity and security, mentoring senior engineers and influencing cross-functional stakeholders on security investment priorities
Requirements:
Bachelor’s, Master’s, or PhD degree in Computer Science, Cybersecurity, Mathematics, or a related field, with 10+ years of related experience
Deep technical expertise in Cybersecurity, Anti-Abuse, or Adversarial Machine Learning
Strong programming skills in C++ or Python (at least one is required), with experience in building production-quality security or ML systems
Hands-on experience in one or more of the following: web security standards and authentication protocols (OAuth, OIDC); malware analysis, de-obfuscation, or reverse engineering; building fraud detection models at scale
Proven ability to design and implement defense mechanisms against complex abuse vectors (e.g., botnets, synthetic identity, evasion/cloaking)
Strong communication and collaboration skills, with experience articulating complex security risks to business and product leadership
Nice to have:
5+ years of experience in an Adversarial/Trust & Safety role at a major internet platform or cybersecurity firm
Familiarity with the Ad-Tech stack (RTB, OpenRTB) and associated fraud incentives
Background in Graph Neural Networks (GNNs) for fraud ring detection or behavioral biometrics
Track record of impact via security research publications, patents, or contributions to industry security standards