Security is one of the most critical priorities for our customers in a world of growing digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world safer by empowering every user, customer, and developer with a security cloud that delivers end-to-end, simplified protection. The Microsoft Security organization advances this mission by helping secure digital technology platforms, devices, and clouds across customers’ heterogeneous environments, while also protecting Microsoft’s internal estate. Our culture is grounded in a growth mindset, inspiring excellence, and enabling teams and leaders to bring their full potential each day.

The Microsoft Threat Protection Research (MTP-R) Purple Team sits at the intersection of offense, defense, and intelligence, working across Microsoft Defender technologies to ensure telemetry, detections, and protections are effective against real-world cyberattacks.

We are looking for a senior-level red team security researcher with experience in adversary emulation, offensive tooling, and malware development to design and execute realistic attack simulations in an AI-first environment. This role will use agentic systems and LLM-driven workflows to scale attack development, automation, and simulation fidelity, while helping shape how AI-enabled offensive research is used to emulate modern adversaries in controlled, high-impact ways.
Job Responsibilities:
Design and execute adversary simulations that emulate real-world threat actors across endpoint, identity, cloud, and SaaS environments
Develop and modify offensive tooling, including custom payloads, loaders, and command-and-control (C2) frameworks
Conduct malware development and tradecraft research to replicate modern attacker techniques such as evasion, persistence, and lateral movement
Leverage threat intelligence to inform adversary emulation scenarios, including campaign design, TTP selection, and operational sequencing
Apply threat modeling frameworks such as MITRE ATT&CK to emulate realistic attack paths and identify defensive gaps
Utilize AI-enabled and agentic systems to generate attack variations, automate tradecraft execution, and scale simulation coverage
Partner with blue team and detection engineering teams to validate detections and improve defensive capabilities
Analyze telemetry generated from simulations to assess detection coverage and identify opportunities for improvement
Contribute to simulation reports, technical documentation, and internal knowledge sharing
Collaborate across teams to improve offensive tooling, methodologies, and research practices
Requirements:
Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field
OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 3+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
OR Bachelor's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 4+ years of experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
OR equivalent experience
Ability to meet Microsoft, customer, and/or government security screening requirements
Microsoft Cloud Background Check
Nice to have:
Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 3+ years of experience
OR Master's Degree AND 6+ years of experience
OR Bachelor's Degree AND 8+ years of experience
OR equivalent experience
3+ years of experience with coding
2+ years of experience in red team operations, adversary emulation, or offensive security research
1+ years of experience with large language models or machine learning
Experience with classical machine learning and deep learning methods
1+ years of experience performing threat intelligence research
Security-related certifications such as OSCP, OSWE, GPEN, GREM, or GCPN