Come build community, explore your passions, and do your best work at Microsoft alongside thousands of university interns from every corner of the world. This opportunity will allow you to bring your aspirations, talent, potential, and excitement for the journey ahead.

As an AI Security Research Intern on the Autonomous Attack Disruption team, you will join the frontlines of Microsoft Defender's mission to stop attacks in near real-time. Under the mentorship of experienced researchers, you will use AI to analyze real-world attacker TTPs and build systems, including agentic pipelines and LLM-based threat analysis, that autonomously detect and disrupt attacks before adversaries reach their goals. This role requires a blend of applied security research expertise, AI fundamentals, and engineering skills to deliver production-ready protection at global scale. This is your chance to see your AI-powered research transformed into autonomous defense systems that protect millions of users.

At Microsoft, interns are embedded directly into research cycles, working on high-stakes projects that solve real-world security challenges. You will collaborate with global teams to translate complex research into automated protection logic that stops attackers in near real-time. You will work at the intersection of large language models, agentic AI frameworks, and security research, an area where the field is being defined in real time. You'll be empowered to build community, explore your passions, and achieve your goals while bringing your solutions and ideas to life on cutting-edge technology.
Job Responsibilities:
Investigate real-world advanced attacker TTPs and apply AI techniques
Apply security expertise combined with AI-driven methods to analyze massive telemetry sets
Contribute to the design and implementation of AI-powered capabilities
Assist in the refinement of protection coverage by analyzing real-world attack telemetry
Contribute to a strategic feedback loop by documenting findings
Partner with engineering, product, and other research teams
Explore and prototype with emerging AI tools and frameworks
Requirements:
Must have at least 3 additional semesters before graduation (graduation date of Summer 2027 or later)
Available to work 3 days a week
Proven hands-on experience in security research, threat hunting, or detection engineering roles
Proficiency in Python, C#, or similar languages
Hands-on experience with AI technologies
Nice to have:
Currently pursuing a Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Data Science, AI/Machine Learning, or a related field
Deep understanding of the modern threat landscape
Previous experience reasoning over large-scale datasets using big-data query languages
A proven threat-hunter mindset
Experience with LLMs, prompt engineering, or agentic AI frameworks
Interest in the intersection of AI and adversarial behavior