About the Team

Copilot Security builds the foundations that make Microsoft's AI experiences trusted, resilient, and safe. We design and implement security capabilities that protect users across Windows, Edge, web, mobile, and third-party ecosystems. Our work spans secure identity flows, defenses against threats like prompt injection, and privacy-first systems that scale globally.

About the Role

Copilot is entering a new era of agentic AI, where intelligent agents take actions on behalf of users. We're looking for a Software Engineer II with solid fundamentals and high growth potential: someone who can quickly deepen their expertise in AI-driven security and expand their ownership over time. You'll contribute to secure orchestration frameworks, AI-powered defenses, and the core systems that ensure Copilot's actions remain trustworthy. This role is ideal for engineers who enjoy solving complex technical problems, learning new AI-driven patterns, and building secure, scalable systems that balance innovation with user trust.

Why This Role Matters

Your work will directly shape how hundreds of millions of users experience safe, trustworthy, and innovative AI. You'll be at the forefront of defining how agentic AI can proactively defend users, mitigate emerging threats, and unlock new secure scenarios, making a global impact on Microsoft's most transformative products.
Job Responsibilities:
Build and ship security features that protect Copilot from threats such as prompt injection, adversarial manipulation, and unsafe agentic workflows
Implement secure orchestration components that allow Copilot to safely delegate and execute actions across devices, services, and platforms
Contribute to developing intelligent agents that apply information-flow reasoning, guardrails, and common-sense constraints for security and privacy
Collaborate with partner teams across engineering, product, security, privacy, and AI to adopt secure agentic patterns and best practices
Instrument and monitor key metrics for agentic AI security, using data to improve reliability, safety, and user trust
Write clear documentation for secure agentic patterns, including safe-delegation guidelines and emerging risk considerations
Demonstrate high growth potential by progressively expanding technical scope, autonomy, and ownership as you gain experience with agentic AI and security systems
Requirements:
Bachelor's Degree in Computer Science or a related technical field AND 2+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience
Nice to have:
Master's Degree in Computer Science or a related technical field AND 3+ years of technical engineering experience
Experience building production-quality software systems
1-2+ years building or operating large-scale distributed systems or services
Experience working on security-critical, privacy-sensitive, or AI-powered systems
Familiarity with agentic AI concepts such as tool calling, orchestration, or multi-agent workflows
Experience with modern cloud development, containerization (Docker, Kubernetes), or distributed compute frameworks
Exposure to evaluation or observability tooling for LLM-based applications (e.g., LangFuse, MLFlow, Phoenix) or interest in learning these systems
Ability to communicate technical concepts clearly and collaborate effectively across teams
Demonstrated high growth potential, with solid learning velocity and the ability to quickly take on broader areas of ownership
Growth mindset with interest in developing deeper expertise in AI security, orchestration, and emerging threat models