Copilot Security is at the core of Microsoft’s mission to deliver trusted, human-centered AI experiences. We make security and resilience intrinsic to every Copilot interaction across devices, platforms, and ecosystems. Our work spans secure identity flows, defenses against emerging threats like prompt injection, and privacy-first systems that scale globally.

Copilot for consumers is entering a new era of agentic AI, where intelligent agents act on behalf of users across Windows, Edge, web, mobile, and third-party products. We’re seeking a Senior Software Engineer to help develop security features and solutions that harness agentic AI to protect customers and enable new capabilities in Copilot. You’ll contribute to designing and building AI-powered defenses, secure orchestration frameworks, and enabling technologies that empower Copilot to act safely and responsibly at scale.

This role is ideal for engineers who are passionate about applying technical skills to solve security challenges and build systems that balance innovation with trust.
Job Responsibilities:
Develop and ship agentic AI-powered security features that protect users from threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows.
Implement secure orchestration frameworks that enable Copilot to safely delegate, coordinate, and execute actions across devices, services, and platforms.
Invent and deploy new intelligent agents that leverage information flow analysis and apply common-sense and judgment guardrails for security and privacy.
Collaborate with product, engineering, security, privacy, and AI teams to adopt agentic security patterns and best practices across Copilot and MAI.
Monitor key metrics for agentic AI security and innovation, using data-driven insights to improve defenses and enablement.
Document secure agentic AI patterns, ensuring they address novel risks, support safe delegation, and enable responsible orchestration of actions.
Requirements:
Bachelor's Degree in Computer Science or related technical field AND 4+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python OR equivalent experience.
3+ years in technical engineering roles building large-scale services.
Hands-on experience designing and operating security-critical or AI-powered systems at scale, including agentic AI, secure orchestration, or advanced threat defenses.
Proven ability to design, build, and ship agentic AI features or frameworks.
Ability to clearly explain complex systems and security concepts to technical and non-technical stakeholders and influence cross-org roadmaps.
Agentic AI Development & Orchestration: Experience building production agent systems using frameworks such as LangGraph, Amazon Strands SDK, or similar platforms; familiarity with agentic design patterns including tool calling, multi-agent coordination, and secure delegation patterns.
Hands-on experience with distributed training frameworks (Ray, Slurm, HPC), containerization and orchestration technologies (Docker, Kubernetes) for ML model deployment, and ML lifecycle management in production environments.
Experience designing evaluation frameworks for LLM-based applications and implementing observability for agent systems using tools such as Phoenix, MLFlow, LangFuse, or custom eval harnesses; understanding of AI safety evaluation methodologies, including adversarial testing and red-teaming.
Experience integrating with Azure AI services, Azure OpenAI Service, or Microsoft security platforms (Azure AD, Defender, Purview).
Track record of mentoring less experienced engineers, driving adoption of standards and best practices across teams, and influencing technical roadmaps while balancing innovation velocity with fundamentals.