Copilot is evolving into an agentic system that can plan, reason, and execute actions across tools, data, and services. Securing such a system cannot rely on static controls, offline review, or policy‑only enforcement. It requires runtime defenses that adapt to intent, behavior, and context as the system operates.

Copilot Security and Privacy is responsible for building these defenses directly into Copilot. Our work focuses on new security primitives for agentic AI, including runtime misuse detection, adaptive guardrails, containment and isolation mechanisms, and feedback‑driven control systems informed by offensive security research.

We are hiring a Principal Technical Program Manager (TPM) to own the end‑to‑end delivery of these capabilities. This is a deeply technical execution role for someone who can operate at the boundary of security engineering, AI research, and platform systems, turning ambiguous threat models into shippable, operable defenses deployed in a globally scaled AI product.
Job Responsibilities:
Own Delivery of In‑Product AI Threat Defenses
Translate Threat Models into Executable Systems
Drive Cross‑Cutting Technical Execution
Ensure Operability at Runtime
Requirements:
Bachelor's Degree AND 6+ years of experience in engineering, product/technical program management, data analysis, or product development, OR equivalent experience
3+ years of experience managing cross-functional and/or cross-team projects
Proven ability to lead execution in high‑ambiguity environments where requirements, threats, and system behavior evolve rapidly
Solid systems thinking: ability to reason about execution paths, failure modes, and adversarial behavior
Track record of making sound technical tradeoffs and shipping durable solutions without relying on heavy process
Background in security engineering, distributed systems, applied research, or ML systems prior to or alongside TPM work
Experience delivering runtime detection, abuse prevention, or adaptive enforcement systems
Familiarity with agentic AI systems, LLM‑based products, or non‑deterministic execution environments
Experience partnering closely with offensive security or red‑team functions
Demonstrated ability to translate research, prototypes, or threat models into production‑grade systems
Solid analytical skills, including working with telemetry, signals, and feedback loops
Nice to have:
Bachelor's Degree AND 12+ years of experience in engineering, product/technical program management, data analysis, or product development, OR equivalent experience