Principal AI Security and Safety Researcher

Microsoft Corporation

Location:
Redmond, United States


Contract Type:
Not provided

Salary:

139,900.00 - 274,800.00 USD / Year

Job Description:

Security represents one of the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. Are you a red teamer looking to break into the AI field? Do you want to find AI failures in Microsoft’s largest AI systems, impacting millions of users? Join Microsoft’s AI Red Team, where you'll work alongside security and AI hacking experts to proactively test for failures in Microsoft’s biggest AI systems. We are looking for a Principal AI Security and Safety Researcher who can be a red teamer dedicated to making AI security better and helping our customers grow with our AI systems. In AI red teaming, you'll apply the newest AI security, frontier harms, and safety research to emulate adversarial hacking of Microsoft’s AI models, systems, products, and features, advising product teams on how to mitigate risks before technology reaches our customers. This role will also serve as our technical lead in AI frontier harms, such as autonomy and loss of control of AI systems, and uplift in chemistry or biology.
Not only will you set cross-harm strategy and advise on implementation of Frontier Model Forum and industry best practices within the team, you’ll also serve as a coach to the operators leading the individual harm strategies in each area on day-to-day red teaming. In addition to frontier experience, we want AI-obsessed hacker mindsets to join our team, bringing leadership and comfort with ambiguity. The Team & Work: Our team is an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Safety & Responsible AI experts, AI researchers, and software developers with the mission of proactively finding failures across all of Microsoft’s AI portfolio. In this role, you will red team AI models, such as our Phi series and MAI models, and applications, including Bing Copilot, Security Copilot, GitHub Copilot, Office Copilot, and Windows Copilot. The work is sprint-based: together with AI Safety, Security, and Product Development teams, you'll run operations that aim to find safety and security risks before they happen. Our reporting and findings directly inform key internal business decisions and leadership. This role will also focus on our team’s approach to frontier AI model harms, requiring you to track AI red teaming operations in parallel with driving strategy and informing industry-level discussions on autonomy, CBRN, harmful manipulation, cyber, and more novel harms. This is a fast-moving team with multiple roles and responsibilities within the AI Security and Safety space; people who love to provide agile, practical insights and who enjoy jumping in to solve ambiguous problems excel in this role.

Job Responsibility:

  • Lead cross-domain frontier harms strategy, represent the team at industry frontier forums, and coach individual operator leads on specific harm areas
  • Discover and exploit GenAI vulnerabilities end-to-end in order to assess the safety of systems
  • Manage product group stakeholders as priority recipients and collaborators for operational sprints
  • Drive clarity on communication and reporting for red teaming peers when working with product groups
  • Work alongside traditional offensive security engineers, adversarial ML experts, and developers to land responsible AI operations while creating a culture of positive, inclusive problem solving

Requirements:

  • Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 3+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
  • OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 4+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
  • OR Bachelor's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 6+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
  • OR equivalent experience
  • Ability to meet Microsoft, customer and/or government security screening requirements is required for this role
  • This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter

Nice to have:

  • Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 5+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
  • OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 8+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
  • OR Bachelor's Degree in Statistics, Mathematics, Computer Security, or related field AND 12+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection
  • OR equivalent experience
  • Related Fields include: AI Security, AI Safety, Biology AND an applied background, Chemistry AND an applied background, Cybersecurity, Nuclear Physics, Machine Learning, and more

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for Principal AI Security and Safety Researcher

Senior Principal Machine Learning Engineer

You’ll form a new team of passionate engineers dedicated to building and scaling...
Location: United States
Salary: 222,300.00 - 348,975.00 USD / Year
Company: Atlassian
Expiration Date: Until further notice
Requirements:
  • Bachelor’s, Master’s, or PhD in Computer Science, Statistics, Mathematics, or a related field, or equivalent practical experience
  • 12+ years of industry experience in machine learning, data science, or AI, with a proven track record of delivering production-grade ML systems
  • Deep expertise in Python, Go, or Java, with the ability to write performant, production-quality code
  • Familiarity with SQL, Spark, and cloud data environments (e.g., AWS, GCP, Databricks)
  • Experience building and scaling ML models for business-critical applications, ideally in security, privacy, anti-abuse, or compliance domains
  • Strong communication skills, able to explain complex ML concepts to diverse audiences and influence stakeholders
  • Demonstrated ability to solve ambiguous, complex problems and drive projects from ideation to production
  • Agile development mindset, with a focus on iterative improvement and business impact
Job Responsibility:
  • Lead AI/ML Strategy for Trust: Drive the development and implementation of advanced machine learning algorithms and AI systems for Trust, Security, Product Abuse, and Compliance use cases (e.g., threat detection, vulnerability management, privacy automation, AI safety)
  • Architect and Scale ML Platforms: Design and build scalable, secure, and reliable ML infrastructure and pipelines, ensuring compliance with privacy and regulatory requirements
  • AI Safety and Responsible AI: Develop and champion AI safety practices, including output moderation, explainability, and alignment with evolving regulatory frameworks
  • Cross-Functional Collaboration: Partner with product, engineering, security, privacy, and analytics teams to deliver transformative AI/ML solutions that enhance Atlassian’s trust posture
  • Mentorship and Leadership: Mentor and guide ML engineers and data scientists, fostering a culture of technical excellence, innovation, and continuous improvement
  • Innovation and Research: Stay at the forefront of AI/ML research, evaluating and applying the latest techniques (e.g., LLMs, anomaly detection, privacy-preserving ML) to real-world Trust challenges
  • Platform Enablement: Build reusable ML services and APIs that empower other teams to integrate AI/ML into their products and workflows
  • Operational Excellence: Ensure high availability, reliability, and security of all ML-powered Trust platforms and services
What we offer:
  • health and wellbeing resources
  • paid volunteer days
  • benefits, bonuses, commissions, and equity
Employment Type: Fulltime

Principal Product Manager- AI Integrity

The AI Integrity & Provenance team builds post‑deployment safety, abuse monitori...
Location: Redmond, United States
Salary: 139,900.00 - 274,800.00 USD / Year
Company: Microsoft Corporation
Expiration Date: Until further notice
Requirements:
  • Bachelor's Degree AND 8+ years experience in product/program management OR equivalent experience
  • Ability to meet Microsoft, customer and/or government security screening requirements are required for this role
  • Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter
Job Responsibility:
  • Lead product strategy for AI Integrity Foundations across provenance, abuse monitoring, incident response, and social listening, enabling safe, accountable, and resilient deployment of AI systems and agents at scale
  • Define the long-term vision, strategy, and roadmap for foundational integrity capabilities within Azure AI Foundry, ensuring consistent post-deployment safeguards across models, applications, and agentic workflows
  • Improve abuse monitoring and detection systems that identify and mitigate real-world AI threats and misuse, including prompt injection, jailbreaks, data exfiltration, malicious tool calls, coordinated abuse, model exploitation and other novel vectors
  • Own incident response product capabilities, enabling rapid detection, triage, investigation, and remediation of AI-related safety and security incidents, with clear metrics for MTTR, coverage, and enforcement effectiveness
  • Evolve provenance and content authenticity capabilities, supporting traceability, attribution, auditability, and regulatory requirements for trustworthy AI outputs
  • Partner closely with security engineers, red teams, AI researchers, and integrity analysts to translate emerging attack patterns, abuse signals, and novel harm vectors into durable, productized protections
  • Integrate AI integrity and security capabilities with Microsoft’s broader ecosystem, including Defender (threat detection and response), Entra (identity and access control), and Purview (data protection, governance, and compliance)
  • Drive 0‑to‑1 product development, taking new integrity and safety concepts from early experimentation through production launch, customer adoption, and operational maturity
  • Establish and own metrics and dashboards for AI integrity posture and product success, including detection coverage, signal quality, response effectiveness, customer impact, and regulatory readiness
Employment Type: Fulltime

Principal Engineer

The Principal AI/ML Operations Engineer leads the architecture, automation, and ...
Location: Pleasanton, California, United States
Salary: 251,000.00 - 314,500.00 USD / Year
Company: BlackLine
Expiration Date: Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related field
  • 10+ years in ML infrastructure, DevOps, and software system architecture
  • 4+ years in leading MLOps or AI Ops platforms
  • Strong programming skills in languages such as Python, Java, or Scala
  • Expertise in ML frameworks (TensorFlow, PyTorch, scikit-learn) and orchestration tools (Airflow, Kubeflow, Vertex AI, MLflow)
  • Proven experience operating production pipelines for ML and LLM-based systems across cloud ecosystems (GCP, AWS, Azure)
  • Deep familiarity with LangChain, LangGraph, ADK or similar agentic system runtime management
  • Strong competencies in CI/CD, IaC, and DevSecOps pipelines integrating testing, compliance, and deployment automation
  • Hands-on experience with observability stacks (Prometheus, Grafana, New Relic) for model and agent performance tracking
  • Understanding of governance frameworks for Responsible AI, auditability, and cost metering across training and inference workloads
Job Responsibility:
  • Define enterprise-level standards and reference architectures for ML-Ops and AIOps systems
  • Partner with data science, security, and product teams to set evaluation and governance standards (Guardrails, Bias, Drift, Latency SLAs)
  • Mentor senior engineers and drive design reviews for ML pipelines, model registries, and agentic runtime environments
  • Lead incident response and reliability strategies for ML/AI systems
  • Lead the deployment of AI models and systems in various environments
  • Collaborate with development teams to integrate AI solutions into existing workflows and applications
  • Ensure seamless integration with different platforms and technologies
  • Define and manage MCP Registry for agentic component onboarding, lifecycle versioning, and dependency governance
  • Build CI/CD pipelines automating LLM agent deployment, policy validation, and prompt evaluation of workflows
  • Develop and operationalize experimentation frameworks for agent evaluations, scenario regression, and performance analytics
What we offer:
  • short-term and long-term incentive programs
  • robust offering of benefit and wellness plans
Employment Type: Fulltime

Executive Director, Agentic AI

The Executive Director, Agentic AI will define and lead the enterprise strategy,...
Location: Sacramento, United States
Salary: 175,100.00 - 334,750.00 USD / Year
Company: CVS Health
Expiration Date: May 30, 2026
Requirements:
  • 12+ years in software engineering, platforms, or AI/ML, with 5+ years in senior leadership roles
  • Hands-on experience delivering AI systems at enterprise scale (not just experimentation)
  • Deep understanding of: LLMs, SLMs, RAG, embeddings, vector databases
  • Agent frameworks and orchestration patterns
  • Distributed systems, APIs, event-driven architectures
  • Proven ability to operate in regulated, high-availability environments
  • Strong executive communication and stakeholder-management skills
Job Responsibility:
  • Define the enterprise Agentic AI vision and roadmap, aligned to business outcomes (cost reduction, revenue growth, productivity, experience uplift)
  • Establish clear differentiation between LLM tools, copilots, workflows, and autonomous/multi-agent systems
  • Identify and prioritize high-value agentic use cases (e.g., customer support resolution, claims/prior auth automation, contract leakage reduction, operational orchestration, developer productivity)
  • Own the design and evolution of the Agentic AI Platform, including: Multi-agent frameworks (planner, executor, verifier, critic, retriever agents)
  • Tool/function calling and API orchestration
  • RAG, memory, state management, and context persistence
  • Human-in-the-loop / human-on-the-loop controls
  • Define standards for agent lifecycle management (design, testing, deployment, observability, rollback)
  • Partner with Digital Platform and Integration teams to ensure agents are API-first, event-driven, and scalable
  • Lead delivery of production-grade agentic solutions, not POCs
What we offer:
  • Affordable medical plan options
  • 401(k) plan (including matching company contributions)
  • Employee stock purchase plan
  • No-cost programs for all colleagues including wellness screenings, tobacco cessation and weight management programs, confidential counseling and financial coaching
  • Paid time off
  • Flexible work schedules
  • Family leave
  • Dependent care resources
  • Colleague assistance programs
  • Tuition assistance
Employment Type: Fulltime

Principal Product Manager, AI

We are looking for a Lead Product Manager – AI/ML to own the strategy and roadma...
Location: Boston, United States
Salary: 174,000.00 - 256,000.00 USD / Year
Company: SimpliSafe
Expiration Date: Until further notice
Requirements:
  • 8+ years of product management experience
  • At least 3 years shipping AI or ML-driven features at consumer scale
  • Proven ability to define and champion a multi-year technical product vision
  • Sufficient technical depth to work credibly with AI/ML engineers
  • Experience defining quality and success criteria for systems where errors carry real consequences
  • Strong written and verbal communication
  • Comfort operating in ambiguity with incomplete data
  • Experience acting as a strategic peer in a matrixed organization
Job Responsibility:
  • Define the multi-year strategy and roadmap for SimpliSafe’s AI/ML capabilities layer
  • Translate complex product needs into crisp, actionable requirements for the AI/ML engineering team
  • Establish the quality bar for model performance in production
  • Own platform decisions including model evaluation frameworks, data pipeline architecture, and trade-offs
  • Manage production model health
  • Build and maintain alignment across product, engineering, data science, design, and monitoring operations
  • Drive the strategy for AI safety, governance, and compliance
  • Partner with the Monitoring Product and Operations teams to define AI capability requirements
  • Drive measurable reduction in false alarm rates and response latency
  • Represent AI capability constraints and opportunities clearly
What we offer:
  • A mission- and values-driven culture and a safe, inclusive environment
  • A comprehensive total rewards package
  • Free SimpliSafe system and professional monitoring for your home
  • Employee Resource Groups (ERGs)
  • Participation in our annual bonus program, equity, and other forms of compensation
  • A full range of medical, retirement, and lifestyle benefits
Employment Type: Fulltime

Principal Product Manager - Microsoft AI and Copilot

Microsoft Copilot is evolving from a chat interface into an intelligent, agentic...
Location: Tokyo, Japan
Salary: Not provided
Company: Microsoft Corporation
Expiration Date: Until further notice
Requirements:
  • 7+ years of software product management experience, taking a product from user need and prototype through engineering to market
  • Hands-on experience delivering AI or generative-AI-powered features or products
  • Experience working on enterprise or business-facing products, including IT, security, or operational constraints
  • Experience evaluating product quality using both quantitative metrics and qualitative feedback, including cases where release decisions were adjusted or delayed
  • Proven experience working cross-functionally with engineering, design, research, and business stakeholders
Job Responsibility:
  • Define Copilot’s Identity & Expression strategy across text, voice, and UI-aware surfaces, including how AI agents express reasoning, confidence, uncertainty, and progress
  • Own Mico as the reference implementation of Copilot Identity & Expression, ensuring it evolves as a platform capability rather than a standalone feature
  • Translate expressive and agentic AI capabilities into clear enterprise value, such as onboarding, workflow guidance, and reduced cognitive load
  • Define enterprise trust models for expressive AI, including governance, admin control, safety constraints, and predictable failure modes
  • Lead AI evaluation strategy for expressive and agentic experiences, defining quality bars beyond accuracy: trust, tone, appropriateness, and user confidence
  • Use Japan as a strategic design and enterprise pilot market, incorporating cultural sensitivity, politeness, and indirect guidance into global Copilot standards
  • Partner closely with engineering, design, research, security, legal, and go-to-market teams across Japan, the US, and China to deliver aligned Copilot experiences
  • Communicate product vision and trade-offs clearly to executive stakeholders, representing Identity & Expression as a core Copilot system
Employment Type: Fulltime

Principal Product Manager

The Microsoft Discovery and Quantum (MDQ) team is seeking a visionary and techni...
Location: Redmond, United States
Salary: 139,900.00 - 274,800.00 USD / Year
Company: Microsoft Corporation
Expiration Date: Until further notice
Requirements:
  • Bachelor's Degree AND 8+ years experience in product/service/program management or software development OR equivalent experience
  • Ability to meet Microsoft, customer and/or government security screening requirements
  • Microsoft Cloud Background Check upon hire/transfer and every two years thereafter
Job Responsibility:
  • Define and drive the long-term product strategy and roadmap for AI-driven enterprise solutions with a focus on quality, experimentation, and responsible AI
  • Lead cross-functional teams to deliver scalable, secure, and compliant AI products that meet Microsoft’s high standards for trust and customer satisfaction
  • Partner with engineering, research, design, and responsible AI teams to ensure seamless execution and alignment with Microsoft’s AI principles
  • Develop and evolve orchestration platforms for AI agents, including model evaluation frameworks, safety guardrails, and human-in-the-loop systems
  • Drive experimentation and A/B testing strategies to validate product hypotheses and optimize user experiences
  • Engage with enterprise customers, partners, and internal stakeholders to gather insights and translate them into actionable product requirements
  • Influence executive leadership and cross-org stakeholders to align on priorities and drive strategic initiatives across MDQ and Microsoft
  • Mentor and coach senior product managers, contributing to a culture of innovation, inclusion, and excellence
  • Represent MDQ in industry forums, customer briefings, and regulatory discussions related to AI quality and enterprise technology
  • Embody our Culture and Values
Employment Type: Fulltime

Principal Research Engineer

As a Principal Research Engineer at Microsoft, you will set the technical vision...
Location: Redmond, United States
Salary: 163,000.00 - 296,400.00 USD / Year
Company: Microsoft Corporation
Expiration Date: Until further notice
Requirements:
  • Bachelor's Degree in Computer Science or related technical field AND 8+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
  • OR equivalent experience
  • Ability to meet Microsoft, customer and/or government security screening requirements
  • Microsoft Cloud Background Check
Job Responsibility:
  • Define and execute technical strategy for foundational models, multi-agent systems, and next-generation Copilot experiences, especially within Business & Industry Copilot
  • Lead cross-team efforts to deliver scalable, reliable, and responsible AI systems
  • Advance the state of the art and translate breakthroughs into measurable customer and business impact
  • Architect and deliver complex AI systems across model development, data, infra, evaluation, and deployment spanning multiple product lines
  • Set technical direction for large programs and drive alignment across Research, Engineering, and Product
  • Integrate LLMs, multimodal models, multi-agent architectures, and RAG into Microsoft’s ecosystem
  • Establish best practices for MLOps, governance, and Responsible AI, compliant with Microsoft principles and industry standards
  • Drive original research and thought leadership (whitepapers, internal notes, patents), converting insights into shipped capabilities
Employment Type: Fulltime