We’re building a world of health around every individual — shaping a more connected, convenient and compassionate health experience. At CVS Health®, you’ll be surrounded by passionate colleagues who care deeply, innovate with purpose, hold themselves accountable and prioritize safety and quality in everything they do. Join us and be part of something bigger – helping to simplify health care one person, one family and one community at a time.
Job Responsibilities:
Lead development and enforcement of application and AI security policies, standards, and guardrails, embedding security-by-design across both traditional and AI-driven systems
Establish secure design patterns for AI agent frameworks, covering prompt management, tool invocation, memory handling, autonomy boundaries, and escalation controls
Promote organization-wide awareness of AI-specific risks such as model misuse, prompt injection, data leakage, and unsafe agent behavior
Serve as the principal SME for securing AI-enabled applications and agentic system architectures
Architect and review secure designs for systems leveraging LLMs/foundation models, autonomous and semi-autonomous agents, RAG pipelines, and tool-using or decision-making workflows
Define identity, authorization, data access, and observability controls specific to agentic environments while partnering closely with AI platform, product, and data teams to ensure responsible AI delivery
Influence engineering and product teams to integrate secure engineering practices and align security with compliance, privacy, and responsible AI initiatives
Advise senior leadership on AI security implications, architectural decisions, and long-term strategy while shaping roadmaps that anticipate emerging AI threats and regulatory requirements
Lead advanced security testing and risk assessments for AI-enabled systems, including threat modeling of agent workflows, abuse/misuse analysis, and secure design reviews of AI pipelines
Evaluate and guide adoption of new AI security tools, ensuring protections maintain confidentiality, integrity, availability, and responsible data use
Provide senior technical leadership during incidents involving application or AI systems, guiding response strategies for misuse, data exposure, and autonomous failures
Translate operational learnings into improved security architecture, controls, and system resilience
Mentor senior and principal engineers to elevate security maturity across the organization
Contribute to research and evaluation of emerging AI security practices and play a key role in shaping the long-term application and AI security roadmap, advocating for security as a strategic accelerator for AI adoption
Requirements:
10+ years of experience designing, building, and securing large-scale applications and platforms
7+ years of expertise in application security, including threat modeling, secure design, and vulnerability management
7+ years of programming experience in one or more languages such as Python, Java, JavaScript, C#, or Go
5+ years of experience developing and securing AI and ML workloads, with recent experience in generative AI and agentic workloads
5+ years of experience with public cloud platforms (AWS, Azure, and/or GCP) and modern application architectures
3+ years of experience with containerized, serverless, and microservice-based architectures
Nice to have:
Hands-on experience securing AI agents, RAG pipelines, and tool-using LLM systems
Proven ability to lead complex security initiatives from concept through enterprise-scale adoption
Familiarity with AI governance, responsible AI principles, and emerging AI security standards
Experience integrating security controls into CI/CD pipelines for AI and application workloads
Strong understanding of compliance frameworks (PCI, HIPAA, NIST, HITRUST, CSA)
Experience influencing security strategy beyond a single team, including enterprise or platform-level impact
Contributions to security research, open-source projects, or industry communities