At Capital One, we are creating trustworthy and reliable AI systems, changing banking for good. For years, Capital One has led the industry in using machine learning to create real-time, intelligent, automated customer experiences. From flagging unusual charges to answering customer questions in real time, our applications of AI and ML bring humanity and simplicity to banking. We are committed to building world-class applied science and engineering teams, and we continue to advance our industry-leading capabilities with breakthrough product experiences and scalable, high-performance AI infrastructure. At Capital One, you will help bring the transformative power of emerging AI capabilities to reimagine how we serve the customers and businesses who have come to love the products and services we build.

The AI Foundations team is at the center of bringing our vision for AI at Capital One to life. Our work touches every aspect of the research life cycle, from partnering with academia to building production systems. We work with product, technology, and business leaders to apply the state of the art in AI to our business.

This is an individual contributor (IC) role driving strategic direction through collaboration with Applied Science, Engineering, and Product leaders across Capital One. As a well-respected IC leader, you will guide and mentor a team of applied scientists and their managers without being a direct people leader. You will also be expected to represent Capital One externally in the research community, collaborating with prominent faculty in the relevant areas of AI research.
Job Responsibilities:
Partner with a cross-functional team of data scientists, software engineers, machine learning engineers, and product managers to deliver AI-powered products, platforms, and solutions that change how customers interact with their money
Leverage a broad stack of technologies (PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more) to reveal the insights hidden within huge volumes of numeric and textual data
Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation
Engage in high impact applied research to take the latest AI developments and push them into the next generation of customer experiences
Flex your interpersonal skills to translate the complexity of your work into tangible business goals
Requirements:
PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field plus 4 years of applied research experience; or M.S. in one of those fields plus 6 years of applied research experience
PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering or related fields
LLM:
PhD focus on NLP, or a Master's with 10 years of industrial NLP research experience
Core contributor to a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens) or through continued pre-training; experience with post-training pipelines for alignment and reasoning, LLM optimizations, and complex reasoning with multi-agent LLMs
Numerous publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR on topics related to the pre-training of large language models (e.g., technical reports of pre-trained LLMs, self-supervised learning techniques, model pre-training optimization)
Has worked on an LLM (open-source or commercial) that is currently available for use
Demonstrated ability to guide the technical direction of a large-scale model training team
Experience with common training optimization frameworks (e.g., DeepSpeed, NeMo)
Experience building large deep learning models, whether on language, images, events, or graphs, as well as expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF
An engineering mindset as shown by a track record of delivering models at scale both in terms of training data and inference volumes
Experience in delivering libraries, platform level code or solution level code to existing products
A professional with a track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first-author publications or projects
Possess the ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects
What we offer:
comprehensive, competitive, and inclusive set of health, financial and other benefits that support your total well-being
performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI)