Senior Director, Applied Research

Overview: At Capital One, we are creating trustworthy and reliable AI systems, changing banking for good. For years, Capital One has led the industry in using machine learning to create real-time, intelligent, automated customer experiences. From informing customers about unusual charges to answering their questions in real time, our applications of AI & ML bring humanity and simplicity to banking. We are committed to building world-class applied science and engineering teams and to advancing our industry-leading capabilities with breakthrough product experiences and scalable, high-performance AI infrastructure. At Capital One, you will help bring the transformative power of emerging AI capabilities to reimagine how we serve the customers and businesses who have come to love the products and services we build.
Job Responsibilities:
Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money
Leverage a broad stack of technologies (PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more) to reveal the insights hidden within huge volumes of numeric and textual data
Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation
Engage in high impact applied research to take the latest AI developments and push them into the next generation of customer experiences
Flex your interpersonal skills to translate the complexity of your work into tangible business goals
Requirements:
PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field plus 6 years of experience in applied research, or an M.S. in one of those fields plus 8 years of experience in applied research
At least 5 years of people leadership experience
Nice to have:
PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering or related fields
LLM:
PhD focused on NLP, or a Master's degree with 10 years of industrial NLP research experience
Core contributor to a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
Numerous publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR on topics related to the pre-training of large language models
Has worked on an LLM (open source or commercial) that is currently available for use
Demonstrated ability to guide the technical direction of a large-scale model training team
Experience working with 500+ node clusters of GPUs
Has worked on an LLM scaled to 70B+ parameters and 1T+ tokens
Experience with common training optimization frameworks (DeepSpeed, NeMo)
PhD focused on topics in geometric deep learning (graph neural networks, sequential models, multivariate time series)
Served in technical leadership for the deployment of a very large user-behavior model
Multiple papers at KDD, ICML, NeurIPS, or ICLR on topics relevant to training models on graph and sequential data structures
Worked on scaling graph models to more than 50M nodes
Experience with large scale deep learning based recommender systems
Experience with production real-time and streaming environments
Contributions to common open source frameworks (PyTorch Geometric, DGL)
Proposed new methods for inference or representation learning on graphs or sequences
Worked with datasets covering 100M+ users
PhD focused on topics related to optimizing training of very large language models
5+ years of experience and/or publications on one of the following topics: Model Sparsification, Quantization, Training Parallelism/Partitioning Design, Gradient Checkpointing, Model Compression
PhD focused on topics related to guiding LLMs with further tasks (Supervised Finetuning, Instruction-Tuning, Dialogue-Finetuning, Parameter Tuning)
Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance
Experience deploying a fine-tuned large language model
Numerous publications studying tokenization, data quality, dataset curation, or labeling
Leading contributions to one or more large open source corpora (1T+ tokens)
Core contributor to open source libraries for data quality, dataset curation, or labeling