As a Capital One Machine Learning Engineer (MLE) on the GenAI Workflows Serving team, you'll be part of an Agile team dedicated to designing, building, and productionizing Generative AI applications and Agentic Workflow systems at massive scale. You'll participate in the detailed technical design, development, and implementation of complex machine learning applications leveraging cloud-native platforms. You'll focus on building robust ML serving architecture, developing high-performance application code, and ensuring the high availability, security, and low latency of our Generative AI solutions. You will collaborate closely with multiple other AI/ML teams to drive innovation and continuously apply the latest advances and best practices in machine learning engineering.
Job Responsibilities:
Design, build, and deliver GenAI models and components that solve complex business problems, while working in collaboration with the Product and Data Science teams
Design and implement cloud-native ML Serving Platforms leveraging technologies like Docker, Kubernetes, KNative, and KServe to ensure optimized and scalable deployment of models
Solve complex scaling and high-availability problems by writing and testing performant application code in Python and Go, developing and validating ML models, and automating tests and deployment
Implement advanced MLOps and GitOps practices for continuous integration and continuous deployment (CI/CD) using tools like ArgoCD to manage the entire lifecycle of models and applications
Leverage service mesh architectures like Istio to manage traffic, enhance security, and ensure resilience for high-volume serving endpoints
Retrain, maintain, and monitor models in production
Construct optimized, scalable data pipelines to feed ML models
Ensure all code is well-managed to reduce vulnerabilities, models are well-governed from a risk perspective, and ML solutions follow best practices in Responsible and Explainable AI
Use programming languages such as Python, Go, Scala, or Java
Requirements:
Bachelor's Degree
At least 6 years of experience designing and building data-intensive solutions using distributed computing (Internship experience does not apply)
At least 4 years of experience programming with Python, Scala, Go, or Java
At least 2 years of experience building, scaling, and optimizing ML systems
Nice to have:
Master's or Doctoral Degree in computer science, electrical engineering, mathematics, or a similar field
3+ years of experience building production-ready data pipelines that feed ML models
3+ years of on-the-job experience with an industry recognized ML framework such as scikit-learn, PyTorch, Dask, Spark, or TensorFlow
2+ years of experience developing performant, resilient, and maintainable code
2+ years of experience with data gathering and preparation for ML models
2+ years of people leader experience
1+ years of experience leading teams developing ML solutions using industry best practices, patterns, and automation
Experience developing and deploying ML solutions in a public cloud such as AWS, Azure, or Google Cloud Platform
Experience designing, implementing, and scaling complex data pipelines for ML models and evaluating their performance
ML industry impact through conference presentations, papers, blog posts, open source contributions, or patents
What we offer:
Performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI)
A comprehensive, competitive, and inclusive set of health, financial, and other benefits