Tech Lead, Data Pipeline

Wayve

Location:
United Kingdom, London


Contract Type:
Not provided

Salary:
Not provided

Job Description:

As the Technical Lead for Data Pipeline Features within our Model Development Platform, you will play a pivotal role in Wayve's mission to revolutionize autonomous driving. Each day, our teams handle and process multiple petabytes of data collected from our fleet of autonomous vehicles and critical partner integrations. Your technical leadership will directly impact our ability to transform vast amounts of raw data from diverse internal and external sources into structured, actionable insights that power advanced machine learning models and groundbreaking research. By continuously innovating our data ingestion and processing pipelines, you'll help accelerate the development of safer, more efficient autonomous driving technologies, enabling Wayve to maintain its position at the forefront of machine learning innovation.

Job Responsibility:

  • Define and execute a strategic technical roadmap for enhancing and scaling data pipeline capabilities
  • Drive innovation in pipeline architecture to support dynamic and evolving use-cases
  • Lead the design and implementation of advanced data pipeline features for efficient data ingestion, transformation, and distribution
  • Normalize and unify multiple disparate data sources - including data ingested from external partners - into a consistent and optimized format tailored specifically for AI training and research needs
  • Collaborate closely with robotics, ML engineering, and research teams to continuously enhance pipeline performance and functionality
  • Develop robust interfaces and systems to reduce bottlenecks and improve reliability and scalability
  • Establish and maintain best practices for pipeline reliability, including comprehensive observability, alerting, and monitoring systems
  • Manage initiatives aimed at minimizing pipeline latency, failure recovery, and ensuring compliance with defined service-level agreements (SLAs)
  • Engage actively with cross-functional teams (robotics, ML, data governance) to ensure alignment of technical efforts with overall business goals
  • Foster a culture of transparency, collaboration, and shared ownership across teams
  • Mentor and develop engineering talent, promoting professional growth and technical excellence within your team
  • Contribute to the hiring and onboarding processes to expand the capabilities of the data pipeline engineering team

Requirements:

  • Strong experience (8+ years) in software engineering, specifically focused on developing scalable, complex data pipelines
  • Proven technical leadership experience in a pipeline engineering or related domain
  • Expertise in modern data pipeline architectures, including DAG-based orchestration (e.g., Airflow, Flyte)
  • Solid understanding of data engineering practices, distributed processing frameworks, and pipeline optimization techniques
  • Excellent communication and collaborative skills, capable of working effectively with interdisciplinary teams
  • Track record of mentorship and talent development
  • Bachelor's degree or higher in Computer Science, Engineering, or related technical discipline

Nice to have:

  • Experience with robotics or autonomous vehicle sensor data processing pipelines
  • Familiarity with third-party dataset ingestion and transformation
  • Understanding of compliance and data governance frameworks (e.g., GDPR, TISAX)
  • Hands-on experience integrating observability and monitoring solutions

Additional Information:

Job Posted:
January 10, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Tech Lead, Data Pipeline

Senior Backend Engineer / Tech Lead (Data Management)

As a Senior Backend Software Engineer at Aignostics, you work hand in hand with ...
Location:
Germany, Berlin
Salary:
Not provided
Aignostics
Expiration Date:
Until further notice
Requirements:
  • Bachelor's and/or Master's degree in a relevant field, or extensive work experience
  • 6+ years of software development experience in a data-intensive environment
  • Experience leading a technical initiative, ideally with cross-team impact
  • Strong background in software development, ideally with Python
  • Experience with cloud providers (GCP, AWS) and their services
  • Experience with container orchestration (preferably Kubernetes)
  • Experience with database systems
  • Familiarity with CI/CD pipelines, code reviews, and other standards for maintaining code quality
  • Driven, well-organized self-starter with excellent communication skills and a strong team-player mindset
Job Responsibility:
  • Design and develop services and core libraries that enable our SaaS platform
  • Ensure reliable, high throughput access to our data for machine learning
  • Maintain and expand our data management infrastructure
  • Lead initiatives, evaluate new technologies and their integration into our current codebase
  • Take ownership, from inception to completion, without losing focus on the business context
  • Communicate closely with our frontend and machine learning teams
  • Perform code reviews, considering readability, design and performance
What we offer:
  • Learning & Development yearly budget of 1,000€ (plus 2 L&D days)
  • Language classes and internal development programs
  • Mentoring program
  • Flexible working hours and teleworking policy
  • 30 paid vacation days per year
  • Family- and pet-friendly, with flexible parental leave options
  • Subsidized membership of your choice among public transport, sports and well-being
  • Social gatherings, lunches, and off-site events
  • Optional company pension scheme

Tech Lead – Scala/Spark

We are seeking a Spark, Big Data - ETL Tech Lead for Commercial Card’s Global Da...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent
  • Minimum 10 years of proven experience developing and managing big data solutions using Apache Spark, with a strong command of Spark Core, Spark SQL, and Spark Streaming
  • Minimum 6 years of experience successfully leading globally distributed teams
  • Strong programming skills in Scala, Java, or Python
  • Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume
  • Proficiency in SQL and experience with relational (Oracle/PL-SQL) and NoSQL databases such as MongoDB
  • Demonstrated people and technical management skills
  • Demonstrated excellent software development skills, including strong experience implementing complex file transformations (e.g., positional and XML formats)
  • Experience building enterprise systems with a focus on recovery, stability, reliability, scalability, and performance
  • Experience working on Kafka and JMS/MQ applications
Job Responsibility:
  • Lead the design and implementation of large-scale data processing pipelines using Apache Spark on BigData Hadoop Platform
  • Develop and optimize Spark applications for performance and scalability
  • Responsible for providing technical leadership of multiple large scale/complex global software solutions
  • Integrate data from various sources, including Couchbase, Snowflake, and HBase, ensuring data quality and consistency
  • Experience developing teams of 5 to 15 permanent employees and vendors
  • Build and sustain strong relationships with the senior business leaders associated with the platform
  • Design, code, test, document and implement application release projects as part of development team
  • Work with onsite development partners to ensure design and coding best practices
  • Work closely with Program Management and Quality Control teams to deliver quality software to agreed project schedules
  • Proactively notify Development Project Manager of risks, bottlenecks, problems, issues, and concerns

Generative AI Tech Lead

Provectus is an AI-first consultancy that helps global enterprises adopt Machine...
Location:
Salary:
Not provided
Provectus
Expiration Date:
Until further notice
Requirements:
  • 5+ years of hands-on experience in Machine Learning, Deep Learning, or NLP
  • 2+ years in a technical leadership or team lead role
  • Strong expertise with LLMs (Hugging Face, OpenAI, Anthropic) and modern NLP stacks
  • Strong hands-on experience with AWS ML ecosystem (SageMaker, Bedrock, Lambda, S3, ECS/ECR)
  • Excellent Python engineering skills and proficiency with PyTorch or TensorFlow
  • Experience building ML systems in production, not just research
  • Solid knowledge of MLOps/LLMOps tools, pipelines, and deployment best practices
  • Strong architectural thinking and ability to design scalable ML systems
  • Excellent communication skills and ability to lead cross-functional teams
  • Passion for mentoring engineers and raising the technical bar
Job Responsibility:
  • Lead, mentor, and grow a team of 5–10 ML, Data, and Software Engineers
  • Define and drive the technical roadmap for ML/AI initiatives
  • Foster a high-performance culture focused on ownership, learning, and engineering excellence
  • Work closely with Product, Data, and Platform teams to deliver end-to-end AI systems
  • Design, fine-tune, and deploy LLMs and ML models for real production use cases
  • Build systems for RAG, summarization, text generation, entity extraction, and other NLP/LLM workflows
  • Explore and implement emerging GenAI/LLM techniques and infrastructure
  • Contribute across the ML stack: NLP, deep learning, CV, RL, and classical ML
  • Architect and operate scalable ML/AI systems using AWS (SageMaker, Bedrock, Lambda, S3, ECS/ECR…)
  • Optimize model training, inference pipelines, and data workflows for scale, cost, and latency
What we offer:
  • Sign-up bonus
  • 10% Annual bonus
  • Comprehensive private medical insurance or budget for your medical needs
  • Paid sick leave, vacation, and public holidays
  • Continuous learning support, including unlimited AWS certification sponsorship

ML Tech Lead, GenAI

Provectus helps companies adopt ML/AI to transform the ways they operate, compet...
Location:
Salary:
Not provided
Provectus
Expiration Date:
Until further notice
Requirements:
  • Proven experience with LLMs and NLP frameworks (e.g., Hugging Face, OpenAI, or Anthropic models)
  • Strong expertise in AWS Cloud Services
  • Strong experience in ML/AI, including at least 2 years in a leadership role
  • Hands-on experience with Python, TensorFlow/PyTorch, and model optimization
  • Familiarity with MLOps tools and best practices
  • Excellent problem-solving and decision-making abilities
  • Strong communication skills and the ability to lead cross-functional teams
  • Passion for mentoring and developing engineers
Job Responsibility:
  • Lead and manage a team of 5-10 engineers, providing mentorship and fostering a collaborative team environment
  • Drive the roadmap for machine learning projects aligned with business goals
  • Coordinate cross-functional efforts with product, data, and engineering teams to ensure seamless delivery
  • Design, develop, and fine-tune LLMs and other machine learning models to solve business problems
  • Evaluate and implement state-of-the-art LLM techniques for NLP tasks such as text generation, summarization, and entity extraction
  • Stay ahead of advancements in LLMs and apply emerging technologies
  • Apply expertise across the main fields of ML: NLP, computer vision, RL, deep learning, and classical ML
  • Architect and manage scalable ML solutions using AWS services (e.g., SageMaker, Lambda, Bedrock, S3, ECS, ECR, etc.)
  • Optimize models and data pipelines for performance, scalability, and cost-efficiency in AWS
  • Ensure best practices in security, monitoring, and compliance within the cloud infrastructure

Machine Learning Tech Lead, GenAI

Provectus helps companies adopt ML/AI to transform the ways they operate, compet...
Location:
Salary:
8000.00 USD / Month
Provectus
Expiration Date:
Until further notice
Requirements:
  • Proven experience with LLMs and NLP frameworks (e.g., Hugging Face, OpenAI, or Anthropic models)
  • Strong expertise in AWS Cloud Services
  • Strong experience in ML/AI, including at least 2 years in a leadership role
  • Hands-on experience with Python, TensorFlow/PyTorch, and model optimization
  • Familiarity with MLOps tools and best practices
  • Excellent problem-solving and decision-making abilities
  • Strong communication skills and the ability to lead cross-functional teams
  • Passion for mentoring and developing engineers
Job Responsibility:
  • Lead and manage a team of 5-10 engineers, providing mentorship and fostering a collaborative team environment
  • Drive the roadmap for machine learning projects aligned with business goals
  • Coordinate cross-functional efforts with product, data, and engineering teams to ensure seamless delivery
  • Design, develop, and fine-tune LLMs and other machine learning models to solve business problems
  • Evaluate and implement state-of-the-art LLM techniques for NLP tasks such as text generation, summarization, and entity extraction
  • Stay ahead of advancements in LLMs and apply emerging technologies
  • Apply expertise across the main fields of ML: NLP, computer vision, RL, deep learning, and classical ML
  • Architect and manage scalable ML solutions using AWS services (e.g., SageMaker, Lambda, Bedrock, S3, ECS, ECR, etc.)
  • Optimize models and data pipelines for performance, scalability, and cost-efficiency in AWS
  • Ensure best practices in security, monitoring, and compliance within the cloud infrastructure
What we offer:
  • Annual performance bonus: 10% of base salary
  • Sign-on bonus: USD 8,000, paid in two installments: 50% upon start (USD 4,000) and 50% upon completion of the trial period (USD 4,000)

Apps Dev Tech Lead Analyst

Citi is embarking on a multi-year technology initiative in Wholesale Lending Cre...
Location:
India, Bengaluru
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 10-12 years of experience in software development (Java, Spring Boot)
  • Good knowledge of Spring, including Spring Framework, Spring Boot, Spring Security, Spring Web, and Spring Data
  • Expert hands-on knowledge of: threading, collections, exception handling, JDBC, Java OOD/OOP concepts, GoF design patterns, MoM and SOA design patterns, file I/O, parsing XML, JSON, delimited, and fixed-length files, and string matching, parsing, and building, including working with binary data / byte arrays
  • Good knowledge of SQL (Oracle dialect preferable)
  • Experience working with SOA & microservices utilizing REST
  • Experience with design and implementation of cloud-ready applications and deployment pipelines on large-scale container platform clusters is a plus
  • Experience working in a Continuous Integration and Continuous Delivery environment, with familiarity with Tekton, Harness, Jenkins, code quality tooling, etc.
  • Proficient in industry-standard best practices such as design patterns, coding standards, coding modularity, and prototyping
  • Experience in debugging, tuning, and optimizing components
  • Understanding of the SDLC for Agile and Waterfall methodologies
Job Responsibility:
  • Expert hands-on lead: writes good-quality code in Java, Spring Boot, and the related stack
  • Expert in JUnit, Mockito, integration tests, and performance tests
  • Proficient in MongoDB and Redis caching
  • Sound technical design & architecture skills; expert in implementing appropriate design patterns
  • Sound analytic and problem-solving skills
  • Good experience with performance tuning; able to use the required tools effectively to find root causes
  • Good experience taking full end-to-end ownership of developing cloud-native microservices
  • Ability to effectively interact and collaborate with the development team
  • Ability to effectively communicate development progress to the Global Leads & senior management
  • Work with leads onshore, offshore, and matrix teams to implement a business solution

Senior Data & AI/ML Engineer - GCP Specialization Lead

We are on a bold mission to create the best software services offering in the wo...
Location:
United States, Menlo Park
Salary:
Not provided
techjays
Expiration Date:
Until further notice
Requirements:
  • GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI
  • ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow
  • Programming: Python & SQL
  • MLOps: CI/CD for ML, Model deployment & monitoring
  • Infrastructure-as-Code: Terraform
  • Data Engineering: ETL/ELT, real-time & batch pipelines
  • AI/ML Tools: TensorFlow, scikit-learn, XGBoost
  • Min Experience: 10+ Years
Job Responsibility:
  • Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage
  • Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines
  • Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry
  • Define and implement data governance, lineage, monitoring, and quality frameworks
  • Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions
  • Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP
  • Contribute to building repeatable solution accelerators in Data & AI/ML
  • Work with the leadership team to align with Google Cloud Partner Program metrics
  • Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning
  • Organize and lead internal GCP AI/ML enablement sessions
What we offer:
  • Best in class packages
  • Paid holidays and flexible paid time away
  • Casual dress code & flexible working environment
  • Medical Insurance covering self & family up to 4 lakhs per person

Data Project Manager

As a Technical Project Manager (Data Engineering), you will play a key role in e...
Location:
Argentina, Buenos Aires
Salary:
Not provided
Fever
Expiration Date:
Until further notice
Requirements:
  • Excellent command of English is essential
  • Spanish will be the primary working language
  • Solid experience managing data or analytics projects end-to-end, ideally in a fast-paced tech, SaaS, or consulting environment
  • Understand how data models, ETL pipelines, and reporting systems work
  • Comfortable discussing project timelines with stakeholders and schema design with developers
  • Thrive in fast-moving environments with evolving definitions and ambiguity
  • Excellent communication skills in English and Spanish
  • Familiarity with tools like Snowflake, DBT, Airflow, DataHub, or BI platforms (Metabase, Superset, Looker, etc.) is a plus
Job Responsibility:
  • Partner closely with Data Engineering Tech Leads, Product Managers, and Project Managers to scope, plan, and deliver data and reporting solutions
  • Collaborate closely with the Tech Leads to embed partner needs into the technical roadmap and delivery priorities
  • Meet with partners to discuss data-related projects, gather requirements, and present progress or deliverables
  • Collaborate with Business Development, Project Management, and Growth teams to align requirements and timelines
  • Prepare and deliver presentations and demos to partners showcasing Fever’s data products and capabilities
  • Develop a deep understanding of Fever’s data model, business processes, partner-facing dashboards, data products, APIs
  • Translate business questions into precise technical specifications for engineers and data teams
  • Monitor and manage project timelines, milestones, and dependencies
  • Identify and propose technologies or process improvements
  • Contribute actively to the execution of the Data Engineering vision
What we offer:
  • OSDE 410 as medical insurance
  • English Lessons
  • WellHub Membership
  • 40% discount on all Fever events and experiences
  • Home office friendly anywhere in Argentina
  • Opportunity to have a real impact
  • Responsibility from day one and professional and personal growth
  • Great work environment with a young, international team
  • Attractive compensation package consisting of base salary and the potential to earn a significant bonus for top performance (including Base, Variable, and Stock Options)