Databricks Lead Engineer

Harrington Starr

Location:
London, United Kingdom

Contract Type:
Not provided

Salary:
Not provided

Job Description:

An exciting opportunity has arisen for an experienced Databricks Lead Engineer to join a cutting-edge data engineering programme within a fast-growing, data-driven environment. This is a hands-on role focused on leading a large-scale re-engineering initiative, transforming complex data architecture into a modern, scalable platform.

Job Responsibility:

  • Designing and implementing a scalable Delta Lake architecture
  • Modelling and partitioning high-cardinality datasets (e.g. order book data)
  • Implementing metadata management, compaction, and versioning strategies
  • Supporting migration from legacy data platforms to Delta Tables and Iceberg-compatible structures
  • Designing systems to support multiple downstream data delivery use cases
  • Establishing robust backup and recovery strategies (targeting 1-day RTO)
  • Acting as a technical lead while remaining hands-on in engineering delivery
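The responsibilities above map onto standard Delta Lake operations. As an illustrative sketch only (table, schema, and column names are hypothetical, not taken from the employer), partitioning a high-cardinality order-book table, compacting it, and using Delta versioning for recovery might look like:

```sql
-- Hypothetical order-book table, partitioned by trade date so that
-- high-cardinality symbol/event data stays scannable
CREATE TABLE IF NOT EXISTS market.order_book (
  symbol      STRING,
  event_time  TIMESTAMP,
  price       DECIMAL(18, 8),
  quantity    DECIMAL(18, 8),
  trade_date  DATE
)
USING DELTA
PARTITIONED BY (trade_date);

-- Periodic compaction of small files, co-locating hot columns
OPTIMIZE market.order_book
ZORDER BY (symbol, event_time);

-- Delta's table versioning supports point-in-time recovery,
-- which is one way to approach a 1-day RTO target
RESTORE TABLE market.order_book TO TIMESTAMP AS OF '2026-04-10T00:00:00';
```

The timestamp in the `RESTORE` statement is a placeholder; in practice it would be derived from the incident window and the table's retained history.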

Requirements:

  • Strong commercial experience with Databricks
  • Expertise in Delta Lake, Delta Tables, and Unity Catalog
  • Experience with Delta UniForm (Universal Format)
  • Hands-on experience with Apache Iceberg
  • Strong AWS experience (including S3 and Lake Formation)
  • Linux-based development experience
  • Familiarity with modern engineering practices (CI/CD, testing, version control)
  • Ability to work autonomously and drive delivery
  • Strong problem-solving and communication skills
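Two of the requirements above (Delta UniForm and Apache Iceberg) meet in a single table property. A minimal sketch, assuming a Databricks workspace and a hypothetical table name, of enabling UniForm so that Iceberg-compatible clients can read an existing Delta table:

```sql
-- Enable Delta UniForm (Universal Format) on an existing Delta table
-- so Iceberg readers can consume it; property names follow Databricks'
-- UniForm documentation, table name is illustrative
ALTER TABLE market.trades SET TBLPROPERTIES (
  'delta.enableIcebergCompatV2'          = 'true',
  'delta.universalFormat.enabledFormats' = 'iceberg'
);
```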

Nice to have:

  • Experience with Snowflake
  • Experience working with petabyte-scale datasets
  • Proficiency in Python
  • Exposure to financial or market data environments

What we offer:
  • Work on a high-impact, large-scale data transformation project
  • Opportunity to shape architecture decisions in a complex data environment
  • Hybrid working with a Central London base
  • Competitive day rate outside IR35

Additional Information:

Job Posted:
April 11, 2026

Work Type:
Hybrid work
Similar Jobs for Databricks Lead Engineer

Lead Data Engineer

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and...
Location:
Bengaluru, India
Salary:
Not provided
Rearc
Expiration Date:
Until further notice
Requirements:
  • 10+ years of experience in data engineering, data architecture, or related fields
  • Extensive experience in writing and testing Java and/or Python
  • Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue
  • Hands-on experience with data analysis tools and libraries like Pyspark, NumPy, Pandas, or Dask
  • Proficiency with Spark and Databricks is highly desirable
  • Proven track record of leading complex data engineering projects, including designing and implementing scalable data solutions
  • Hands-on experience with ETL processes, data warehousing, and data modeling tools
  • In-depth knowledge of data integration tools and best practices
  • Strong understanding of cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery)
  • Strong strategic and analytical skills
Job Responsibility:
  • Understand Requirements and Challenges: Collaborate with stakeholders to deeply understand their data requirements and challenges
  • Implement with a DataOps Mindset: Embrace a DataOps mindset and utilize modern data engineering tools and frameworks, such as Apache Airflow, Apache Spark, or similar, to build scalable and efficient data pipelines and architectures
  • Lead Data Engineering Projects: Take the lead in managing and executing data engineering projects, providing technical guidance and oversight to ensure successful project delivery
  • Mentor Data Engineers: Share your extensive knowledge and experience in data engineering with junior team members, guiding and mentoring them to foster their growth and development in the field
  • Promote Knowledge Sharing: Contribute to our knowledge base by writing technical blogs and articles, promoting best practices in data engineering, and contributing to a culture of continuous learning and innovation

Lead Data Engineer

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and...
Location:
United States
Salary:
Not provided
Rearc
Expiration Date:
Until further notice
Requirements:
  • 10+ years of experience in data engineering, data architecture, or related technical fields
  • Proven ability to design, build, and optimize large-scale data ecosystems
  • Strong track record of leading complex data engineering initiatives
  • Deep hands-on expertise in ETL/ELT design, data warehousing, and data modeling
  • Extensive experience with data integration frameworks and best practices
  • Advanced knowledge of cloud-based data services and architectures (AWS Redshift, Azure Synapse Analytics, Google BigQuery, or equivalent)
  • Strong strategic and analytical thinking
  • Proficiency with modern data engineering frameworks (Databricks, Spark, lakehouse technologies like Delta Lake)
  • Exceptional communication and interpersonal skills
Job Responsibility:
  • Engage deeply with stakeholders to understand data needs, business challenges, and technical constraints
  • Translate stakeholder needs into scalable, high-quality data solutions
  • Implement with a DataOps mindset using tools like Apache Airflow, Databricks/Spark, Kafka
  • Build reliable, automated, and efficient data pipelines and architectures
  • Lead and execute complex projects
  • Provide technical direction and set engineering standards
  • Ensure alignment with customer goals and company principles
  • Mentor and develop data engineers
  • Promote knowledge sharing and thought leadership
  • Contribute to internal and external content
What we offer:
  • Comprehensive health benefits
  • Generous time away and flexible PTO
  • Maternity and paternity leave
  • Access to educational resources with reimbursement for continued learning
  • 401(k) plan with company contribution

Lead Data Engineer

Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader i...
Location:
Gurugram, India
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or master’s degree in computer science, Engineering, or related field
  • 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Certifications in Azure, Databricks, or Snowflake are a plus
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentoring and coaching team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Work Type:
Fulltime

Lead Data Engineer

Lead Data Engineer to serve as both a technical leader and people coach for our ...
Location:
Gurugram, India
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or master’s degree in computer science, Engineering, or related field
  • 8-10 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
  • Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance)
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentoring and coaching team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Work Type:
Fulltime

Lead Data Engineer

As a Lead Data Engineer or architect at Made Tech, you'll play a pivotal role in...
Location:
Any UK Office Hub (Bristol / London / Manchester / Swansea), United Kingdom
Salary:
GBP 80,000 - 96,000 / year
Made Tech
Expiration Date:
Until further notice
Requirements:
  • Proficiency in Git (inc. Github Actions) and able to explain the benefits of different branch strategies
  • Strong experience in IaC and able to guide how one could deploy infrastructure into different environments
  • Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop
  • Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes)
  • Ability to create data pipelines on a cloud environment and integrate error handling within these pipelines
  • You understand how to create reusable libraries to encourage uniformity or approach across multiple data pipelines
  • Able to document and present end-to-end diagrams to explain a data processing system on a cloud environment
  • Some knowledge of how you would present diagrams (C4, UML, etc.)
  • Enthusiasm for learning and self-development
  • You have experience of working on agile delivery-lead projects and can apply agile practices such as Scrum, XP, Kanban
Job Responsibility:
  • Define, shape and perfect data strategies in central and local government
  • Help public sector teams understand the value of their data, and make the most of it
  • Establish yourself as a trusted advisor in data driven approaches using public cloud services like AWS, Azure and GCP
  • Contribute to our recruitment efforts and take on line management responsibilities
  • Help implement efficient data pipelines & storage
What we offer:
  • 30 days of paid annual leave
  • Flexible parental leave options
  • Part time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • 7% employer matched pension
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events
Work Type:
Fulltime

Senior/Architect Data Engineer

We are seeking a highly skilled and experienced Senior/Architect Data Engineer t...
Location:
Warsaw, Poznań, Lublin, Katowice, or Rzeszów, Poland
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Proven experience architecting solutions on the Databricks Lakehouse using Unity Catalog, Delta Lake, MLflow, Model Serving, Feature Store, AutoML, and Databricks Workflows
  • Expertise in real-time/low latency model serving architectures with auto-scaling, confidence-based routing, and A/B testing
  • Strong knowledge of cloud security and governance on Azure or AWS, including Azure AD/AWS IAM, encryption, audit trails, and compliance frameworks
  • Hands-on MLOps skills across experiment tracking, model registry/versioning, drift monitoring, automated retraining, and production rollout strategies
  • Proficiency in Python and Databricks native tooling, with practical integration of REST APIs/SDKs and Databricks SQL in analytics products
  • Familiarity with React dashboards and human-in-the-loop operational workflows for ML and data quality validation
  • Demonstrated ability to optimize performance, reliability, and cost for large-scale analytics/ML platforms with strong observability
  • Experience leading multi-phase implementations with clear success metrics, risk management, documentation, and training/change management
  • Domain knowledge in telemetry, time series, or industrial data (aerospace a plus) and prior work with agentic patterns on Mosaic AI
  • Databricks certifications and experience in enterprise deployments of the platform are preferred
Job Responsibility:
  • Lead the design and implementation of a Databricks-centric multi-agent processing engine
  • Design governed data ingestion, storage, and real-time processing workflows using Delta Lake, Structured Streaming, and Databricks Workflows
  • Own the model lifecycle with MLflow, including experiment tracking, registry/versioning, A/B testing, drift monitoring, and automated retraining pipelines
  • Architect low latency model serving endpoints with auto-scaling and confidence-based routing for sub-second agent decisioning
  • Establish robust data governance practices with Unity Catalog, including access control, audit trails, data quality, and compliance
  • Drive performance and cost optimization strategies, including auto-scaling, spot usage, and observability dashboards
  • Define production release strategies (blue-green), monitoring and alerting mechanisms, operational runbooks, and Service Level Objectives (SLOs)
  • Partner with engineering, MLOps, and product teams to deliver human-in-the-loop workflows and dashboards
  • Lead change management, training, and knowledge transfer while managing a parallel shadow processing path
  • Plan and coordinate phased delivery, success metrics, and risk mitigation
What we offer
What we offer
  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses (up to PLN 6,000)
  • Additional revenue sharing opportunities
  • Ongoing guidance from dedicated Team Manager
  • Tailored technical mentoring from assigned technical leader
  • Dedicated team-building budget for online and on-site team events
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture
Work Type:
Fulltime

Data Engineer

We are looking for an experienced Data Engineer with deep expertise in Databrick...
Location:
Not provided
Salary:
Not provided
Coherent Solutions
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
  • Strong proficiency in Python and/or Scala for data processing
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
  • Experience with data governance, security, and compliance in cloud environments
  • Excellent problem-solving, communication, and leadership skills
  • English: Upper Intermediate level or higher
Job Responsibility:
  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow)
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
  • Develop and maintain documentation, data models, and technical standards
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments
  • Stay current with emerging trends and technologies in data engineering and Databricks ecosystem
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced colleague to aid your professional growth and development
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Principal Data Engineer

We are seeking an experienced Principal Data Engineer to define, lead, and scale...
Location:
Thame or Leeds, United Kingdom
Salary:
GBP 80,000 - 100,000 / year
PEXA UK
Expiration Date:
Until further notice
Requirements:
  • Broad experience as a Data Engineer including technical leadership
  • Broad cloud experience, ideally both Azure and AWS
  • Deep expertise in PySpark and distributed data processing at scale
  • Extensive experience designing and optimising in Databricks
  • Advanced SQL optimisation and schema design for analytical workloads
  • Strong understanding of data security, privacy, and GDPR/PII compliance
  • Experience implementing and leading data governance frameworks
  • Proven experience leading the design and operation of a complex data platform
  • Track record of mentoring engineers and raising technical standards
  • Ability to influence senior stakeholders and align data initiatives with wider business goals
Job Responsibility:
  • Design and oversee scalable, performant, and secure architectures on Databricks and distributed systems
  • Anticipate scaling challenges and ensure platforms are future-proof
  • Lead the design and development of robust, high-performance data pipelines using PySpark and Databricks
  • Define and ensure testing frameworks for data workflows
  • Ensure end-to-end data quality from raw ingestion to curated, trusted datasets powering analytics
  • Establish and enforce best practices for data governance, lineage, metadata, and security controls
  • Ensure compliance with GDPR and other regulatory frameworks
  • Act as a technical authority and mentor, guiding data engineers
  • Influence cross-functional teams to align on data strategy, standards, and practices
  • Partner with product, engineering, and business leaders to prioritise and deliver high-impact data initiatives
Work Type:
Fulltime