Databricks Engineer

WINTrio

Location:
United States, Leesburg

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are seeking a Databricks Engineer to design, build, and operate a Data & AI platform with a strong foundation in the Medallion Architecture (raw/bronze, curated/silver, and mart/gold layers). This platform will orchestrate complex data workflows and scalable ELT pipelines to integrate data from enterprise systems such as PeopleSoft, D2L, and Salesforce, delivering high-quality, governed data for machine learning, AI/BI, and analytics at scale. You will play a critical role in engineering the infrastructure and workflows that enable seamless data flow across the enterprise, ensure operational excellence, and provide the backbone for strategic decision-making, predictive modeling, and innovation.
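
As a rough illustration of that layering, here is a minimal PySpark sketch of a bronze → silver → gold flow on Databricks; every path, table, and column name is an assumption for illustration, not a detail of this role:

    # Medallion-flow sketch (PySpark on Databricks); 'spark' is the session
    # Databricks provides in notebooks and jobs. Schemas are assumed to exist.
    from pyspark.sql import functions as F

    # Bronze: land raw source data as-is, tagged with ingestion metadata.
    raw = spark.read.format("json").load("/mnt/landing/peoplesoft/")  # hypothetical path
    (raw.withColumn("_ingested_at", F.current_timestamp())
        .write.format("delta").mode("append").saveAsTable("bronze.peoplesoft_raw"))

    # Silver: deduplicate and conform types into a curated table.
    silver = (spark.table("bronze.peoplesoft_raw")
              .dropDuplicates(["employee_id"])
              .withColumn("hire_date", F.to_date("hire_date")))
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.employees")

    # Gold: aggregate into an analytics-ready mart table.
    gold = (spark.table("silver.employees")
            .groupBy("department").agg(F.count("*").alias("headcount")))
    gold.write.format("delta").mode("overwrite").saveAsTable("gold.department_headcount")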

Job Responsibility:

  • Data & AI Platform Engineering (Databricks-Centric): Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation (a job-definition sketch follows this list)
  • Apply schema evolution and data versioning to support agile data development
  • Platform Integration & Data Ingestion: Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data
  • Design standardized data ingestion processes with automated error handling, retries, and alerting
  • Data Quality, Monitoring, and Governance: Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers (a validation sketch follows this list)
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement (a Unity Catalog policy sketch follows this list)
  • Security, Privacy, and Compliance: Enforce data security best practices including row-level security, encryption at rest/in transit, and fine-grained access control via Unity Catalog
  • Design and implement data masking, tokenization, and anonymization for compliance with privacy regulations (e.g., GDPR, FERPA)
  • Work with security teams to audit and certify compliance controls
  • AI/ML-Ready Data Foundation: Enable data scientists by delivering high-quality, feature-rich data sets for model training and inference
  • Support AIOps/MLOps lifecycle workflows using MLflow for experiment tracking, model registry, and deployment within Databricks (an MLflow tracking sketch follows this list)
  • Collaborate with AI/ML teams to create reusable feature stores and training pipelines
  • Cloud Data Architecture and Storage: Architect and manage data lakes on Azure Data Lake Storage (ADLS) or Amazon S3, and design ingestion pipelines to feed the bronze layer
  • Build data marts and warehousing solutions using platforms like Databricks
  • Optimize data storage and access patterns for performance and cost-efficiency
  • Documentation & Enablement: Maintain technical documentation, architecture diagrams, data dictionaries, and runbooks for all pipelines and components
  • Provide training and enablement sessions to internal stakeholders on the Databricks platform, Medallion Architecture, and data governance practices
  • Conduct code reviews and promote reusable patterns and frameworks across teams
  • Reporting and Accountability: Submit a weekly schedule of hours worked and progress reports outlining completed tasks, upcoming plans, and blockers
  • Track deliverables against roadmap milestones and communicate risks or dependencies
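
The job-definition sketch referenced in the orchestration bullet: one way to express a multi-task Databricks Workflow in code, assuming the databricks-sdk Python package; the job name, notebook paths, and cluster id are placeholders:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service.jobs import NotebookTask, Task, TaskDependency

    w = WorkspaceClient()  # picks up workspace credentials from the environment

    # Two-task job: the silver task runs only after the bronze task succeeds.
    w.jobs.create(
        name="medallion-elt",  # hypothetical job name
        tasks=[
            Task(task_key="bronze_ingest",
                 notebook_task=NotebookTask(notebook_path="/Pipelines/bronze_ingest"),
                 existing_cluster_id="<cluster-id>"),
            Task(task_key="silver_transform",
                 depends_on=[TaskDependency(task_key="bronze_ingest")],
                 notebook_task=NotebookTask(notebook_path="/Pipelines/silver_transform"),
                 existing_cluster_id="<cluster-id>"),
        ],
    )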
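
The validation sketch referenced in the data-quality bullet. The hard-coded rules stand in for what a framework such as Great Expectations or Delta Live Tables expectations would provide; table and column names are assumptions:

    from pyspark.sql import functions as F

    df = spark.table("silver.employees")  # hypothetical curated table

    # Rule 1: the primary key must never be null. Rule 2: no duplicate keys.
    null_ids = df.filter(F.col("employee_id").isNull()).count()
    dupes = df.count() - df.dropDuplicates(["employee_id"]).count()

    # Fail the run rather than promote questionable data to gold.
    if null_ids or dupes:
        raise ValueError(f"Quality gate failed: {null_ids} null ids, {dupes} duplicates")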
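
The Unity Catalog policy sketch referenced above. Governance policies are plain SQL, issued here via spark.sql; the catalog, schema, table, group, and function names are hypothetical:

    # Grant read access on a gold table to an analyst group.
    spark.sql("GRANT SELECT ON TABLE main.gold.department_headcount TO `data-analysts`")

    # Row-level security: non-admins see only US rows of a sales table.
    spark.sql("""
        CREATE OR REPLACE FUNCTION main.gold.region_filter(region STRING)
        RETURN is_account_group_member('admins') OR region = 'US'
    """)
    spark.sql("ALTER TABLE main.gold.sales SET ROW FILTER main.gold.region_filter ON (region)")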
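
And the MLflow tracking sketch referenced above; the run name, parameter, and metric value are placeholders around whatever training code actually runs:

    import mlflow

    with mlflow.start_run(run_name="retention-baseline"):  # hypothetical experiment
        mlflow.log_param("training_table", "gold.features_v1")  # placeholder
        mlflow.log_metric("auc", 0.87)  # placeholder value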

Requirements:

  • Hands-on experience with Databricks, Delta Lake, and Apache Spark
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms
  • Familiarity with data governance, lineage tracking, and metadata management tools

Nice to have:

  • Experience with Databricks Unity Catalog for metadata management and access control
  • Experience deploying ML models at scale using MLflow or similar MLOps tools
  • Familiarity with cloud platforms like Azure or AWS, including storage, security, and networking aspects
  • Knowledge of data warehouse design and star/snowflake schema modeling

Additional Information:

Job Posted:
December 13, 2025

Similar Jobs for Databricks Engineer

Databricks Engineer

Our client is revolutionizing the field of cell therapy manufacturing by develop...
Location: Not provided
Salary: Not provided
Coherent Solutions
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience in Data Engineering with strong technical expertise
  • Proven hands-on experience with the Databricks Data Platform and Delta Lake
  • Experience building and managing Databricks Lakehouse solutions
  • Knowledge of Delta Live Tables or similar frameworks for real-time data ingestion is a strong plus
  • Ability to define processes from scratch and establish development workflows in a new or evolving team
  • Familiarity with data testing best practices and collaboration with QA teams to ensure data quality
  • Strong problem-solving mindset, initiative, and readiness to work in a dynamic, evolving environment
  • Ability to work a shifted schedule, ensuring overlap with the client until approximately 10:30 AM Pacific Time for meetings and collaboration
  • English level: Upper-Intermediate (written and spoken)
Job Responsibility:
  • Design, build, and maintain data pipelines using Databricks and Delta Live Tables for real-time and batch data processing (a Delta Live Tables sketch follows this list)
  • Collaborate with cross-functional teams to ensure smooth data flow from diverse log-based sources
  • Participate in both individual and collaborative work, ensuring scalability, reliability, and performance of data solutions
  • Define and implement best practices for data development and deployment processes on the Databricks platform
  • Proactively address technical challenges in a project environment, proposing and implementing effective solutions
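
The Delta Live Tables sketch referenced in the first bullet, assuming a DLT pipeline context (the dlt module is only available there, not in plain notebooks) and Auto Loader (cloudFiles) as the source; the path and table are hypothetical:

    import dlt

    # Declarative bronze table: DLT manages the stream, checkpoints, and retries.
    @dlt.table(comment="Raw log events landed as-is")
    def bronze_events():
        return (spark.readStream.format("cloudFiles")
                .option("cloudFiles.format", "json")
                .load("/mnt/landing/logs/"))  # hypothetical landing path
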
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Inetum
Expiration Date: Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.
  • Full-time

Senior Databricks Engineer

Senior Databricks Engineer - Banbury Hybrid - Salary £65-75K + Benefits. Bibby F...
Location: United Kingdom, Banbury
Salary: 65000.00 - 75000.00 GBP / Year
Bibby Financial Services
Expiration Date: January 09, 2026

Requirements:
  • Significant years of Databricks experience, including Unity Catalog
  • Terraform, defining, deploying, and managing cloud infrastructure as code
  • Proficiency in programming languages such as Python, Spark, SQL
  • Strong experience with SQL databases
  • Expertise in data pipeline and workflow management tools (e.g., Apache Airflow, ADF)
  • Experience with cloud platforms (Azure preferred) and related data services
  • Excellent problem-solving skills and attention to detail
  • Inclusive and curious, continuously seeks to build knowledge and understanding
  • Strong communication and collaboration skills
  • Experience of Waterfall and Agile delivery methodologies
Job Responsibility:
  • Understand the business and product strategy and its supporting goals, ensuring data interpretation stays aligned with them
  • Provide technical leadership on breaking initiatives down into appropriately sized features, epics, and stories that balance value and risk; take a leadership role in setting standards and driving quality and consistency in solution delivery
  • Work closely with the Data Architect on the design of our data architecture and translate it into a build plan
  • Lead the build and maintenance of scalable data pipelines and ETL processes to support data integration and analytics from a diverse range of data sources: cloud storage, databases, and APIs
  • Deliver large-scale data processing workflows (ingestion, cleansing, transformation, validation, storage) using best practice tools and techniques
  • Collaborate with the BI Product Owner, analysts, and other business stakeholders to understand data requirements and deliver solutions that meet business needs
  • Optimize and tune data processing systems for performance, reliability, and scalability
  • Implement data quality and validation processes to ensure the accuracy and integrity of data throughout the pipelines
  • Operate an agile CI/CD environment within Azure DevOps, collaborating on Sprint cycles, code deployment, version control, and development practices
  • Develop and maintain data models, schemas, and documentation
What we offer:
  • Private healthcare for you and your family
  • Company car allowance
  • Company pension scheme
  • Wide range of flexible benefits, such as gym membership, technology, or health assessments
  • Access to an online wellbeing centre
  • Range of discounts from many businesses
  • 25 days holiday, which increases with service, and options to buy or sell more
  • Electric Vehicle/Plug-in Hybrid Vehicle (EV/PHEV) scheme
  • Full-time

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Inetum
Expiration Date: Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • 5+ years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.
  • Full-time

Databricks Platform Engineer

Are you excited about building a world-class data platform and working with clou...
Location: Finland, Helsinki
Salary: Not provided
Supercell
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience in designing, developing, and maintaining large-scale data platforms in a complex enterprise environment
  • In-depth experience with Databricks infrastructure and services
  • Extensive Infrastructure as Code experience (preferably Terraform)
  • Software development experience (preferably Java or Python)
  • Strong collaboration and communication skills
  • Ability to innovate and work independently
Job Responsibility:
  • Own and improve the Databricks infrastructure for data collection, storage and processing
  • Implement and manage flexible access controls that don’t compromise user speed and efficiency
  • Proactively suggest and implement improvements that increase scalability, robustness and availability of data systems
  • Stay up to date with new products and services released by Databricks, experiment with them, and help make them part of Supercell’s data platform
  • Participate in a 24/7 on-call rotation to maintain batch and real-time data infrastructure
  • Contribute to common data tooling to enhance engineering productivity
  • Together with the rest of the team, develop vision and strategy for the data platform
What we offer:
  • Relocation support for you and your family (including pets)
  • Compensation and benefits structured to help you enjoy your time
  • Work environment and resources to succeed while having fun
  • Full-time

Backend Data Engineer

The mission of the Data & Analytics (D&A) team is to enable data users to easily...
Location: United States, Cincinnati
Salary: Not provided
HonorVet Technologies
Expiration Date: Until further notice

Requirements:
  • Strong proficiency in Databricks (SQL, PySpark, Delta Lake, Jobs/Workflows)
  • Deep knowledge of Unity Catalog administration and APIs
  • Expertise in Python for automation scripts, API integrations, and data quality checks
  • Experience with governance frameworks (access control, tagging enforcement, lineage, compliance)
  • Solid foundation in security & compliance best practices (IAM, encryption, PII)
  • Experience with CI/CD and deployment pipelines (GitHub Actions, Azure DevOps, Jenkins)
  • Familiarity with monitoring/observability tools and building custom logging & alerting pipelines
  • Experience integrating with external systems (ServiceNow, monitoring platforms)
  • Experience with modern data quality frameworks (Great Expectations, Deequ, or equivalent)
  • Strong problem-solving and debugging skills in distributed systems
Job Responsibility:
  • Databricks & Unity Catalog Engineering: Build and maintain backend services leveraging Databricks (SQL, PySpark, Delta Lake, Jobs/Workflows)
  • Administer Unity Catalog including metadata, permissions, lineage, and tags
  • Integrate Unity Catalog APIs to surface data into the Metadata Catalog UI
  • Governance Automation: Develop automation scripts and pipelines to enforce access controls, tagging, and role-based policies
  • Implement governance workflows integrating with tools such as ServiceNow for request and approval processes
  • Automate compliance checks for regulatory and security requirements (IAM, PII handling, encryption)
  • Data Quality & Observability: Implement data quality frameworks (Great Expectations, Deequ, or equivalent) to validate datasets
  • Build monitoring and observability pipelines for logging, usage metrics, audit trails, and alerts
  • Ensure high system reliability and proactive issue detection
  • API Development & Integration: Design and implement APIs to integrate Databricks services with external platforms (ServiceNow, monitoring tools)

Senior Data Platform Engineering Manager

We are seeking an experienced Senior Data Platform Engineering Manager to lead a...
Location: United States, Illinois
Salary: 130295.00 - 260590.00 USD / Year
CVS Health
Expiration Date: January 19, 2026

Requirements:
  • 8+ years of experience in data engineering, platform management, or a related role, with a strong focus on cloud data technologies
  • Proven experience managing and leading technical teams
  • Deep, hands-on experience with the Azure Databricks platform, including its administration, cluster management, job orchestration, and key features like Delta Lake and Unity Catalog
  • Relevant certifications such as Microsoft Certified: Azure Solutions Architect Expert, Databricks Certified Data Engineer Professional, and Databricks Certified Platform Administrator
  • A strong understanding of distributed computing principles and the architecture of large-scale data systems
  • Exceptional analytical and problem-solving skills with a track record of diagnosing and resolving complex, platform-level issues
  • Excellent communication, documentation, and presentation skills, with the ability to influence technical and business stakeholders
  • Proficiency in Terraform, Python, Apache Spark/PySpark, SQL, and Shell scripting
  • Leading the efforts of 7 platform engineers
  • Preferred: Familiarity with healthcare data and healthcare insurance feeds, Data Analytics, and ML/AI
Job Responsibility:
  • Own the strategic vision and roadmap for the Azure Databricks platform, aligning it with company-wide data and business goals
  • Lead, mentor, and build a high-performing team of data engineers and platform specialists
  • Act as the primary point of contact for platform-related issues and opportunities, communicating effectively with executive leadership and technical teams
  • Ensure new data engineering team members are onboarded consistently with best practices
  • Advocate for best practices, create reusable patterns and libraries, and provide expert-level technical support and training to data engineering teams
  • Ensure the platform's operational health, including performance, security, and cost management
  • Oversee platform maintenance, upgrades, and migrations to ensure reliability and minimize downtime
  • Establish and enforce best practices for development, data governance, quality, and security on the platform
  • Drive the continuous enhancement of the platform by introducing new tools, features, and capabilities
  • Evaluate and integrate new data technologies to improve efficiency, scalability, and performance
What we offer:
  • Affordable medical plan options, a 401(k) plan (including matching company contributions), and an employee stock purchase plan
  • No-cost programs for wellness screenings, tobacco cessation, and weight management programs
  • Confidential counseling and financial coaching
  • Paid time off, flexible work schedules, family leave, dependent care resources, colleague assistance programs, tuition assistance, and retiree medical access
  • Award target in the company’s equity award program
  • Full-time

Manager, Engineering

Join Enveda as a Manager, Engineering in Hyderabad and help us transform natural...
Location: India, Hyderabad
Salary: Not provided
Enveda
Expiration Date: Until further notice

Requirements:
  • Strong background in software engineering with experience in Vue, Python, Databricks, and Azure
  • Product mindset with a focus on user needs and scalable solutions
  • Leadership skills to mentor engineers and foster a culture of collaboration and improvement
Job Responsibility:
  • Architect and deliver systems that automate scientific workflows
  • Build and maintain solutions using Vue, Python, Databricks, and Azure
  • Design integrations between core scientific platforms for data flow efficiency
What we offer:
  • Culture
  • Medical
  • Block Leaves
  • Work-Life Harmony
  • Full-time