Databricks Engineer

Coherent Solutions

Contract Type: Not provided
Salary: Not provided

Job Description:

Our client is revolutionizing the field of cell therapy manufacturing by developing an advanced, scalable, and cost-effective solution that improves accessibility for patients in need. With a mission-driven culture and a multidisciplinary team, they are dedicated to accelerating life-saving therapies using cutting-edge technologies and innovation. Our client develops a unique device, a “factory-in-a-box” designed for industrial-scale cell therapy manufacturing. The accompanying web application enables users to define customized cell therapy workflows with complete control over process parameters, scale manufacturing execution and process monitoring, and seamlessly track in-process data while automatically generating comprehensive electronic batch records.

Job Responsibilities:

  • Design, build, and maintain data pipelines using Databricks and Delta Live Tables for real-time and batch data processing (a minimal pipeline sketch follows this list)
  • Collaborate with cross-functional teams to ensure smooth data flow from diverse log-based sources
  • Participate in both individual and collaborative work, ensuring scalability, reliability, and performance of data solutions
  • Define and implement best practices for data development and deployment processes on the Databricks platform
  • Proactively address technical challenges in a project environment, proposing and implementing effective solutions
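
For orientation, a minimal sketch of such a pipeline, assuming a Python notebook attached to a Databricks Delta Live Tables pipeline (the dlt module only resolves inside a DLT run, and the path, table, and column names are invented for illustration):

  import dlt
  from pyspark.sql import functions as F

  # `spark` is provided by the Databricks runtime inside the pipeline.

  @dlt.table(comment="Raw events landed as-is (bronze).")
  def events_raw():
      # Auto Loader picks up new files incrementally; the path is a placeholder.
      return (
          spark.readStream.format("cloudFiles")
          .option("cloudFiles.format", "json")
          .load("/Volumes/main/landing/events/")
      )

  @dlt.table(comment="Validated events (silver).")
  @dlt.expect_or_drop("valid_timestamp", "event_ts IS NOT NULL")
  def events_clean():
      # The expectation above drops rows that fail the quality rule.
      return dlt.read_stream("events_raw").withColumn("event_date", F.to_date("event_ts"))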

Requirements:

  • 5+ years of experience in Data Engineering with strong technical expertise
  • Proven hands-on experience with the Databricks Data Platform and Delta Lake
  • Experience building and managing Databricks Lakehouse solutions
  • Knowledge of Delta Live Tables or similar frameworks for real-time data ingestion is a strong plus
  • Ability to define processes from scratch and establish development workflows in a new or evolving team
  • Familiarity with data testing best practices and collaboration with QA teams to ensure data quality
  • Strong problem-solving mindset, initiative, and readiness to work in a dynamic, evolving environment
  • Ability to work with a time shift, ensuring overlap with the client until approximately 10:30 AM Pacific Time for meetings and collaboration
  • English level: Upper-Intermediate (written and spoken)

What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Additional Information:

Job Posted: December 07, 2025

Work Type: Remote work

Similar Jobs for Databricks Engineer

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location: Romania, Bucharest
Salary: Not provided
Inetum
Expiration Date: Until further notice

Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • 5+ years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale

Job Responsibilities:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking (a rough sketch follows this list)
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement and manage Unity Catalog for centralized data governance, fine-grained security (row/column-level), and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
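
As a rough illustration of the Medallion hops referenced above, a batch-style PySpark sketch over Delta tables; catalog, schema, table, and column names are invented, and a production pipeline would normally process data incrementally:

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

  # Bronze: land raw data unchanged to preserve history.
  bronze = spark.read.json("/Volumes/main/landing/orders/")
  bronze.write.format("delta").mode("append").saveAsTable("main.bronze.orders")

  # Silver: deduplicate and apply basic quality rules.
  silver = (
      spark.table("main.bronze.orders")
      .dropDuplicates(["order_id"])
      .filter(F.col("amount").isNotNull())
  )
  silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.orders")

  # Gold: business-level aggregate ready for BI consumption.
  gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
  gold.write.format("delta").mode("overwrite").saveAsTable("main.gold.customer_value")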

What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings

Work Type: Fulltime

Databricks Engineer

We are seeking a Databricks Engineer to design, build, and operate a Data & AI p...
Location: United States, Leesburg
Salary: Not provided
WINTrio
Expiration Date: Until further notice

Requirements:
  • Hands-on experience with Databricks, Delta Lake, and Apache Spark
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms
  • Familiarity with data governance, lineage tracking, and metadata management tools

Job Responsibilities:
  • Data & AI Platform Engineering (Databricks-Centric): Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation
  • Apply schema evolution and data versioning to support agile data development
  • Platform Integration & Data Ingestion: Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data (a minimal ingestion sketch follows this list)
  • Design standardized data ingestion processes with automated error handling, retries, and alerting
  • Data Quality, Monitoring, and Governance: Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement
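
A sketch of what such an ingestion step can look like with Databricks Auto Loader, which tracks and evolves schemas automatically; the paths and table names are placeholders, and JSON is assumed as the export format purely for illustration:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  stream = (
      spark.readStream.format("cloudFiles")  # Auto Loader
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation", "/Volumes/main/meta/peoplesoft_schema/")
      .option("cloudFiles.schemaEvolutionMode", "addNewColumns")  # capture new columns
      .load("/Volumes/main/landing/peoplesoft/")
  )

  (
      stream.writeStream.format("delta")
      .option("checkpointLocation", "/Volumes/main/meta/peoplesoft_checkpoint/")
      .trigger(availableNow=True)  # process everything available, then stop
      .toTable("main.bronze.peoplesoft")
  )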

Databricks Platform Engineer

Are you excited about building a world-class data platform and working with clou...
Location: Finland, Helsinki
Salary: Not provided
Supercell
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience in designing, developing, and maintaining a large-scale data platform in a complex enterprise environment
  • In-depth experience with Databricks infrastructure and services
  • Extensive Infrastructure as Code experience (preferably Terraform)
  • Software development experience (preferably Java or Python)
  • Strong collaboration and communication skills
  • Ability to innovate and work independently

Job Responsibilities:
  • Own and improve the Databricks infrastructure for data collection, storage, and processing (a small audit sketch follows this list)
  • Implement and manage flexible access controls that don’t compromise user speed and efficiency
  • Proactively suggest and implement improvements that increase scalability, robustness and availability of data systems
  • Stay up to date with new products and services released by Databricks, experiment and help make them part of Supercell’s data platform
  • Participate in 24/7 on-call to maintain batch and real-time data infrastructure
  • Contribute to common data tooling to enhance engineering productivity
  • Together with the rest of the team, develop vision and strategy for the data platform
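
As a small, read-only example of the platform housekeeping this role implies, a sketch using the Databricks Python SDK (assuming the databricks-sdk package and ambient workspace credentials; the check itself is a common cost and robustness audit):

  from databricks.sdk import WorkspaceClient

  w = WorkspaceClient()  # resolves auth from environment variables or a config profile

  # Clusters without auto-termination keep running and accumulating cost.
  for cluster in w.clusters.list():
      if not cluster.autotermination_minutes:
          print(f"{cluster.cluster_name}: no auto-termination configured")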

What we offer:
  • Relocation support for you and your family (including pets)
  • Compensation and benefits structured to help you enjoy your time
  • Work environment and resources to succeed while having fun

Work Type: Fulltime

Backend Data Engineer

The mission of the Data & Analytics (D&A) team is to enable data users to easily...
Location: United States, Cincinnati
Salary: Not provided
HonorVet Technologies
Expiration Date: Until further notice

Requirements:
  • Strong proficiency in Databricks (SQL, PySpark, Delta Lake, Jobs/Workflows)
  • Deep knowledge of Unity Catalog administration and APIs
  • Expertise in Python for automation scripts, API integrations, and data quality checks
  • Experience with governance frameworks (access control, tagging enforcement, lineage, compliance)
  • Solid foundation in security & compliance best practices (IAM, encryption, PII)
  • Experience with CI/CD and deployment pipelines (GitHub Actions, Azure DevOps, Jenkins)
  • Familiarity with monitoring/observability tools and building custom logging & alerting pipelines
  • Experience integrating with external systems (ServiceNow, monitoring platforms)
  • Experience with modern data quality frameworks (Great Expectations, Deequ, or equivalent)
  • Strong problem-solving and debugging skills in distributed systems

Job Responsibilities:
  • Databricks & Unity Catalog Engineering: Build and maintain backend services leveraging Databricks (SQL, PySpark, Delta Lake, Jobs/Workflows)
  • Administer Unity Catalog including metadata, permissions, lineage, and tags
  • Integrate Unity Catalog APIs to surface data into the Metadata Catalog UI
  • Governance Automation: Develop automation scripts and pipelines to enforce access controls, tagging, and role-based policies (a grant-automation sketch follows this list)
  • Implement governance workflows integrating with tools such as ServiceNow for request and approval processes
  • Automate compliance checks for regulatory and security requirements (IAM, PII handling, encryption)
  • Data Quality & Observability: Implement data quality frameworks (Great Expectations, Deequ, or equivalent) to validate datasets
  • Build monitoring and observability pipelines for logging, usage metrics, audit trails, and alerts
  • Ensure high system reliability and proactive issue detection
  • API Development & Integration: Design and implement APIs to integrate Databricks services with external platforms (ServiceNow, monitoring tools)
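
A minimal sketch of grant automation against Unity Catalog, as it might run inside a scheduled Databricks job; the catalog, schema, and group names are hypothetical, and a real policy engine would read such rules from configuration rather than hard-coding them:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()

  # Hypothetical policy: the analysts group gets read-only access to every gold table.
  tables = [row.tableName for row in spark.sql("SHOW TABLES IN main.gold").collect()]
  for name in tables:
      spark.sql(f"GRANT SELECT ON TABLE main.gold.{name} TO `data-analysts`")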

Data Analytics Engineer

SDG Group is expanding its global Data & Analytics practice and is seeking a mot...
Location: Egypt, Cairo
Salary: Not provided
SDG
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field
  • Hands-on experience in DataOps / Data Engineering
  • Strong knowledge in Databricks OR Snowflake (one of them is mandatory)
  • Proficiency in Python and SQL
  • Experience with Azure data ecosystem (ADF, ADLS, Synapse, etc.)
  • Understanding of CI/CD practices and DevOps for data
  • Knowledge of data modeling, orchestration frameworks, and monitoring tools
  • Strong analytical and troubleshooting skills
  • Eagerness to learn and grow in a global consulting environment

Job Responsibilities:
  • Design, build, and maintain scalable and reliable data pipelines following DataOps best practices
  • Work with modern cloud data stacks using Databricks (Spark, Delta Lake) or Snowflake (Snowpipe, tasks, streams)
  • Develop and optimize ETL/ELT workflows using Python, SQL, and orchestration tools
  • Work with Azure data services (ADF, ADLS, Azure SQL, Azure Functions)
  • Implement CI/CD practices using Azure DevOps or Git-based workflows (a minimal CI test sketch follows this list)
  • Ensure data quality, consistency, and governance across all delivered data solutions
  • Monitor and troubleshoot pipelines for performance and operational excellence
  • Collaborate with international teams, architects, and analytics consultants
  • Contribute to technical documentation and solution design assets
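
One common way to put data code under CI (in Azure DevOps or any Git-based pipeline) is plain pytest over a local SparkSession; the transformation and column names here are invented for illustration:

  import pytest
  from pyspark.sql import SparkSession

  def dedupe_orders(df):
      # Transformation under test: keep one row per order_id.
      return df.dropDuplicates(["order_id"])

  @pytest.fixture(scope="session")
  def spark():
      return SparkSession.builder.master("local[1]").appName("ci-tests").getOrCreate()

  def test_dedupe_orders(spark):
      df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["order_id", "sku"])
      assert dedupe_orders(df).count() == 2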

What we offer:
  • Remote working model aligned with international project needs
  • Opportunity to work on European and global engagements
  • Mentorship and growth paths within SDG Group
  • A dynamic, innovative, and collaborative environment
  • Access to world-class training and learning platforms

Work Type: Fulltime

Data Engineer

We are looking for an experienced Data Engineer with deep expertise in Databrick...
Location: Not provided
Salary: Not provided
Coherent Solutions
Expiration Date: Until further notice

Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
  • Strong proficiency in Python and/or Scala for data processing
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
  • Experience with data governance, security, and compliance in cloud environments
  • Excellent problem-solving, communication, and leadership skills
  • English: Upper Intermediate level or higher

Job Responsibilities:
  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow); an MLflow logging sketch follows this list
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
  • Develop and maintain documentation, data models, and technical standards
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments
  • Stay current with emerging trends and technologies in data engineering and Databricks ecosystem
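
For the MLflow piece, a minimal sketch of tracking a pipeline run; the run name, parameter, and metric values are placeholders:

  import mlflow

  # Record what a nightly ETL run did, so runs are comparable over time.
  with mlflow.start_run(run_name="nightly_etl"):
      mlflow.log_param("source_table", "main.bronze.orders")
      mlflow.log_metric("rows_processed", 125000)
      mlflow.log_metric("null_rate", 0.002)  # simple data-quality signal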

What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Lead Data Engineer

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and...
Location: United States
Salary: Not provided
Rearc
Expiration Date: Until further notice

Requirements:
  • 10+ years of experience in data engineering, data architecture, or related technical fields
  • Proven ability to design, build, and optimize large-scale data ecosystems
  • Strong track record of leading complex data engineering initiatives
  • Deep hands-on expertise in ETL/ELT design, data warehousing, and data modeling
  • Extensive experience with data integration frameworks and best practices
  • Advanced knowledge of cloud-based data services and architectures (AWS Redshift, Azure Synapse Analytics, Google BigQuery, or equivalent)
  • Strong strategic and analytical thinking
  • Proficiency with modern data engineering frameworks (Databricks, Spark, lakehouse technologies like Delta Lake)
  • Exceptional communication and interpersonal skills

Job Responsibilities:
  • Engage deeply with stakeholders to understand data needs, business challenges, and technical constraints
  • Translate stakeholder needs into scalable, high-quality data solutions
  • Implement with a DataOps mindset using tools like Apache Airflow, Databricks/Spark, and Kafka (an orchestration sketch follows this list)
  • Build reliable, automated, and efficient data pipelines and architectures
  • Lead and execute complex projects
  • Provide technical direction and set engineering standards
  • Ensure alignment with customer goals and company principles
  • Mentor and develop data engineers
  • Promote knowledge sharing and thought leadership
  • Contribute to internal and external content
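
As an orchestration sketch, a minimal Airflow DAG submitting a Databricks notebook run, assuming Airflow 2.x with the apache-airflow-providers-databricks package and a configured databricks_default connection; the cluster ID and notebook path are placeholders:

  from datetime import datetime
  from airflow import DAG
  from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

  with DAG(
      dag_id="nightly_lakehouse_refresh",
      start_date=datetime(2025, 1, 1),
      schedule_interval="@daily",
      catchup=False,
  ) as dag:
      refresh_silver = DatabricksSubmitRunOperator(
          task_id="refresh_silver",
          databricks_conn_id="databricks_default",
          existing_cluster_id="1234-567890-abcde123",  # placeholder cluster ID
          notebook_task={"notebook_path": "/Repos/data/silver_refresh"},
      )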

What we offer:
  • Comprehensive health benefits
  • Generous time away and flexible PTO
  • Maternity and paternity leave
  • Access to educational resources with reimbursement for continued learning
  • 401(k) plan with company contribution