
Senior Databricks Data Engineer


Inetum


Location:
Romania, Bucharest



Contract Type:
Employment contract


Salary:

Not provided

Job Description:

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakehouse solutions using the Databricks platform to ensure a scalable, high-performance, and governed data foundation for analytics, reporting, and Machine Learning.

Job Responsibilities:

  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers.
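As a rough illustration of the Medallion streaming work described above, a Bronze-to-Silver hop can be sketched in Delta Live Tables SQL. This is a minimal sketch under assumed names: the source path, table names, columns, and the quality constraint are all hypothetical, not Inetum's actual pipelines.

```sql
-- Bronze: ingest raw JSON files with Auto Loader (cloud_files), stamping arrival time.
-- Path and table names are illustrative assumptions.
CREATE OR REFRESH STREAMING TABLE bronze_orders
AS SELECT *, current_timestamp() AS ingested_at
FROM cloud_files('/mnt/raw/orders', 'json');

-- Silver: cleaned, typed records; rows failing the expectation are dropped.
CREATE OR REFRESH STREAMING TABLE silver_orders (
  CONSTRAINT valid_order EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW
)
AS SELECT order_id, customer_id, CAST(amount AS DECIMAL(18, 2)) AS amount, ingested_at
FROM STREAM(LIVE.bronze_orders);
```

A Gold layer would typically follow the same pattern, aggregating Silver tables into business-level materialized views.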

Requirements:

  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDDs, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and data modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • At least 5 years of experience in Data Engineering, including at least 3 years working with Databricks and Spark at scale.
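The Delta Lake operations named in the requirements (Merge, Optimize, Vacuum, Time Travel) look roughly like this in Databricks SQL. A sketch only: the schemas, tables, and column names are assumptions for illustration.

```sql
-- Upsert change records from a staging table into a Delta table (illustrative names).
MERGE INTO silver.customers AS tgt
USING staging.customer_updates AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Compact small files and co-locate rows for faster point lookups.
OPTIMIZE silver.customers ZORDER BY (customer_id);

-- Remove data files unreferenced by table versions older than 7 days.
VACUUM silver.customers RETAIN 168 HOURS;

-- Time Travel: query the table as of an earlier version.
SELECT * FROM silver.customers VERSION AS OF 42;
```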

Nice to have:

  • Hands-on experience with implementing and managing Unity Catalog
  • Experience with Delta Live Tables (DLT) and Databricks Workflows
  • Understanding of basic MLOps concepts and experience with MLflow to facilitate integration with Data Science teams
  • Experience with Terraform or equivalent Infrastructure as Code (IaC) tools
  • Databricks certifications (e.g., Databricks Certified Data Engineer Professional).
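The row/column-level security mentioned among the responsibilities is expressed in Unity Catalog via row filter and column mask functions. The catalog, schema, table, column, and group names below are hypothetical; this is a sketch of the mechanism, not a prescribed policy.

```sql
-- Row filter: members of 'admins' see all rows; everyone else only the EU region.
CREATE OR REPLACE FUNCTION main.security.region_filter(region STRING)
RETURN IS_ACCOUNT_GROUP_MEMBER('admins') OR region = 'EU';

ALTER TABLE main.sales.orders
  SET ROW FILTER main.security.region_filter ON (region);

-- Column mask: redact card numbers for non-admins.
CREATE OR REPLACE FUNCTION main.security.mask_card(card STRING)
RETURN CASE WHEN IS_ACCOUNT_GROUP_MEMBER('admins') THEN card ELSE '****' END;

ALTER TABLE main.sales.orders
  ALTER COLUMN card_number SET MASK main.security.mask_card;
```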

What we offer:
  • Full access to a foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on a flexible benefits platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.

Additional Information:

Job Posted:
November 21, 2025

Employment Type:
Full-time

Work Type:
Hybrid work



Similar Jobs for Senior Databricks Data Engineer

Senior ML Data Engineer

As a Senior Data Engineer, you will play a pivotal role in our AI/ML workstream,...
Location:
Poland, Warsaw
Salary:
Not provided
Awin Global
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Data Science, Data Engineering, or Computer Science with a focus on math and statistics (Master’s degree preferred)
  • At least 5 years of experience as an AI/ML data engineer undertaking the above tasks and accountabilities
  • Strong foundation in computer science principles and statistical methods
  • Strong experience with cloud technology (AWS or Azure)
  • Strong experience with building data ingestion pipelines and ETL processes
  • Strong knowledge of big data tools such as Spark, Databricks, and Python
  • Strong understanding of common machine learning techniques and frameworks (e.g., MLflow)
  • Strong knowledge of Natural Language Processing (NLP) concepts
  • Strong knowledge of Scrum practices and an agile mindset
  • Strong analytical and problem-solving skills with attention to data quality and accuracy
Job Responsibilities:
  • Design and maintain scalable data pipelines and storage systems for both agentic and traditional ML workloads
  • Productionise LLM- and agent-based workflows, ensuring reliability, observability, and performance
  • Build and maintain feature stores, vector/embedding stores, and core data assets for ML
  • Develop and manage end-to-end traditional ML pipelines: data prep, training, validation, deployment, and monitoring
  • Implement data quality checks, drift detection, and automated retraining processes
  • Optimise cost, latency, and performance across all AI/ML infrastructure
  • Collaborate with data scientists and engineers to deliver production-ready ML and AI systems
  • Ensure AI/ML systems meet governance, security, and compliance requirements
  • Mentor teams and drive innovation across both agentic and classical ML engineering practices
  • Participate in team meetings and contribute to project planning and strategy discussions
What we offer:
  • Flexi-Week and Work-Life Balance: We prioritise your mental health and well-being, offering you a flexible four-day Flexi-Week at full pay and with no reduction to your annual holiday allowance. We also offer a variety of different paid special leaves as well as volunteer days
  • Remote Working Allowance: You will receive a monthly allowance to cover part of your running costs. In addition, we will support you in setting up your remote workspace appropriately
  • Pension: Awin offers access to an additional pension insurance to all employees in Germany
  • Flexi-Office: We offer an international culture and flexibility through our Flexi-Office and hybrid/remote work possibilities to work across Awin regions
  • Development: We’ve built our extensive training suite Awin Academy to cover a wide range of skills that nurture you professionally and personally, with trainings conveniently packaged together to support your overall development
  • Appreciation: Thank and reward colleagues by sending them a voucher through our peer-to-peer program

Senior Data Engineer

Our client is a global jewelry manufacturer undergoing a major transformation, m...
Location:
Poland, Wroclaw
Salary:
Not provided
Zoolatech
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server
  • Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes
  • Practical experience with Azure Data Factory, Databricks, and PySpark
  • Track record of designing, building, and delivering production-ready data products at enterprise scale
  • Strong analytical skills and ability to translate business requirements into technical solutions
  • Excellent communication skills in English, with the ability to adapt technical details for different audiences
  • Experience working in Agile/Scrum teams
Job Responsibilities:
  • Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform
  • Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models
  • Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies
  • Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance
  • Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions
  • Monitor and own the stability of delivered data products, ensuring continuous improvements and measurable business benefits
  • Promote a “build-once, consume-many” approach to maximize reuse and value creation across business verticals
  • Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering
What we offer:
  • Paid Vacation
  • Sick Days
  • Sport/Insurance Compensation
  • English Classes
  • Charity
  • Training Compensation

Senior Data Engineer

Our client is a global jewelry manufacturer undergoing a major transformation, m...
Location:
Turkey, Istanbul
Salary:
Not provided
Zoolatech
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience as a Data Engineer with proven expertise in Azure Synapse Analytics and SQL Server
  • Advanced proficiency in SQL, covering relational databases, data warehousing, dimensional modeling, and cubes
  • Practical experience with Azure Data Factory, Databricks, and PySpark
  • Track record of designing, building, and delivering production-ready data products at enterprise scale
  • Strong analytical skills and ability to translate business requirements into technical solutions
  • Excellent communication skills in English, with the ability to adapt technical details for different audiences
  • Experience working in Agile/Scrum teams
Job Responsibilities:
  • Design, build, and maintain scalable, efficient, and reusable data pipelines and products on the Azure PaaS data platform
  • Collaborate with product owners, architects, and business stakeholders to translate requirements into technical designs and data models
  • Enable advanced analytics, reporting, and other data-driven use cases that support commercial initiatives and operational efficiencies
  • Ingest, transform, and optimize large, complex data sets while ensuring data quality, reliability, and performance
  • Apply DevOps practices, CI/CD pipelines, and coding best practices to ensure robust, production-ready solutions
  • Monitor and own the stability of delivered data products, ensuring continuous improvements and measurable business benefits
  • Promote a “build-once, consume-many” approach to maximize reuse and value creation across business verticals
  • Contribute to a culture of innovation by following best practices while exploring new ways to push the boundaries of data engineering
What we offer:
  • Paid Vacation
  • Hybrid Work (home/office)
  • Sick Days
  • Sport/Insurance Compensation
  • Holidays Day Off
  • English Classes
  • Training Compensation
  • Transportation compensation

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location:
Ireland, Dublin
Salary:
Not provided
Solas IT Recruitment
Expiration Date
Until further notice
Requirements:
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Experience with Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibilities:
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable

Senior Software Engineer, Data Platform

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location:
United States, San Mateo
Salary:
180000.00 - 240000.00 USD / Year
Skydio
Expiration Date
Until further notice
Requirements:
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks, including building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (we use Python)
  • Comfort working across the full data stack — from ingestion and transformation to orchestration and visualization
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics
Job Responsibilities:
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones
  • Build and evolve our Databricks and Palantir Foundry environments to empower every Skydian to query data, define jobs, and build dashboards
  • Develop data systems that make our products truly data-driven — from predictive analytics that anticipate hardware failures, to 3D connectivity mapping, to in-depth flight telemetry analysis
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring, observability, and contributing to the on-call rotation for critical data systems
What we offer:
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan

Senior Data Engineer

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location:
United States, San Mateo
Salary:
170000.00 - 230000.00 USD / Year
Skydio
Expiration Date
Until further notice
Requirements:
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks or Palantir Foundry, including building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (for example, Python or Java)
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics
Job Responsibilities:
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones
  • Build and evolve our Databricks and Palantir Foundry environments
  • Develop data systems that make our products truly data-driven
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring, observability, and contributing to the on-call rotation for critical data systems
What we offer:
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Group health insurance plans
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan

Senior Data Engineer

At Rearc, we're committed to empowering engineers to build awesome products and ...
Location:
India, Bangalore
Salary:
Not provided
Rearc
Expiration Date
Until further notice
Requirements:
  • 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases
  • Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments
  • Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows
  • Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT, or AWS Glue
  • Hands-on experience with data analysis tools and libraries like Pyspark, NumPy, Pandas, or Dask
  • Proficiency with Spark and Databricks is highly desirable
  • Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg and DynamoDB
  • In-depth knowledge of data architecture principles and best practices, especially in cloud environments
  • Proven experience with AWS services, including expertise in using AWS CLI, SDK, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK
  • Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders
Job Responsibilities:
  • Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives
  • Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability
  • Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes
  • Technical Expertise: Apply deep expertise in ETL processes, data modelling, and data warehousing to optimize data workflows and ensure data integrity and quality
  • Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions—mentor and coach junior team members, fostering their growth and development in data engineering practices
  • Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums

Senior Data Engineer

As a Senior Data Engineer at Rearc, you'll play a pivotal role in establishing a...
Location:
United States, New York
Salary:
160000.00 - 200000.00 USD / Year
Rearc
Expiration Date
Until further notice
Requirements:
  • 8+ years of professional experience in data engineering across modern cloud architectures and diverse data systems
  • Expertise in designing and implementing data warehouses and data lakes across modern cloud environments (e.g., AWS, Azure, or GCP), with experience in technologies such as Redshift, BigQuery, Snowflake, Delta Lake, or Iceberg
  • Strong Python experience for data engineering, including libraries like Pandas, PySpark, NumPy, or Dask
  • Hands-on experience with Spark and Databricks (highly desirable)
  • Experience building and orchestrating data pipelines using Airflow, Databricks, DBT, or AWS Glue
  • Strong SQL skills and experience with both SQL and NoSQL databases (PostgreSQL, DynamoDB, Redshift, Delta Lake, Iceberg)
  • Solid understanding of data architecture principles, data modeling, and best practices for scalable data systems
  • Experience with cloud provider services (AWS, Azure, or GCP) and comfort using command-line interfaces or SDKs as part of development workflows
  • Familiarity with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, ARM/Bicep, or AWS CDK
  • Excellent communication skills, able to explain technical concepts to technical and non-technical stakeholders
Job Responsibilities:
  • Provide strategic data engineering leadership by shaping the vision, roadmap, and technical direction of data initiatives to align with business goals
  • Architect and build scalable, reliable data solutions, including complex data pipelines and distributed systems, using modern frameworks and technologies (e.g., Spark, Kafka, Kubernetes, Databricks, DBT)
  • Drive innovation by evaluating, proposing, and adopting new tools, patterns, and methodologies that improve data quality, performance, and efficiency
  • Apply deep technical expertise in ETL/ELT design, data modeling, data warehousing, and workflow optimization to ensure robust, high-quality data systems
  • Collaborate across teams—partner with engineering, product, analytics, and customer stakeholders to understand requirements and deliver impactful, scalable solutions
  • Mentor and coach junior engineers, fostering growth, knowledge-sharing, and best practices within the data engineering team
  • Contribute to thought leadership through knowledge-sharing, writing technical articles, speaking at meetups or conferences, or representing the team in industry conversations
What we offer:
  • Health Benefits
  • Generous time away
  • Maternity and Paternity leave
  • Educational resources and reimbursements
  • 401(k) plan with a company contribution