Senior Databricks Developer

Realign (realign-llc.com)

Location:
United States

Contract Type:
Not provided

Salary:
200,000.00 USD / year

Job Description:

Must-Have Technical/Functional Skills; Roles & Responsibilities

Job Responsibilities:

  • Design, develop, and maintain scalable ETL/ELT pipelines using Databricks, PySpark, and Spark SQL
  • Optimize Spark jobs—including partitioning, caching, cluster sizing, shuffle minimization, and cost-efficient workload design
  • Build and manage workflows using Databricks Jobs, Repos, Delta Live Tables, and Unity Catalog
  • Develop and refine DBT models, tests, seeds, macros, and documentation to support standardized transformation layers
  • Implement modular, version-controlled DBT pipelines aligned with data governance and quality practices
  • Partner with data consumers to ensure models align with business definitions, lineage, and auditability
  • Create curated, reusable, and well-governed data assets (gold/silver/bronze layers) for analytics, reporting, and ML use cases
  • Continuously refine and optimize data assets for consistency, reliability, and usability across teams
  • Drive standardization of data patterns, frameworks, and reusable components
  • Identify and implement engineering efficiencies across Databricks and Spark workloads—cluster optimization, code improvements, auto-scaling patterns, and job orchestration enhancements
  • Collaborate with platform engineering to enhance DevOps automation, CI/CD pipelines, and environment management
  • Improve cost governance through workload analysis, optimization, and proactive cost monitoring
  • Conduct Spark job tuning and pipeline performance optimization to improve processing speed and reduce compute spend
  • Troubleshoot production issues and deliver durable fixes that improve long-term reliability
  • Implement best practices for Delta Lake performance (ZORDER, auto-optimize, vacuum, retention tuning); see the sketch after this list
  • Implement end-to-end observability for data pipelines, including logging, metrics, tracing, and alerting
  • Integrate Databricks with monitoring ecosystems (e.g., Azure Monitor, CloudWatch, Datadog)
  • Ensure pipeline SLAs/SLOs are clearly defined and consistently met
  • Work closely with data architects, analysts, business SMEs, and platform teams
  • Provide technical leadership, review code, mentor junior engineers, and advocate for engineering excellence
  • Translate business requirements into scalable, production-quality data solutions
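
As a rough illustration of the Delta Lake maintenance practices referenced in the list above, the following minimal PySpark sketch writes a date-partitioned Delta table, compacts and Z-orders it, enables auto-optimize table properties, and vacuums old files. It is a sketch only: the catalog, table, and column names (analytics.silver.events, analytics.gold.events, event_date, user_id) and the 168-hour retention window are hypothetical placeholders, not details taken from this job description.

# Minimal sketch of routine Delta Lake maintenance on Databricks.
# Table, column, and retention values below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

table_name = "analytics.gold.events"  # hypothetical Unity Catalog table

# Write a date-partitioned Delta table so downstream reads can prune partitions.
(spark.read.table("analytics.silver.events")   # hypothetical source table
      .write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable(table_name))

# Compact small files and co-locate rows on a common filter column.
spark.sql(f"OPTIMIZE {table_name} ZORDER BY (user_id)")

# Let future writes produce fewer small files.
spark.sql(f"ALTER TABLE {table_name} SET TBLPROPERTIES "
          "('delta.autoOptimize.optimizeWrite' = 'true', 'delta.autoOptimize.autoCompact' = 'true')")

# Remove files no longer referenced by the table, honoring the retention window.
spark.sql(f"VACUUM {table_name} RETAIN 168 HOURS")  # 7 days; tune to the actual retention policy

In practice the Z-order columns and retention window would be chosen from observed query patterns and the team's compliance requirements rather than the placeholders used here.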

Requirements:

  • 7+ years of experience in Data Engineering, with 3–5+ years on Databricks
  • Advanced proficiency in Apache Spark, PySpark, SQL, and distributed data processing
  • Strong experience with DBT (Core or Cloud) for building robust transformation layers
  • Hands-on expertise in data asset modeling, curation, optimization, and lifecycle management
  • Proven experience with job tuning, performance debugging, and cluster optimization
  • Experience implementing observability solutions for data pipelines
  • Solid understanding of Delta Lake, lakehouse architecture, and data governance
  • Experience with cloud platforms (Azure preferred; AWS/GCP acceptable)
  • Strong Git-based development workflows and CI/CD experience

Additional Information:

Job Posted:
March 21, 2026

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Senior Databricks Developer

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum (https://www.inetum.com)
Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • Minimum of 5+ years of experience in Data Engineering, with at least 3+ years of experience working with Databricks and Spark at scale
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Expertise in implementing and optimizing the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT); see the sketch after this list
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
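
As a loose, non-authoritative illustration of the streaming responsibility above, the following minimal PySpark Structured Streaming sketch appends raw JSON files to a Bronze Delta table. The landing path, checkpoint location, schema fields, and table name (lakehouse.bronze.events) are hypothetical placeholders, not details from this listing.

# Minimal Structured Streaming sketch: land raw JSON files into a Bronze Delta table.
# Paths, schema fields, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

bronze_stream = (
    spark.readStream
         .schema(schema)               # an explicit schema keeps file-based streams deterministic
         .json("/mnt/raw/events/")     # placeholder landing path
)

query = (
    bronze_stream.writeStream
                 .format("delta")
                 .option("checkpointLocation", "/mnt/checkpoints/bronze_events")  # placeholder
                 .outputMode("append")
                 .toTable("lakehouse.bronze.events")   # placeholder Bronze table
)

query.awaitTermination()

A production pipeline for this kind of responsibility would more likely use Auto Loader or Delta Live Tables than a hand-rolled stream, but the shape of the ingestion is the same.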
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings
Employment Type: Full-time

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum (https://www.inetum.com)
Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Expertise in implementing and optimizing the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Efficient implementation of the Lakehouse architecture on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.
Employment Type: Full-time

Senior Application Development Engineer

Atlassian Corporate Engineering (ACE) is hiring a Senior Application Development...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian (https://www.atlassian.com)
Expiration Date:
Until further notice
Requirements:
  • Expertise in Oracle Fusion Cloud and Other SaaS systems (Zuora, Coupa, Fieldglass, Anaplan, Avalara)
  • At least 12+ years of industry experience working hands-on as an application development engineer
  • Good understanding of application development supporting large organizations. Experience in running transformations would be a huge plus
  • Experience in automation, PaaS and BRE tools (RPA, Workato, Camunda)
  • Exposure to upstream and downstream systems such as SalesForce, Databricks would be a plus
  • Familiarity with AI/ML, and implementation of use cases for any business function
  • Comfortable operating a 24x7 highly reliable system and driving a culture of continuous improvement from the ashes of incidents
  • Strong interpersonal, communication and presentation skills to work with stakeholders and other teams
  • Passion to make a difference every day
Job Responsibility:
  • You will be leading parts of run-the-business operations, projects, and partnering across engineering teams to take on company-wide initiatives spanning multiple projects
  • Leverage new Fusion features that come out after every quarterly upgrade for our Finance team
  • Work on all Core Financial modules (OM, AR, AP, FA, GL) as needed
  • Ensure all your assigned projects are meeting OKRs
  • Implement RPA and low-code/no-code tools to drive efficiency across the Finance organization
  • Integrate and maintain the integrations between Fusion Cloud and the Finance SaaS footprint
  • Help measure business teams' metrics and outcomes
  • Work with Internal Audit to ensure our systems continue to be SOX compliant
  • Understand Atlassian’s E2E business flow and systems landscape and how upstream changes impact Finance
  • Deliver short-term wins while not compromising long-term architecture
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type: Full-time

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location:
India, Hyderabad
Salary:
Not provided
Appen (appen.com)
Expiration Date:
Until further notice
Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs (see the sketch after this list)
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
Employment Type: Full-time
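
As a loose illustration of the CI/CD and workflow-automation duties above, the sketch below triggers an existing Databricks job and polls it to completion through the Jobs 2.1 REST API from Python. The workspace host, token environment variables, and job ID are hypothetical placeholders, and error handling is kept to a minimum.

# Minimal sketch: trigger a Databricks job and poll its status via the Jobs 2.1 REST API.
# DATABRICKS_HOST / DATABRICKS_TOKEN environment variables and job_id are placeholders.
import os
import time

import requests

host = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.cloud.databricks.com
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Kick off a run of an existing job (hypothetical job_id).
resp = requests.post(f"{host}/api/2.1/jobs/run-now", headers=headers, json={"job_id": 123})
resp.raise_for_status()
run_id = resp.json()["run_id"]

# Poll until the run reaches a terminal lifecycle state.
while True:
    run = requests.get(f"{host}/api/2.1/jobs/runs/get",
                       headers=headers, params={"run_id": run_id}).json()
    state = run["state"]["life_cycle_state"]
    if state in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
        print("Run finished with result:", run["state"].get("result_state"))
        break
    time.sleep(30)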

Senior Data Engineer

At Rearc, we're committed to empowering engineers to build awesome products and ...
Location:
India, Bangalore
Salary:
Not provided
Rearc (rearc.io)
Expiration Date:
Until further notice
Requirements:
  • 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases
  • Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments
  • Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows
  • Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue
  • Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask
  • Proficiency with Spark and Databricks is highly desirable
  • Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg and DynamoDB
  • In-depth knowledge of data architecture principles and best practices, especially in cloud environments
  • Proven experience with AWS services, including expertise in using AWS CLI, SDK, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK
  • Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders
Job Responsibility:
  • Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives
  • Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability
  • Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes
  • Technical Expertise: Apply deep expertise in ETL processes, data modelling, and data warehousing to optimize data workflows and ensure data integrity and quality
  • Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions—mentor and coach junior team members, fostering their growth and development in data engineering practices
  • Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums

Senior Data Engineer

Collaborate with engineering and TPM leaders, developers, and process engineers ...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian (https://www.atlassian.com)
Expiration Date:
Until further notice
Requirements:
  • BS in Computer Science or equivalent experience with 8+ years as a Senior Data Engineer or similar role
  • 10+ years of progressive experience in building scalable datasets and reliable data engineering practices
  • Proficiency in Python, SQL, and data platforms like Databricks
  • Proficiency in relational databases and query authoring (SQL)
  • Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements
  • Experience building and scaling experimentation practices, statistical methods, and tools in a large-scale organization
  • Excellence in building scalable data pipelines using Spark (SparkSQL) with Airflow scheduler/executor framework or similar scheduling tools
  • Expert experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka)
  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team
  • Well-versed in modern software development practices (Agile, TDD, CI/CD)
Job Responsibility:
  • Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting objectives of incident prevention and reducing detection, mitigation, and communication times.
  • Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.
  • Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.
  • Serve as the data domain expert, mastering the details of our incident management infrastructure.
  • Take full ownership of problems from ambiguous requirements through rapid iterations.
  • Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues.
  • Cultivate strong relationships between teams that produce data and those that build insights.
Employment Type: Full-time

Senior Software Engineer

We are looking for a highly skilled Senior Software Engineer to join our team on...
Location:
Ireland, Dublin
Salary:
Not provided
Bentley Systems (bentley.com)
Expiration Date:
Until further notice
Requirements:
  • Expert-level proficiency in C# and .NET Framework/Core, including advanced software design patterns
  • Proven experience building high-performance back-end systems and API-driven architectures
  • Strong hands-on experience with Azure Cloud Services and cloud-native development
  • Practical knowledge of big data ecosystems, including data lakes and distributed data processing
  • Proficiency in Databricks, SQL, and Entity Framework
  • Solid understanding of async/await, Task-based programming, and concurrency patterns
  • Familiarity with Git and modern version control practices
  • Excellent problem-solving skills and ability to work in a fast-paced, agile environment
Job Responsibility:
  • Architect and develop robust back-end services using C# and .NET, ensuring high performance and scalability
  • Design and optimize big data pipelines and ETL/ELT workflows leveraging Databricks and Azure Data Lake
  • Build and maintain RESTful APIs, integrating seamlessly with internal and external systems
  • Collaborate with DevOps to enhance CI/CD pipelines and deployment automation
  • Deploy and manage containerized applications using Kubernetes (k8s) for scalable cloud environments
  • Ensure code quality and maintainability through Git workflows, code reviews, and automated testing
  • Actively participate in Agile ceremonies, contributing to sprint planning, retrospectives, and daily stand-ups
What we offer:
  • A great team and culture
  • An exciting career as part of a world-leading software company providing solutions for architecture, engineering, and construction
  • An attractive salary and benefits package designed to reward your expertise
  • A commitment to inclusion, belonging, and well-being through global initiatives and resource groups
  • A mission-driven company dedicated to advancing the world’s infrastructure for a better quality of life

Senior Data Architect

Experienced Data Architect to design, develop, and implement comprehensive enter...
Location:
India, Noida; Chennai; Bangalore
Salary:
Not provided
Sopra Steria (https://www.soprasteria.com)
Expiration Date:
Until further notice
Requirements:
  • 11+ years of experience in data engineering/management roles
  • 3+ years in enterprise-level data architecture and data governance
  • Proven experience of defining the global architecture of the Single Source of Truth and its three components (Access, Knowledge, Trust)
  • Design and document data models and architecture frameworks aligned with group standards
  • Integrate real-time and streaming (event-driven architecture) capabilities into the overall design
  • Proven experience implementing solutions using Snowflake (Preferable) and Databricks
  • Strong background in Data Governance, Data Quality, and Master Data Management (MDM)
  • BE / B.Tech/ MS/ M.Tech/ MCA qualification
Job Responsibility:
  • Design, develop, and implement enterprise-wide data architecture, models, and integration frameworks
  • Establish and enforce data governance standards, policies, and best practices
  • Design, build and optimize data platforms using Snowflake (preferable) and/or Databricks
  • Oversee and guide complex data transformation for large and diverse datasets, ensuring Data integrity, quality, and performance
  • Drive the creation and maintenance of advanced data models, to support both analytical and operational needs
  • Ensure consistency and reusability of models across business domains and systems
  • Implement MDM, Lineage tracking, and Data cataloguing
  • Guarantee consistency between technical design and trust framework (quality, compliance, security)
  • Collaborate with business stakeholders to align data strategy with organizational objectives, document use cases, and design processes, ensuring alignment with technical implementation
  • Provide guidance to engineering teams for solution design and implementation
Employment Type: Full-time