Databricks Platform Engineer

Signify Technology

Location:
United States, Iselin

Contract Type:
B2B

Salary:
Not provided

Job Responsibility:

  • Collaborate with stakeholders during requirements clarification and sprint planning to ensure alignment with business objectives
  • Design and implement technical solutions on the Lakehouse platform (Databricks), including:
    • Prototyping new Databricks capabilities
    • Exposing these capabilities to support the Data Products strategy and the Data & AI ecosystem
  • Integrate data platforms with enterprise tools, including:
    • Incident and monitoring systems (e.g., ServiceNow)
    • Identity management solutions
    • Data observability tools (e.g., Dynatrace)
  • Develop and maintain unit and integration tests to ensure quality and resilience
  • Support QA teams during acceptance testing
  • Act as a third-line engineer for production incidents, ensuring system stability and uptime
  • Collaborate with cloud and infrastructure teams to deliver secure, scalable, and reliable solutions

Requirements:

  • Expert knowledge of Databricks
  • Proficient in PySpark for distributed computing (a minimal sketch follows this list)
  • Proficient in Python for library development
  • Advanced SQL skills for complex query optimisation (e.g., Oracle, MS SQL)
  • Experience with Git for version control
  • Familiarity with incident management and monitoring tools (e.g., ServiceNow, Prometheus, Grafana)
  • Knowledge of scheduling tools (e.g., Stonebranch, Control-M, Airflow)
  • Proficiency in data quality frameworks (e.g., Great Expectations, ideally Monte Carlo)
  • Solid understanding of cloud infrastructure fundamentals (DNS, certificates, identity, load balancing)
  • Comfortable with sprint planning, stand-ups, and retrospectives
  • Skilled in Azure DevOps for project management
  • Strong debugging and troubleshooting skills for complex data engineering issues
  • Exceptional written and verbal skills, able to explain technical concepts to non-technical stakeholders
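
To ground the PySpark and data-quality requirements above, here is a minimal, hypothetical sketch of the kind of pipeline work this role describes: a distributed daily rollup over an assumed bronze.orders table, with a simple fail-fast check standing in for a data quality framework such as Great Expectations. All table and column names are illustrative, not taken from the posting.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_rollup").getOrCreate()

# Assumed source table; replace with a real catalog path
orders = spark.read.table("bronze.orders")

# Fail-fast quality gate before aggregating (a stand-in for a
# dedicated data quality framework such as Great Expectations)
null_keys = orders.filter(F.col("order_id").isNull()).count()
if null_keys:
    raise ValueError(f"{null_keys} rows with a null order_id")

# Distributed daily rollup, executed across the cluster by Spark
daily = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
)

daily.write.mode("overwrite").saveAsTable("silver.daily_orders")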

Additional Information:

Job Posted:
January 20, 2026

Work Type:
Remote work

Similar Jobs for Databricks Platform Engineer

Databricks Platform Engineer

Are you excited about building a world-class data platform and working with clou...
Location:
Finland, Helsinki
Salary:
Not provided
Supercell
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in designing, developing and maintaining a large-scale data platform in a complex enterprise environment
  • In-depth experience with Databricks infrastructure and services
  • Extensive Infrastructure as Code experience (preferably Terraform)
  • Software development experience (preferably Java or Python)
  • Strong collaboration and communication skills
  • Ability to innovate and work independently
Job Responsibility:
  • Own and improve the Databricks infrastructure for data collection, storage and processing
  • Implement and manage flexible access controls that don’t compromise user speed and efficiency
  • Proactively suggest and implement improvements that increase scalability, robustness and availability of data systems
  • Stay up to date with new products and services released by Databricks, experiment and help make them part of Supercell’s data platform
  • Participate in 24/7 on-call to maintain batch and real-time data infrastructure
  • Contribute to common data tooling to enhance engineering productivity
  • Together with the rest of the team, develop vision and strategy for the data platform
What we offer:
  • Relocation support for you and your family (including pets)
  • Compensation and benefits structured to help you enjoy your time
  • Work environment and resources to succeed while having fun
  • Full-time

Senior Software Engineer, Data Platform

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location:
United States, San Mateo
Salary:
180000.00 - 240000.00 USD / Year
Skydio
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks: building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (we use Python)
  • Comfort working across the full data stack — from ingestion and transformation to orchestration and visualization
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics
Job Responsibility:
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones (a minimal streaming sketch follows this list)
  • Build and evolve our Databricks and Palantir Foundry environments to empower every Skydian to query data, define jobs, and build dashboards
  • Develop data systems that make our products truly data-driven — from predictive analytics that anticipate hardware failures, to 3D connectivity mapping, to in-depth flight telemetry analysis
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring, observability, and contributing to the on-call rotation for critical data systems
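
As a hedged illustration of the telemetry work above (a sketch under assumptions, not Skydio's actual code), a Spark Structured Streaming rollup over a hypothetical bronze Delta table could look like this; every table, column, and checkpoint path here is an assumption.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical raw telemetry stream already landed in a bronze Delta table
telemetry = spark.readStream.table("bronze.flight_telemetry")

# Per-minute battery rollup per drone, tolerating 5 minutes of late data
rollup = (
    telemetry
    .withWatermark("event_ts", "5 minutes")
    .groupBy(F.window("event_ts", "1 minute"), "drone_id")
    .agg(F.avg("battery_pct").alias("avg_battery_pct"))
)

# Continuous append to a silver table; checkpointing makes the stream restartable
(rollup.writeStream
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/battery_rollup")
    .toTable("silver.battery_rollup"))
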
What we offer:
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan
  • Full-time

Databricks Engineer

Our client is revolutionizing the field of cell therapy manufacturing by develop...
Location:
Not provided
Salary:
Not provided
Coherent Solutions
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in Data Engineering with strong technical expertise
  • Proven hands-on experience with the Databricks Data Platform and Delta Lake
  • Experience building and managing Databricks Lakehouse solutions
  • Knowledge of Delta Live Tables or similar frameworks for real-time data ingestion is a strong plus
  • Ability to define processes from scratch and establish development workflows in a new or evolving team
  • Familiarity with data testing best practices and collaboration with QA teams to ensure data quality
  • Strong problem-solving mindset, initiative, and readiness to work in a dynamic, evolving environment
  • Ability to work with a time shift, ensuring overlap with the client until approximately 10:30 AM Pacific Time for meetings and collaboration
  • English level: Upper-Intermediate (written and spoken)
Job Responsibility:
  • Design, build, and maintain data pipelines using Databricks and Delta Live Tables for real-time and batch data processing (a minimal DLT sketch follows this list)
  • Collaborate with cross-functional teams to ensure smooth data flow from diverse log-based sources
  • Participate in both individual and collaborative work, ensuring scalability, reliability, and performance of data solutions
  • Define and implement best practices for data development and deployment processes on the Databricks platform
  • Proactively address technical challenges in a project environment, proposing and implementing effective solutions
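
As a minimal sketch of the Delta Live Tables work described above (assumptions, not the client's actual pipeline), a declarative DLT step with a data-quality expectation might look like this; the table and column names are hypothetical.

import dlt  # available inside a Databricks Delta Live Tables pipeline
from pyspark.sql import functions as F

@dlt.table(comment="Cleaned log events (silver layer)")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def silver_log_events():
    # 'bronze_log_events' is an assumed upstream DLT table
    return (
        dlt.read_stream("bronze_log_events")
        .withColumn("event_date", F.to_date("event_ts"))
    )
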
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Software Engineer - Platform

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
United States, New York
Salary:
163200.00 - 223200.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 2 to 4 years of software engineering experience, with a proven track record of building and shipping complex backend systems or platforms
  • Experience designing and scaling distributed systems is highly desired
  • Proficiency in at least one general-purpose programming language (e.g. Go, Python, Java, C++)
  • Experience with Go is a plus
  • Deep understanding of system design and algorithms
  • Hands-on experience with designing, building, and operating distributed systems or microservices architectures at scale
  • Familiarity with relational and NoSQL database technologies (for example, MySQL/TiDB, PostgreSQL, MongoDB) and data storage architectures
  • Experience building data pipelines or working with big data processing frameworks (Spark, Databricks, etc.) is a plus
  • Excellent communication and teamwork skills
Job Responsibility:
  • Design & Develop Scalable Systems: Build and maintain core platform services with a focus on performance, reliability, and scalability
  • Infrastructure & Data Platforms: Develop and improve infrastructure for data storage and processing
  • Developer Productivity Tools: Create internal tools, frameworks, and automation to improve developer productivity and efficiency
  • Security & Privacy by Design: Integrate security, privacy, and compliance best practices into our platforms
  • Cross-Team Collaboration: Work hand-in-hand with product engineers and other stakeholders to understand requirements and translate them into reliable platform capabilities
  • Technical Excellence & Leadership: Uphold high engineering standards through code reviews, testing, and documentation
What we offer:
  • Medical, dental, vision, and 401(k)
  • Full-time

Software Engineer - Platform

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
United States, San Francisco
Salary:
163200.00 - 223200.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 2 to 4 years of software engineering experience, with a proven track record of building and shipping complex backend systems or platforms
  • Experience designing and scaling distributed systems is highly desired
  • Proficiency in at least one general-purpose programming language (e.g. Go, Python, Java, C++)
  • Experience with Go is a plus
  • Deep understanding of system design and algorithms
  • Hands-on experience with designing, building, and operating distributed systems or microservices architectures at scale
  • Ability to debug complex issues in a production environment and optimize system performance and reliability
  • Familiarity with relational and NoSQL database technologies (for example, MySQL/TiDB, PostgreSQL, MongoDB) and data storage architectures
  • Experience building data pipelines or working with big data processing frameworks (Spark, Databricks, etc.) is a plus
  • Excellent communication and teamwork skills, with the ability to work effectively in a cross-functional environment
Job Responsibility:
  • Design & Develop Scalable Systems: Build and maintain core platform services with a focus on performance, reliability, and scalability
  • Infrastructure & Data Platforms: Develop and improve infrastructure for data storage and processing
  • Developer Productivity Tools: Create internal tools, frameworks, and automation to improve developer productivity and efficiency
  • Security & Privacy by Design: Integrate security, privacy, and compliance best practices into our platforms
  • Cross-Team Collaboration: Work hand-in-hand with product engineers and other stakeholders to understand requirements and translate them into reliable platform capabilities
  • Technical Excellence & Leadership: Uphold high engineering standards through code reviews, testing, and documentation
What we offer:
  • Medical
  • Dental
  • Vision
  • 401(k)
  • Full-time

Machine Learning Platform / Backend Engineer

We are seeking a Machine Learning Platform/Backend Engineer to design, build, an...
Location:
Serbia, Belgrade; Romania, Timișoara
Salary:
Not provided
Everseen
Expiration Date:
Until further notice
Requirements:
  • 4-5+ years of work experience in ML infrastructure, MLOps, or Platform Engineering
  • Bachelor's degree or equivalent, preferably in computer science
  • Excellent communication and collaboration skills
  • Expert knowledge of Python
  • Experience with CI/CD tools (e.g., GitLab, Jenkins)
  • Hands-on experience with Kubernetes, Docker, and cloud services
  • Understanding of ML training pipelines, data lifecycle, and model serving concepts
  • Familiarity with workflow orchestration tools (e.g., Airflow, Kubeflow, Ray, Vertex AI, Azure ML)
  • A demonstrated understanding of the ML lifecycle, model versioning, and monitoring
  • Experience with ML frameworks (e.g., TensorFlow, PyTorch)
Job Responsibility:
  • Design, build, and maintain scalable infrastructure that empowers data scientists and machine learning engineers
  • Own the design and implementation of the internal ML platform, enabling end-to-end workflow orchestration, resource management, and automation using cloud-native technologies (GCP/Azure)
  • Design and manage Kubernetes-based infrastructure for multi-tenant GPU and CPU workloads with strong isolation, quota control, and monitoring
  • Integrate and extend orchestration tools (Airflow, Kubeflow, Ray, Vertex AI, Azure ML or custom schedulers) to automate data processing, training, and deployment pipelines
  • Develop shared services for model behavior/performance tracking, data/dataset versioning, and artifact management (MLflow, DVC, or custom registries; a minimal MLflow sketch follows this list)
  • Build out documentation covering architecture, policies, and operations runbooks
  • Share skills, knowledge, and expertise with members of the data engineering team
  • Foster a culture of collaboration and continuous learning by organizing training sessions, workshops, and knowledge-sharing sessions
  • Collaborate and drive progress with cross-functional teams to design and develop new features and functionalities
  • Ensure that the developed solutions meet project objectives and enhance user experience
  • Full-time
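
As a small, hedged illustration of the experiment-tracking service mentioned above, this MLflow snippet logs parameters and a metric for one training run; the experiment path and all values are placeholders.

import mlflow

# Hypothetical experiment path on the tracking server
mlflow.set_experiment("/Shared/platform-demo")

with mlflow.start_run(run_name="baseline"):
    # Placeholder hyperparameters and result; real training would go between
    mlflow.log_params({"learning_rate": 0.01, "epochs": 10})
    mlflow.log_metric("val_accuracy", 0.91)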

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs (a minimal Delta maintenance sketch follows this list)
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
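
To make the Delta Lake maintenance duties above concrete, here is a minimal, hypothetical PySpark sketch of an upsert (MERGE) followed by routine OPTIMIZE and VACUUM; all table names are illustrative.

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Assumed staging table holding the latest customer records
updates = spark.read.table("silver.customer_updates")

# Upsert into an assumed gold table with an ACID MERGE
target = DeltaTable.forName(spark, "gold.customers")
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Routine maintenance: compact small files, then drop old snapshots
spark.sql("OPTIMIZE gold.customers")
spark.sql("VACUUM gold.customers RETAIN 168 HOURS")
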
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings
  • Full-time

Databricks Engineer

We are seeking a Databricks Engineer to design, build, and operate a Data & AI p...
Location:
United States, Leesburg
Salary:
Not provided
WINTrio
Expiration Date:
Until further notice
Requirements:
  • Hands-on experience with Databricks, Delta Lake, and Apache Spark
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms
  • Familiarity with data governance, lineage tracking, and metadata management tools
Job Responsibility:
  • Data & AI Platform Engineering (Databricks-Centric): Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation
  • Apply schema evolution and data versioning to support agile data development
  • Platform Integration & Data Ingestion: Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks (a minimal ingestion sketch follows this list)
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data
  • Design standardized data ingestion processes with automated error handling, retries, and alerting
  • Data Quality, Monitoring, and Governance: Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement
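
A hedged sketch of the standardized ingestion pattern described above, using Databricks Auto Loader with schema tracking and additive schema evolution; the source system, paths, and table names are assumptions, not details from the posting.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental file ingestion with Auto Loader; the inferred schema is
# tracked at an assumed location so new columns can evolve safely
raw = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/peoplesoft")
    .load("/mnt/landing/peoplesoft"))

# Write to an assumed bronze table, allowing additive schema changes
(raw.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/bronze_peoplesoft")
    .option("mergeSchema", "true")
    .trigger(availableNow=True)
    .toTable("bronze.peoplesoft"))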