Databricks Platform Engineer

Supercell

Location:
Finland, Helsinki

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Are you excited about building a world-class data platform and working with cloud infrastructure that handles petabytes of data? Then let’s talk! The data we collect plays a central role in how we make decisions and improve our games to serve players in the best way possible. Our data platform is designed to make sure the data is correct, up to date, and easily accessible in a secure way.

In this position you will be responsible for ensuring that the Databricks infrastructure powering our data platform is rock solid and follows modern best practices. You will actively contribute to taking our data platform to the next level and to solving the unique challenges that arise from its massive scale.

In this role we value a versatile skill set, ranging from strong DevOps experience to an uncompromising attitude towards data quality. We are aiming to strengthen our team with experts passionate about large-scale deployments, high-load high-availability systems, and real-time data collection. We also expect you to proactively advance our Databricks tech stack and drive improvement discussions in collaboration with the rest of the team, as well as with data engineers and data analysts across the company.

Job Responsibility:

  • Own and improve the Databricks infrastructure for data collection, storage and processing
  • Implement and manage flexible access controls that don’t compromise user speed and efficiency
  • Proactively suggest and implement improvements that increase scalability, robustness and availability of data systems
  • Stay up to date with new products and services released by Databricks, experiment and help make them part of Supercell’s data platform
  • Participate in a 24/7 on-call rotation to maintain batch and real-time data infrastructure
  • Contribute to common data tooling to enhance engineering productivity
  • Together with the rest of the team, develop vision and strategy for the data platform

Requirements:

  • 5+ years of experience in designing, developing and maintaining large-scale data platforms in a complex enterprise environment
  • In-depth experience with Databricks infrastructure and services (see the sketch after this list)
  • Extensive Infrastructure as Code experience (preferably Terraform)
  • Software development experience (preferably Java or Python)
  • Strong collaboration and communication skills
  • Ability to innovate and work independently
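
By way of illustration for the Databricks and Python requirements above, here is a minimal, hedged sketch that lists workspace clusters with the Databricks SDK for Python; authentication is assumed to come from the standard DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, and the snippet is a sketch, not part of the role's actual tooling.

```python
# Minimal sketch: enumerate clusters in a Databricks workspace with the
# Databricks SDK for Python. Assumes DATABRICKS_HOST / DATABRICKS_TOKEN
# are set in the environment.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
for cluster in w.clusters.list():
    # cluster_name and state are fields on the returned cluster details
    print(cluster.cluster_name, cluster.state)
```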

What we offer:
  • Relocation support for you and your family (including pets)
  • Compensation and benefits structured to help you enjoy your time
  • Work environment and resources to succeed while having fun

Additional Information:

Job Posted:
December 12, 2025

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Databricks Platform Engineer

Senior Software Engineer, Data Platform

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location
United States, San Mateo
Salary:
180000.00 - 240000.00 USD / Year
Skydio
Expiration Date
Until further notice
Requirements
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (we use Python)
  • Comfort working across the full data stack — from ingestion and transformation to orchestration and visualization
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics
Job Responsibility
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones
  • Build and evolve our Databricks and Palantir Foundry environments to empower every Skydian to query data, define jobs, and build dashboards
  • Develop data systems that make our products truly data-driven — from predictive analytics that anticipate hardware failures, to 3D connectivity mapping, to in-depth flight telemetry analysis
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring, observability, and contributing to the on-call rotation for critical data systems
What we offer
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan

Databricks Engineer

Our client is revolutionizing the field of cell therapy manufacturing by develop...
Location
Not provided
Salary:
Not provided
Coherent Solutions
Expiration Date
Until further notice
Requirements
  • 5+ years of experience in Data Engineering with strong technical expertise
  • Proven hands-on experience with the Databricks Data Platform and Delta Lake
  • Experience building and managing Databricks Lakehouse solutions
  • Knowledge of Delta Live Tables or similar frameworks for real-time data ingestion is a strong plus
  • Ability to define processes from scratch and establish development workflows in a new or evolving team
  • Familiarity with data testing best practices and collaboration with QA teams to ensure data quality
  • Strong problem-solving mindset, initiative, and readiness to work in a dynamic, evolving environment
  • Ability to work with a time shift, ensuring overlap with the client until approximately 10:30 AM Pacific Time for meetings and collaboration
  • English level: Upper-Intermediate (written and spoken)
Job Responsibility
  • Design, build, and maintain data pipelines using Databricks and Delta Live Tables for real-time and batch data processing
  • Collaborate with cross-functional teams to ensure smooth data flow from diverse log-based sources
  • Participate in both individual and collaborative work, ensuring scalability, reliability, and performance of data solutions
  • Define and implement best practices for data development and deployment processes on the Databricks platform
  • Proactively address technical challenges in a project environment, proposing and implementing effective solutions
What we offer
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced colleague to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Software Engineer - Platform

We build simple yet innovative consumer products and developer APIs that shape h...
Location
United States, New York
Salary:
163200.00 - 223200.00 USD / Year
Plaid
Expiration Date
Until further notice
Requirements
  • 2 to 4 years of software engineering experience, with a proven track record of building and shipping complex backend systems or platforms
  • Experience designing and scaling distributed systems is highly desired
  • Proficiency in at least one general-purpose programming language (e.g. Go, Python, Java, C++)
  • Experience with Go is a plus
  • Deep understanding of system design and algorithms
  • Hands-on experience with designing, building, and operating distributed systems or microservices architectures at scale
  • Familiarity with relational and NoSQL database technologies (for example, MySQL/TiDB, PostgreSQL, MongoDB) and data storage architectures
  • Experience building data pipelines or working with big data processing frameworks (Spark, Databricks, etc.) is a plus
  • Excellent communication and teamwork skills
Job Responsibility
  • Design & Develop Scalable Systems: Build and maintain core platform services with a focus on performance, reliability, and scalability
  • Infrastructure & Data Platforms: Develop and improve infrastructure for data storage and processing
  • Developer Productivity Tools: Create internal tools, frameworks, and automation to improve developer productivity and efficiency
  • Security & Privacy by Design: Integrate security, privacy, and compliance best practices into our platforms
  • Cross-Team Collaboration: Work hand-in-hand with product engineers and other stakeholders to understand requirements and translate them into reliable platform capabilities
  • Technical Excellence & Leadership: Uphold high engineering standards through code reviews, testing, and documentation
What we offer
  • medical, dental, vision, and 401(k)

Software Engineer - Platform

We build simple yet innovative consumer products and developer APIs that shape h...
Location
United States, San Francisco
Salary:
163200.00 - 223200.00 USD / Year
Plaid
Expiration Date
Until further notice
Requirements
  • 2 to 4 years of software engineering experience, with a proven track record of building and shipping complex backend systems or platforms
  • Experience designing and scaling distributed systems is highly desired
  • Proficiency in at least one general-purpose programming language (e.g. Go, Python, Java, C++)
  • Experience with Go is a plus
  • Deep understanding of system design and algorithms
  • Hands-on experience with designing, building, and operating distributed systems or microservices architectures at scale
  • Ability to debug complex issues in a production environment and optimize system performance and reliability
  • Familiarity with relational and NoSQL database technologies (for example, MySQL/TiDB, PostgreSQL, MongoDB) and data storage architectures
  • Experience building data pipelines or working with big data processing frameworks (Spark, Databricks, etc.) is a plus
  • Excellent communication and teamwork skills, with the ability to work effectively in a cross-functional environment
Job Responsibility
  • Design & Develop Scalable Systems: Build and maintain core platform services with a focus on performance, reliability, and scalability
  • Infrastructure & Data Platforms: Develop and improve infrastructure for data storage and processing
  • Developer Productivity Tools: Create internal tools, frameworks, and automation to improve developer productivity and efficiency
  • Security & Privacy by Design: Integrate security, privacy, and compliance best practices into our platforms
  • Cross-Team Collaboration: Work hand-in-hand with product engineers and other stakeholders to understand requirements and translate them into reliable platform capabilities
  • Technical Excellence & Leadership: Uphold high engineering standards through code reviews, testing, and documentation
What we offer
  • medical
  • dental
  • vision
  • 401(k)

Machine Learning Platform / Backend Engineer

We are seeking a Machine Learning Platform/Backend Engineer to design, build, an...
Location
Serbia, Belgrade; Romania, Timișoara
Salary:
Not provided
Everseen
Expiration Date
Until further notice
Requirements
  • 4-5+ years of work experience in either ML infrastructure, MLOps, or Platform Engineering
  • Bachelor's degree or equivalent with a focus on computer science is preferred
  • Excellent communication and collaboration skills
  • Expert knowledge of Python
  • Experience with CI/CD tools (e.g., GitLab, Jenkins)
  • Hands-on experience with Kubernetes, Docker, and cloud services
  • Understanding of ML training pipelines, data lifecycle, and model serving concepts
  • Familiarity with workflow orchestration tools (e.g., Airflow, Kubeflow, Ray, Vertex AI, Azure ML)
  • A demonstrated understanding of the ML lifecycle, model versioning, and monitoring
  • Experience with ML frameworks (e.g., TensorFlow, PyTorch)
Job Responsibility
  • Design, build, and maintain scalable infrastructure that empowers data scientists and machine learning engineers
  • Own the design and implementation of the internal ML platform, enabling end-to-end workflow orchestration, resource management, and automation using cloud-native technologies (GCP/Azure)
  • Design and manage Kubernetes-based infrastructure for multi-tenant GPU and CPU workloads with strong isolation, quota control, and monitoring
  • Integrate and extend orchestration tools (Airflow, Kubeflow, Ray, Vertex AI, Azure ML or custom schedulers) to automate data processing, training, and deployment pipelines
  • Develop shared services for model behavior/performance tracking, data and dataset versioning, and artifact management (MLflow, DVC, or custom registries; see the sketch after this list)
  • Build out documentation in relation to architecture, policies and operations runbooks
  • Share skills, knowledge, and expertise with members of the data engineering team
  • Foster a culture of collaboration and continuous learning by organizing training sessions, workshops, and knowledge-sharing sessions
  • Collaborate and drive progress with cross-functional teams to design and develop new features and functionalities
  • Ensure that the developed solutions meet project objectives and enhance user experience
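
As a hedged sketch of the metric and artifact tracking mentioned in the list above, a minimal MLflow example; the run name, values, and artifact file are illustrative assumptions, not details from the posting.

```python
# Hedged sketch of experiment/artifact tracking with MLflow.
import mlflow

with mlflow.start_run(run_name="example-training-run"):
    mlflow.log_param("learning_rate", 1e-3)   # record a hyperparameter
    mlflow.log_metric("val_accuracy", 0.93)   # record a model metric
    # log_artifact assumes this file exists locally
    mlflow.log_artifact("model_card.md")
```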

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location
Romania, Bucharest
Salary:
Not provided
Inetum
Expiration Date
Until further notice
Requirements
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum; see the sketch after this list)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
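
For the Delta Lake operations named in the list above, a minimal hedged sketch in PySpark; `spark` is the session Databricks provides, and `silver.customers` and the `updates` view are hypothetical names used only for illustration.

```python
# Hedged sketch of Delta Lake maintenance operations on Databricks.
spark.sql("""
    MERGE INTO silver.customers AS t
    USING updates AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")                                                   # upsert changed rows
spark.sql("OPTIMIZE silver.customers")                 # compact small files
spark.sql("VACUUM silver.customers RETAIN 168 HOURS")  # drop stale file versions
```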
Job Responsibility
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
What we offer
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings

Databricks Engineer

We are seeking a Databricks Engineer to design, build, and operate a Data & AI p...
Location
United States, Leesburg
Salary:
Not provided
WINTrio
Expiration Date
Until further notice
Requirements
  • Hands-on experience with Databricks, Delta Lake, and Apache Spark
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms
  • Familiarity with data governance, lineage tracking, and metadata management tools
Job Responsibility
  • Data & AI Platform Engineering (Databricks-Centric): Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted, curated (silver) and analytics-ready (gold) data layers (see the sketch after this list)
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation
  • Apply schema evolution and data versioning to support agile data development
  • Platform Integration & Data Ingestion: Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data
  • Design standardized data ingestion processes with automated error handling, retries, and alerting
  • Data Quality, Monitoring, and Governance: Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement
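
A minimal sketch of the Bronze-to-Silver step in the Medallion flow described in the list above, assuming a Databricks-provided `spark` session; the table names and transformation rules are illustrative assumptions.

```python
# Hedged Bronze -> Silver sketch in the Medallion architecture.
from pyspark.sql import functions as F

bronze = spark.read.table("bronze.raw_events")           # raw ingested layer
silver = (
    bronze
    .dropDuplicates(["event_id"])                        # de-duplicate records
    .filter(F.col("event_id").isNotNull())               # basic quality rule
    .withColumn("event_ts", F.to_timestamp("event_ts"))  # normalize types
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```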

Senior Principal Data Platform Software Engineer

We’re looking for a Sr Principal Data Platform Software Engineer (P70) to be a k...
Location
Not provided
Salary:
239400.00 - 312550.00 USD / Year
Atlassian
Expiration Date
Until further notice
Requirements
  • 15+ years in Data Engineering, Software Engineering, or related roles, with substantial exposure to big data ecosystems
  • Demonstrated experience building and operating data platforms or large‑scale data services in production
  • Proven track record of building services from the ground up (requirements → design → implementation → deployment → ongoing ownership)
  • Hands‑on experience with AWS and GCP (e.g., compute, storage, data, and streaming services) and cloud‑native architectures
  • Practical experience with big data technologies, such as Databricks, Apache Spark, AWS EMR, Apache Flink, or StarRocks
  • Strong programming skills in one or more of: Kotlin, Scala, Java, Python
  • Experience leading cross‑team technical initiatives and influencing senior stakeholders
  • Experience mentoring Staff/Principal engineers and lifting the technical bar for a team or org
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience
Job Responsibility
  • Design, develop and own delivery of high quality big data and analytical platform solutions aiming to solve Atlassian’s needs to support millions of users with optimal cost, minimal latency and maximum reliability
  • Improve and operate large‑scale distributed data systems in the cloud (primarily AWS, with increasing integration with GCP and Kubernetes‑based microservices)
  • Drive the evolution of our high-performance analytical databases and their integrations with products, cloud infrastructures (AWS and GCP) and isolated cloud environments
  • Help define and uplift engineering and operational standards for petabyte scale data platforms, with sub‑second analytic queries and multi‑region availability (coding guidelines, code review practices, observability, incident response, SLIs/SLOs)
  • Partner across multiple product and platform teams (including Analytics, Marketplace/Ecosystem, Core Data Platform, ML Platform, Search, and Oasis/FedRAMP) to deliver company‑wide initiatives that depend on reliable, high‑quality data
  • Act as a technical mentor and multiplier, raising the bar on design quality, code quality, and operational excellence across the broader team
  • Design and implement self‑healing, resilient data platforms with strong observability, fault tolerance, and recovery characteristics
  • Own the long‑term architecture and technical direction of Atlassian’s product data platform with projects that are directly tied to Atlassian’s company-level OKRs
  • Be accountable for the reliability, cost efficiency, and strategic direction of Atlassian’s product analytical data platform
  • Partner with executives and influence senior leaders to align engineering efforts with Atlassian’s long-term business objectives
What we offer
  • health and wellbeing resources
  • paid volunteer days