CrawlJobs

Staff Data Engineer


Adevinta

Location:
Netherlands, Amsterdam

Contract Type:
Not provided

Salary:
Not provided

Job Description:

In the Marktplaats data and analytics teams, data is at the heart of everything we do. As a Data Engineer on the Data Platform team at Marktplaats, you will be relied on to independently develop and deliver high-quality features for our new Data/ML Platform, refactor and translate our data products, and complete various tasks to a high standard. You will be the cornerstone of the platform's reliability, scalability, and performance, working hands-on with batch and streaming data pipelines, storage solutions, and APIs that serve complex analytical and ML workloads. The role encompasses ownership of the self-serve data platform, including data collection, lake management, orchestration, processing, and distribution.

Job Responsibility:

  • Independently develop and deliver high-quality features for our new Data/ML Platform
  • Refactor and translate our data products and complete various tasks to a high standard
  • Be the cornerstone of the platform’s reliability, scalability and performance
  • Work hands-on with batch and streaming data pipelines, storage solutions, and APIs that serve complex analytical and ML workloads
  • Own the self-serve data platform, including data collection, lake management, orchestration, processing, and distribution

Requirements:

  • 10+ years of hands-on experience in Software Development/Data Engineering
  • Experience with Databricks (Lakehouse, Unity Catalog, MLflow, Mosaic AI, model serving, etc.)
  • Proven experience building cloud-native, data-intensive applications (both real-time and batch)
  • AWS experience is preferred
  • Strong background in Data Engineering to support other Data Engineers, back-end engineers, and Data Scientists in building data products and services
  • Hands-on experience building and maintaining Spark applications
  • Proficiency in Python and PySpark (Scala Spark is a plus)
  • Experience with AWS cloud usage and data management (automation, data governance, cost optimisation, delivering reliable and scalable data solutions)
  • Ability to ensure data quality, schema governance, and monitoring across pipelines
  • Experience with orchestrators such as Airflow or Databricks Workflows
  • Solid experience with containerization and orchestration technologies (e.g., Docker, Kubernetes)
  • Fundamental understanding of file formats such as Parquet and open table formats (OTFs) such as Delta Lake
  • Proficiency with an IaC tool such as Terraform or Terragrunt
  • Data validation/analysis skills and proficiency in SQL are considered foundational
  • Ability to collaborate in a small, fast-moving team with high levels of autonomy and impact
  • Strong written and verbal English communication skills, with proficiency in communicating with non-technical stakeholders

Nice to have:

Prior experience building and operating data platforms is a plus

What we offer:
  • An attractive Base Salary
  • Participation in our Short Term Incentive plan (annual bonus)
  • Work From Anywhere: Enjoy up to 20 days a year of working from anywhere
  • A 24/7 Employee Assistance Program for you and your family
  • A collaborative environment with an opportunity to explore your potential and grow
  • A range of locally relevant benefits

Additional Information:

Job Posted:
February 19, 2026

Employment Type:
Fulltime

Similar Jobs for Staff Data Engineer

Staff Data Engineer

Checkr is hiring an experienced Staff Data Engineer to join their Data Platform ...
Location:
United States, San Francisco; Denver
Salary:
166000.00 - 230000.00 USD / Year
Checkr
Expiration Date:
Until further notice
Requirements:
  • 10+ years of designing, implementing, and delivering highly scalable and performant data platforms
  • experience building large-scale (100s of Terabytes and Petabytes) data processing pipelines - batch and stream
  • experience with ETL/ELT, stream and batch processing of data at scale
  • expert level proficiency in PySpark, Python, and SQL
  • expertise in data modeling, relational databases, NoSQL (such as MongoDB) data stores
  • experience with big data technologies such as Kafka, Spark, Iceberg, Datalake, and AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
  • an understanding of Graph and Vector data stores (preferred)
  • knowledge of security best practices and data privacy concerns
  • strong problem-solving skills and attention to detail
  • experience/knowledge of data processing platforms such as Databricks or Snowflake.
Job Responsibility:
  • Architect, design, lead and build end-to-end performant, reliable, scalable data platform
  • monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
  • mentor, guide and work with junior engineers to deliver complex and next-generation features
  • partner with engineering, product, design, and other stakeholders in designing and architecting new features
  • create and maintain data pipelines and foundational datasets to support product/business needs
  • experiment with rapid MVPs and encourage validation of customer needs
  • design and build database architectures with massive and complex data
  • develop audits for data quality at scale
  • create scalable dashboards and reports to support business objectives and enable data-driven decision-making
  • troubleshoot and resolve complex issues in production environments.
What we offer:
  • A fast-paced and collaborative environment
  • learning and development allowance
  • competitive cash and equity compensation and opportunity for advancement
  • 100% medical, dental, and vision coverage
  • up to $25K reimbursement for fertility, adoption, and parental planning services
  • flexible PTO policy
  • monthly wellness stipend
  • home office stipend
  • in-office perks such as lunch four times a week, a commuter stipend, and an abundance of snacks and beverages.
Employment Type:
Fulltime

Staff Data Engineer

We are looking for a Staff Data Engineer to take ownership of Blinq’s data engin...
Location:
Australia, Sydney; Melbourne
Salary:
Not provided
Blinq Technologies
Expiration Date:
Until further notice
Requirements:
  • Extensive experience with SQL, data modeling tools like Dataform, programming languages such as Python or R, and data system architecture design
  • Hands-on expertise with a modern event data stack and familiarity with tools such as Segment, Amplitude, BigQuery, Looker Studio, Google Analytics, SSGTM, CDP and AWS/GCP
  • A strong vision for building a scalable, reliable, and cutting-edge analytics platform, with the ability to develop a roadmap to achieve this
  • A solid grounding in mathematics, statistics, and data visualisation
  • Experience in MarTech and working with data systems that support machine learning workflows is desirable
  • High proficiency in A/B testing and event system design is also highly desirable
Job Responsibility:
  • Building and optimising data pipelines which are scalable to support the collection, transformation, and loading (ETL) of data into databases, warehouses or lakes
  • Ensuring data quality and accuracy by implementing robust validation and observability processes
  • Collaborating closely with Product, Marketing, Sales and Engineering teams to align data systems with business goals and measure success effectively
  • Driving cross-functional efforts to address data integrity, reporting, and insights, while remediating inconsistencies and gaps
  • Documenting data processes, architecture, and workflows to ensure clarity and continuity across teams
What we offer:
  • Equity & ownership
  • Competitive salary & growth path
  • Generous paid time off: At least 20 days fully disconnect each year, with a flexible policy beyond that
  • Parental leave that grows with you: 12 to 26 weeks full pay, based on tenure
  • Free food: Enjoy daily breakfast and lunch at some of our offices, plus an always-stocked snack bar
Employment Type:
Fulltime

Staff Data Engineer

We’re looking for a Staff Data Engineer to own the design, scalability, and reli...
Location:
United States, San Jose
Salary:
150000.00 - 250000.00 USD / Year
Figure
Expiration Date:
Until further notice
Requirements:
  • Experience owning or architecting large-scale data platforms — ideally in EV, autonomous driving, or robotics fleet environments, where telemetry, sensor data, and system metrics are core to product decisions
  • Deep expertise in data engineering and architecture (data modeling, ETL orchestration, schema design, transformation frameworks)
  • Strong foundation in Python, SQL, and modern data stacks (dbt, Airflow, Kafka, Spark, BigQuery, ClickHouse, or Snowflake)
  • Experience building data quality, validation, and observability systems to detect regressions, schema drift, and missing data
  • Excellent communication skills — able to understand technical needs from domain experts (controls, perception, operations) and translate complex data patterns into clear, actionable insights for engineers and leadership
  • First-principles understanding of electrical and mechanical systems, including motors, actuators, encoders, and control loops
Job Responsibility:
  • Architect and evolve Figure’s end-to-end platform data pipeline — from robot telemetry ingestion to warehouse transformation and visualization
  • Improve and maintain existing ETL/ELT pipelines for scalability, reliability, and observability
  • Detect and mitigate data regressions, schema drift, and missing data via validation and anomaly-detection frameworks
  • Identify and close gaps in data coverage, ensuring high-fidelity metrics coverage across releases and subsystems
  • Define the tech stack and architecture for the next generation of our data warehouse, transformation framework, and monitoring layer
  • Collaborate with robotics domain experts (controls, perception, Guardian, fall-prevention) to turn raw telemetry into structured metrics that drive engineering/business decisions
  • Partner with fleet management, operators, and leadership to design and communicate fleet-level KPIs, trends, and regressions in clear, actionable ways
  • Enable self-service access to clean, documented datasets for engineers
  • Develop tools and interfaces that make fleet data accessible and explorable for engineers without deep data backgrounds
Employment Type:
Fulltime

Staff Data Engineer

A VC-backed retail AI scale-up is expanding its engineering team and is looking ...
Location:
United States
Salary:
Not provided
Orbis Consultants
Expiration Date:
Until further notice
Requirements:
  • 5+ years in software development and data engineering with ownership of production-grade systems
  • Proven expertise in Spark (Databricks, EMR, or similar) and scaling it in production
  • Strong knowledge of distributed computing and modern data modeling approaches
  • Solid programming skills in Python, with an emphasis on clean, maintainable code
  • Hands-on experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, Cassandra)
  • Excellent communicator who can influence and partner across teams
Job Responsibility:
  • Design and evolve distributed, cloud-based data infrastructure that supports both real-time and batch processing at scale
  • Build high-performance data pipelines that power analytics, AI/ML workloads, and integrations with third-party platforms
  • Champion data reliability, quality, and observability, introducing automation and monitoring across pipelines
  • Collaborate closely with engineering, product, and AI teams to deliver data solutions for business-critical initiatives
What we offer:
  • Fully remote
  • great equity

Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location:
United States, New York; San Francisco
Salary:
170000.00 - 210000.00 USD / Year
Taskrabbit
Expiration Date:
Until further notice
Requirements:
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility:
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer:
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support
Employment Type:
Fulltime

Software Engineer Staff - Data Scientist

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice
Requirements:
  • Masters or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with strong mathematical background
  • Proficiency in Python, R, SQL, or other programming languages for data analysis
  • Experience with data wrangling, manipulation, and visualization tools and libraries such as pandas, numpy, scikit-learn, matplotlib, seaborn, etc.
  • Knowledge of machine learning concepts and techniques such as supervised and unsupervised learning, regression, classification, clustering, dimensionality reduction, etc.
  • Familiarity with cloud computing platforms and services such as AWS, Azure, or Google Cloud
  • Strong analytical and problem-solving skills
Job Responsibility:
  • Collect, clean, and transform data from various sources and formats for model training
  • Perform exploratory data analysis and visualization to understand patterns and trends
  • Build, test, and deploy predictive models and algorithms using appropriate tools and frameworks to a production cloud environment
  • Communicate findings and recommendations to stakeholders and clients using clear and compelling reports and presentations
  • Collaborate with other data scientists, engineers, and domain experts on cross-functional projects
What we offer:
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Programs for personal and professional development
  • Inclusive environment that celebrates individual uniqueness
Employment Type:
Fulltime

Staff Software Engineer - Cloud Data Storage

Cloud Data Store (CDS) owns the storage, retrieval, and lifecycle of all workflo...
Location:
United States
Salary:
190000.00 - 265000.00 USD / Year
Temporal
Expiration Date:
Until further notice
Requirements:
  • 5 or more years of experience as an 'Arranger' and/or 'Builder/Enhancer' of highly scalable distributed systems
  • Solid computer science fundamentals in distributed systems concepts including multi-threading and concurrency
  • Experience writing concurrent code in production with languages like Go or Java or other applicable languages with skill level as 'high end of Intermediate' and/or 'Advanced' or 'Expert' levels
  • Experience building and running services on AWS
Job Responsibility:
  • Design & build distributed data systems – craft APIs, schemas, and replication paths that keep petabytes of workflow history durable and query-able. Clearly document design choices and operational knowledge to successfully deploy and run service with those features
  • Drive reliability & performance – own SLOs, create chaos-test plans, profile hot paths, and lead incident reviews
  • Technical leadership – break down roadmap epics, mentor mid-level engineers, steward design docs through RFC
  • Cross-team collaboration – partner with the Server, Cloud, and DX teams to land features end-to-end
What we offer:
  • Unlimited PTO, 12 Holidays + 2 Floating Holidays
  • 100% Premiums Coverage for Medical, Dental, and Vision
  • AD&D, LT & ST Disability, and Life Insurance (Standard & Supplemental Available)
  • Empower 401K Plan
  • Additional Perks for Learning & Development, Lifestyle Spending, In-Home Office Setup, Professional Memberships, WFH Meals, Internet Stipend and more
  • $3,600 / Year Work from Home Meals
  • $1,500 / Year Career Development & Learning
  • $1,200 / Year Lifestyle Spending Account
  • $1,000 / Year In-Home Office Setup (In addition to Temporal issued equipment)
  • $500 / Year Professional Memberships
Employment Type:
Fulltime