Data Platform Engineer - OLAP

Adyen

Location:
Netherlands, Amsterdam

Contract Type:
Not provided

Salary:

Not provided

Job Description:

Sitting at the intersection of Data Engineering, Backend Engineering, and Systems Engineering, Data Platform Engineers at Adyen build the foundational layer of tooling and processes for our on-premise Analytical Data Platforms. These tools support tens of products, hundreds of developers, and thousands of daily jobs that add to Adyen’s strong portfolio of capabilities. We’re looking for an expert with deep knowledge of distributed systems to focus on our internal Online Analytical Processing (OLAP) ecosystem. You’ll collaborate with Data and ML Engineers to continuously improve this ecosystem. You’ll also collaborate with other platform engineers to position this ecosystem properly within the larger Data/AI/ML Platform capabilities, powered by Hadoop, Kubernetes, Spark, Trino, Flink, and Ray.

Job Responsibility:

  • Performance at Scale: Develop and maintain high-performance OLAP systems, supporting multi-tenant query workloads and ingestion pipelines with real Big Data scale
  • Reliability: Work with system reliability in mind, ensuring high availability for business-critical analytical products through observability and engineering excellence
  • Productize the Platform: Build self-service tooling that enables Data Engineers and Analysts to independently manage their data assets and diagnose issues
  • Data Quality: Engineer automated frameworks to validate data integrity and leverage metadata-driven tools to enhance data discoverability, lineage, and cataloging across the ecosystem
  • Ecosystem Integration: Architect seamless integrations of the OLAP ecosystem with adjacent distributed systems (e.g. storage, messaging, and batch / stream processing systems)
  • Efficiency & Governance: Monitor and optimize cluster resource-efficiency while making sure the platform adheres to global security and data privacy standards

Requirements:

  • Fluency in Python and/or Java
  • Team player with strong communication skills
  • Ability to work closely with diverse stakeholders you enable (analysts, data scientists, data engineers, etc.) and depend upon (infrastructure, security, etc.)
  • Experience with OLAP technologies such as Druid, ClickHouse, Pinot, Doris, StarRocks, etc.
  • Experience in CI/CD pipelines, for code and infrastructure automation
  • Experience in Kubernetes
  • Experience in infrastructure and large-scale private cloud systems
  • Additional experience developing and maintaining other distributed data and compute systems (e.g. Spark, Trino)
  • Data modelling for databases
  • Real-time and batch data pipelines (via Kafka, Spark Streaming), with an eye for frameworks and an emphasis on user-friendliness and quality

Nice to have:

Experience with Golang or Rust is also appreciated

Additional Information:

Job Posted:
March 05, 2026

Work Type:
On-site work

Similar Jobs for Data Platform Engineer - OLAP

Principal Data Engineer

We are on the lookout for a Principal Data Engineer to help define and lead the ...
Location:
United Kingdom
Salary:
Not provided
Dotdigital
Expiration Date
Until further notice
Requirements:
  • Extensive experience delivering Python-based projects in the data engineering space
  • Extensive experience working with SQL and NoSQL database technologies (e.g. SQL Server, MongoDB & Cassandra)
  • Proven experience with modern data warehousing and large-scale data processing tools (e.g. Snowflake, dbt, BigQuery, ClickHouse)
  • Hands-on experience with data orchestration tools like Airflow, Dagster or Prefect
  • Experience using cloud environments (e.g. Azure, AWS, GCP) to process, store and surface large scale data
  • Experience using Kafka or similar event-based architectures (e.g. GCP Pub/Sub, AWS SQS, Azure Event Hubs, AWS Kinesis)
  • Strong grasp of data architecture and data modelling principles for both OLAP and OLTP workloads
  • Capable in the wider software development lifecycle in terms of agile ways of working and continuous integration/deployment of data solutions
  • Experience as a lead or Principal Engineer on large-scale data initiative or product builds
  • Demonstrated ability to architect data systems and data structures for high volume, high throughput systems
Job Responsibility:
  • Lead the design and implementation of scalable, secure and resilient data systems across streaming, batch and real-time use cases
  • Architect data pipelines, models and storage solutions that power analytical and product use cases, using primarily Python and SQL via orchestration tooling that runs workloads in the cloud
  • Leverage AI to automate both data processing and engineering processes
  • Assure and drive best practices relating to data infrastructure, governance, security and observability
  • Work with technologists across multiple teams to deliver coherent features and data outcomes
  • Support the data team to help adopt data engineering principles
  • Identify, validate and promote new tools and technologies that improve the performance and stability of data services
What we offer:
  • Parental leave
  • Medical benefits
  • Paid sick leave
  • Dotdigital day
  • Share reward
  • Wellbeing reward
  • Wellbeing Days
  • Loyalty reward
  • Fulltime

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location:
United States, Glendale
Salary:
Not provided
Enormous Enterprise
Expiration Date
Until further notice
Requirements:
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs. OLAP environments
  • Candidates must work on W2; no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
  • Airflow
Job Responsibility:
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer:
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Data Engineer

This is a data engineer position - a programmer responsible for the design, deve...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 5-8 years of experience working in data ecosystems
  • 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Job Responsibility:
  • Ensuring high quality software development, with complete documentation and traceability
  • Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large scale financial data
  • Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
  • Ensure efficient data storage and retrieval using Big Data
  • Implement best practices for spark performance tuning including partition, caching and memory management
  • Maintain high code quality through testing, CI/CD pipelines and version control (Git, Jenkins)
  • Work on batch processing frameworks for Market risk analytics
  • Promoting unit/functional testing and code inspection processes
  • Work with business stakeholders and Business Analysts to understand the requirements
  • Work with other data scientists to understand and interpret complex datasets
  • Fulltime

Data Engineering Lead

The Data Engineering Lead is a strategic professional who stays abreast of developments...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data
  • Fulltime

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...
Location:
Singapore
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
What we offer:
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits
  • Fulltime

Platform Software Engineer

We reshaped bookkeeping to fit the e-comm needs: with Finaloop, customers get fl...
Location:
Israel, Tel Aviv
Salary:
Not provided
Finaloop
Expiration Date
Until further notice
Requirements:
  • 6+ years of experience in server-side development and distributed systems
  • Excellent knowledge of software and application design and architecture
  • Experience with different backend architectures and approaches
  • Experience in working with different types of data storage technologies and approaches (e.g. OLAP, OLTP, relational, document, etc.)
  • Proven track record of working efficiently in a fast-paced intensive startup environment
  • Strong communication and collaboration skills
Job Responsibility:
  • Own the most fundamental components of our system, whether cloud infrastructure or the common software solutions used by developers
  • Developing system-wide solutions to be used by the product development teams
  • Developing scalable long-term solutions
  • Detecting opportunities to improve R&D efficiency and the reliability of our systems by improving and expanding the common infrastructure of our product
  • Researching new technologies and solutions with the potential to be incorporated into our product

Sr. Staff Software Engineer - Advanced Analytics Platform

At DISQO, we’re redefining how companies turn data into decisions. Our mission i...
Location:
United States, Los Angeles, Glendale
Salary:
200000.00 - 240000.00 USD / Year
DISQO
Expiration Date
Until further notice
Requirements:
  • 12+ years of professional software engineering experience
  • 5+ years architecting or building high-performance data systems or analytics platforms
  • 3+ years of production Rust experience
  • Deep expertise in Rust and strong experience in Java
  • Proven track record building large-scale data analytics or OLAP systems from the ground up
  • Deep understanding of columnar data engines, vectorized execution, and query/dataframe optimization
  • Hands-on experience with performance engineering, profiling, and hardware-aware optimization
  • Strong expertise with AWS - designing, deploying, and optimizing large-scale data and compute systems in the cloud
  • A systems-thinking mindset
  • Thrives in a fast-moving, startup environment
Job Responsibility:
  • Architect and deliver a high-performance Advanced Analytics Engine
  • Design and build an Agentic AI system that leverages this Advanced Analytics Engine
  • Partner with product, engineering and data teams to power agentic AI analytics systems
  • Profile, benchmark, and optimize Rust components
  • Leverage AWS cloud services to architect scalable, reliable, and cost-efficient analytics infrastructure
  • Shape the evolution of DISQO’s broader data platform and its integration across our product ecosystem
  • Mentor and guide engineers
  • Contribute to open-source or internal frameworks that advance analytical systems and distributed computation
What we offer:
  • 100% covered Medical/Dental/Vision for employee
  • Equity
  • 401K
  • Generous PTO policy
  • Flexible workplace policy
  • Team offsites, social events & happy hours
  • Life Insurance
  • Health FSA
  • Commuter FSA (for hybrid employees)
  • Catered lunch and fully stocked kitchen
  • Fulltime

Solutions Architect

Lead the design and implementation of scalable, secure, and high-performing data...
Location:
India, Bangalore
Salary:
Not provided
Arrow Electronics
Expiration Date
Until further notice
Requirements:
  • At least 10 years of experience in enterprise data architecture, including design and implementation of large-scale data platforms
  • 5+ years of relevant data engineering experience on Databricks
  • Strong expertise in Databricks (Workspace, Clusters, Jobs, Repos, Delta Live Tables)
  • Deep hands-on expertise with Azure Databricks, PySpark, Delta Lake, Unity Catalog, MLflow, dbt, and associated Azure data services (Data Lake, SQL, Synapse, ADF)
  • Proven experience migrating from legacy data warehouse and reporting systems to modern cloud platforms
  • Experience in data modeling, data warehousing (OLTP, OLAP), security, governance, DevOps, and MLOps
  • Hands-on with CI/CD, version control (Git), and DevOps practices for data engineering
  • Excellent communication, collaboration, problem-solving and analytical skills with the ability to collaborate effectively with cross-functional teams and influence decision-making at all levels of the organization
  • Architect-level certifications in Databricks and DBT are preferred
  • Ability to mentor and coach junior architects, engineers, and BI/reporting developers
Job Responsibility:
  • Lead the design and implementation of scalable, secure, and high-performing data solutions using Azure Databricks, Delta Lake, and Delta Live Tables
  • Own the complete architecture process, including requirements gathering, solution design, documentation, technical reviews and implement advanced data solutions on the Databricks platform
  • Define best practices for cloud data architecture, data modeling, ELT/ETL pipelines, workspace setup, cluster management, repos, and job orchestration
  • Coordinate and communicate with onshore and offshore teams, including end-users, data engineers, reporting specialists, and business analysts
  • Ensure solution compliance with data privacy, security, and governance standards
  • Conduct performance tuning and optimization of Databricks clusters
  • Ensure data quality, lineage, and observability across all pipelines
  • Monitor and troubleshoot data pipelines to ensure data quality and reliability
  • Lead the integration of Databricks with other data platforms and tools
  • Create processes and workflows to support data solutions documents, and lead solution reviews and audits for quality
  • Fulltime