Data Platform Engineer - OLAP

Adyen

Location:
Netherlands, Amsterdam

Contract Type:
Not provided

Salary:

Not provided

Job Description:

Sitting at the intersection of Data Engineering, Backend Engineering, and Systems Engineering, Data Platform Engineers at Adyen build the foundational layer of tooling and processes for our on-premise Analytical Data Platforms. These tools support tens of products, hundreds of developers, and thousands of daily jobs that add to Adyen’s strong portfolio of capabilities. We’re looking for an expert with deep knowledge of distributed systems to focus on our internal Online Analytical Processing (OLAP) ecosystem. You’ll collaborate with Data and ML Engineers to continuously improve this ecosystem, and with other platform engineers to position it properly within the larger Data/AI/ML Platform capabilities, powered by Hadoop, Kubernetes, Spark, Trino, Flink, and Ray.

Job Responsibility:

  • Performance at Scale: Develop and maintain high-performance OLAP systems, supporting multi-tenant query workloads and ingestion pipelines with real Big Data scale
  • Reliability: Work with system reliability in mind, ensuring high availability for business-critical analytical products through observability and engineering excellence
  • Productize the Platform: Build self-service tooling that enables Data Engineers and Analysts to independently manage their data assets and diagnose issues
  • Data Quality: Engineer automated frameworks to validate data integrity and leverage metadata-driven tools to enhance data discoverability, lineage, and cataloging across the ecosystem
  • Ecosystem Integration: Architect seamless integrations of the OLAP ecosystem with adjacent distributed systems (e.g. storage, messaging, and batch/stream processing systems)
  • Efficiency & Governance: Monitor and optimize cluster resource-efficiency while making sure the platform adheres to global security and data privacy standards
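The "Data Quality" responsibility above, automated frameworks that validate integrity, can be illustrated with a minimal metadata-driven sketch in plain Python. This is a toy illustration, not Adyen's actual tooling; the rule names, columns, and sample rows are invented:

```python
# Toy metadata-driven data-quality framework: rules are declared as
# data, then applied to a batch of rows. All names here are invented.

def not_null(rows, column):
    """Fail if any row has a missing value in `column`."""
    bad = sum(1 for r in rows if r.get(column) is None)
    return bad == 0, f"{bad} null value(s) in '{column}'"

def min_row_count(rows, threshold):
    """Fail if the batch is suspiciously small."""
    return len(rows) >= threshold, f"got {len(rows)} rows, expected >= {threshold}"

def run_checks(rows, rules):
    """Apply each declared rule; return (name, passed, detail) per rule."""
    return [(name, *check(rows, **kwargs)) for name, check, kwargs in rules]

batch = [
    {"merchant": "a", "amount": 10},
    {"merchant": "b", "amount": None},
]
rules = [
    ("amount_not_null", not_null, {"column": "amount"}),
    ("min_rows", min_row_count, {"threshold": 1}),
]
for name, passed, detail in run_checks(batch, rules):
    print(f"{name}: {'PASS' if passed else 'FAIL'} ({detail})")
```

In a production platform the rule declarations would typically live in a metadata catalog rather than in code, so Data Engineers can add checks without redeploying anything.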

Requirements:

  • Fluency in Python and/or Java
  • Team player with strong communication skills
  • Ability to work closely with diverse stakeholders you enable (analysts, data scientists, data engineers, etc.) and depend upon (infrastructure, security, etc.)
  • Experience with OLAP technologies such as Druid, ClickHouse, Pinot, Doris, StarRocks, etc.
  • Experience with CI/CD pipelines for code and infrastructure automation
  • Experience in Kubernetes
  • Experience in infrastructure and large-scale private cloud systems
  • Additional experience developing and maintaining:
    • Other distributed data and compute systems, like Spark, Trino, etc.
    • Data modelling for databases
    • Real-time and batch data pipelines (e.g. via Kafka, Spark Streaming), with an eye for frameworks and an emphasis on user-friendliness and quality
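For context on the OLAP side of the role: the core operation that engines like Druid or ClickHouse optimize at scale, grouping events by dimensions and pre-aggregating measures, can be sketched in a few lines of plain Python. This is illustrative only; the event data is invented:

```python
from collections import defaultdict

# Sketch of an OLAP-style rollup: group raw events by a set of
# dimensions and pre-aggregate a measure. OLAP engines answer
# analytical queries from aggregates like these instead of
# rescanning raw rows.

events = [
    {"country": "NL", "status": "ok",   "amount": 10.0},
    {"country": "NL", "status": "ok",   "amount": 5.0},
    {"country": "NL", "status": "fail", "amount": 2.0},
    {"country": "DE", "status": "ok",   "amount": 7.5},
]

def rollup(rows, dimensions, measure):
    """Aggregate `measure` (sum and count) per combination of `dimensions`."""
    cube = defaultdict(lambda: {"sum": 0.0, "count": 0})
    for row in rows:
        key = tuple(row[d] for d in dimensions)
        cube[key]["sum"] += row[measure]
        cube[key]["count"] += 1
    return dict(cube)

print(rollup(events, ["country", "status"], "amount"))
```

The multi-tenant challenge named in the posting is essentially this operation run concurrently over billions of rows, which is where columnar storage, partitioning, and pre-aggregation strategies come in.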

Nice to have:

Experience with Golang or Rust is also appreciated.

Additional Information:

Job Posted:
March 05, 2026

Work Type:
On-site work

Similar Jobs for Data Platform Engineer - OLAP

Principal Data Engineer

We are on the lookout for a Principal Data Engineer to help define and lead the ...
Location
United Kingdom
Salary:
Not provided
Dotdigital
Expiration Date
Until further notice
Requirements
  • Extensive experience delivering Python-based projects in the data engineering space
  • Extensive experience working with SQL and NoSQL database technologies (e.g. SQL Server, MongoDB, and Cassandra)
  • Proven experience with modern data warehousing and large-scale data processing tools (e.g. Snowflake, dbt, BigQuery, ClickHouse)
  • Hands-on experience with data orchestration tools like Airflow, Dagster, or Prefect
  • Experience using cloud environments (e.g. Azure, AWS, GCP) to process, store, and surface large-scale data
  • Experience using Kafka or similar event-based architectures (e.g. Pub/Sub, AWS SQS, Azure Event Hubs, AWS Kinesis)
  • Strong grasp of data architecture and data modelling principles for both OLAP and OLTP workloads
  • Comfortable with the wider software development lifecycle, including agile ways of working and continuous integration/deployment of data solutions
  • Experience as a Lead or Principal Engineer on large-scale data initiatives or product builds
  • Demonstrated ability to architect data systems and data structures for high-volume, high-throughput systems
Job Responsibility
  • Lead the design and implementation of scalable, secure and resilient data systems across streaming, batch and real-time use cases
  • Architect data pipelines, models and storage solutions that power analytical and product use cases, using primarily Python and SQL via orchestration tooling that runs workloads in the cloud
  • Leverage AI to automate both data processing and engineering processes
  • Assure and drive best practices relating to data infrastructure, governance, security and observability
  • Work with technologists across multiple teams to deliver coherent features and data outcomes
  • Support the data team to help adopt data engineering principles
  • Identify, validate and promote new tools and technologies that improve the performance and stability of data services
What we offer
  • Parental leave
  • Medical benefits
  • Paid sick leave
  • Dotdigital day
  • Share reward
  • Wellbeing reward
  • Wellbeing Days
  • Loyalty reward
  • Full-time

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location
United States, Glendale
Salary:
Not provided
Enormous Enterprise
Expiration Date
Until further notice
Requirements
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, BigQuery)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs OLAP environments
  • Candidates must work on W2; no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, or TN-Visa
  • Airflow
Job Responsibility
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets, so that our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Data Engineer

This is a data engineer position - a programmer responsible for the design, deve...
Location
India, Chennai
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements
  • 5-8 years of experience working in data ecosystems
  • 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other Big Data frameworks
  • 3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio) - ETL design and build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Job Responsibility
  • Ensuring high-quality software development, with complete documentation and traceability
  • Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large scale financial data
  • Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
  • Ensure efficient data storage and retrieval using Big Data technologies
  • Implement best practices for Spark performance tuning, including partitioning, caching, and memory management
  • Maintain high code quality through testing, CI/CD pipelines, and version control (Git, Jenkins)
  • Work on batch processing frameworks for market risk analytics
  • Promoting unit/functional testing and code inspection processes
  • Work with business stakeholders and Business Analysts to understand the requirements
  • Work with other data scientists to understand and interpret complex datasets
  • Full-time

Data Engineering Lead

The Data Engineering Lead is a strategic professional who stays abreast of developments...
Location
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other Big Data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio) - ETL design and build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain (Git, Bitbucket, Jira)
Job Responsibility
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data
  • Full-time

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...
Location
Singapore, Singapore
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other Big Data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark/DataStage/Ab Initio) - ETL design and build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain (Git, Bitbucket, Jira)
Job Responsibility
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
What we offer
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits
  • Full-time

Senior Data Engineer

NorthBay is looking for a Senior Data Engineer with a strong passion for databas...
Location
Pakistan, Lahore; Islamabad; Karachi
Salary:
250000.00 - 350000.00 PKR / Month
NorthBay
Expiration Date
Until further notice
Requirements
  • Up to 3 years of relevant experience in database deployment, management, or migration
  • Hands-on experience with PostgreSQL, MongoDB, DocumentDB, MySQL, or Oracle
  • Experience using AWS Database Migration Services (AWS DMS) for data migration projects
  • Knowledge of NoSQL databases and DocumentDB is a plus
  • Understanding of ETL/ELT pipelines, data engineering, and data flow orchestration
  • Familiarity with backup/recovery, replication, and clustering concepts
  • Strong analytical, problem-solving, and communication skills
Job Responsibility
  • Support and execute database migration initiatives across multiple database platforms and AWS environments
  • Collaborate with technical leads to ensure successful, secure, and validated database migrations
  • Develop and maintain data validation plans, migration scripts, and testing frameworks
  • Work closely with application teams to ensure business continuity, performance, and SLAs are maintained during migration
  • Perform database performance tuning, monitoring, and troubleshooting to ensure reliability and scalability
  • Contribute to database design, replication, clustering, backup/recovery, and CDC (Change Data Capture) strategies
  • Demonstrate strong understanding of OLTP and OLAP workloads and migration best practices
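The CDC (Change Data Capture) strategy named in the responsibilities above can be illustrated at its simplest as a snapshot diff over a keyed table. Real CDC tools such as AWS DMS read the database transaction log instead of diffing snapshots, but the emitted insert/update/delete event model is the same; all table data here is invented:

```python
# Toy CDC: compare an old and a new snapshot of a table keyed by
# primary key, and emit the change events that turn `old` into `new`.

def capture_changes(old, new):
    """Return a list of (op, key, row) change events."""
    changes = []
    for key, row in new.items():
        if key not in old:
            changes.append(("insert", key, row))
        elif old[key] != row:
            changes.append(("update", key, row))
    for key in old:
        if key not in new:
            changes.append(("delete", key, old[key]))
    return changes

old = {1: {"name": "alice"}, 2: {"name": "bob"}}
new = {1: {"name": "alice"}, 2: {"name": "robert"}, 3: {"name": "carol"}}
for op, key, row in capture_changes(old, new):
    print(op, key, row)
```

During a live migration, a stream of events like these is what keeps the target database in sync with the source while applications continue writing to it.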
What we offer
  • Fuel expense reimbursement
  • Paid holidays and vacation
  • Medical outpatient reimbursement & health insurance
  • Opportunity to work in a highly collaborative, AWS-focused environment on impactful migration projects
  • Full-time
