We are seeking a talented and experienced Data Analytics Developer to join our team. You will play a key role in designing, developing, and maintaining our next-generation data analytics platform. This involves building scalable and efficient data pipelines, optimizing query performance, and implementing robust solutions to enable real-time and historical data analysis for business stakeholders.
Job Responsibilities:
design, develop, and maintain data ingestion and processing pipelines to move large volumes of data from various sources into the data platform
utilize Apache Druid to build high-performance, real-time analytics solutions, focusing on data ingestion, indexing, and query optimization
leverage Starburst (Trino) to implement a data mesh architecture and enable query federation across different data sources, including data warehouses, data lakes, and other systems (a minimal federated-query sketch follows this list)
develop and optimize analytical queries and reporting views using SQL within Impala for high-performance interactive and batch queries on large datasets
implement and manage datasets using Apache Iceberg, ensuring features like schema evolution, partitioning, and time-travel are leveraged for robust and reliable data lake operations
apply OLAP (Online Analytical Processing) techniques and tooling to design and develop multidimensional data models (cubes) for fast and efficient analysis
collaborate with data architects and business intelligence teams to create and maintain optimal data models and data warehouse structures
troubleshoot and resolve performance issues, data processing bottlenecks, and data quality problems within the data analytics ecosystem
maintain comprehensive documentation for all data processes, architectures, and data flows.
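To illustrate the kind of federated analytics work described above, here is a minimal sketch of running a cross-catalog query through Starburst (Trino) against an Iceberg table, using the open-source trino Python client. The coordinator host, catalog names, and table names are hypothetical placeholders, not details taken from this posting.

# Minimal sketch: a federated Trino query joining an Iceberg table in the data
# lake with a table exposed through a relational catalog. All connection
# details, catalogs, and tables below are hypothetical placeholders.
import trino

conn = trino.dbapi.connect(
    host="trino-coordinator.example.com",  # hypothetical Starburst/Trino coordinator
    port=8080,
    user="analytics",
    catalog="iceberg",                     # hypothetical Iceberg catalog
    schema="analytics",
)

# Join lake data (Iceberg) with an operational source (e.g. a PostgreSQL catalog)
# in a single federated query.
sql = """
SELECT o.customer_id,
       c.segment,
       SUM(o.amount) AS total_amount
FROM iceberg.analytics.orders AS o
JOIN postgres.crm.customers AS c
  ON o.customer_id = c.id
GROUP BY o.customer_id, c.segment
ORDER BY total_amount DESC
LIMIT 20
"""

cur = conn.cursor()
cur.execute(sql)
for customer_id, segment, total_amount in cur.fetchall():
    print(customer_id, segment, total_amount)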
Requirements:
5-8 years of relevant experience
proven hands-on experience in a data engineering or analytics developer role
strong expertise with SQL and experience with various data analytics tools
deep experience with Apache Druid, including cluster management, data ingestion, and query optimization (a minimal query sketch follows this list)
hands-on experience with Starburst (Trino), including query federation and performance tuning across data sources
solid understanding and experience with Apache Impala for fast, interactive SQL queries on Hadoop or cloud data storage
strong expertise with Apache Iceberg, including implementation for data lake architectures, schema evolution, and partitioning
experience with OLAP cube development, including knowledge of multidimensional expressions (MDX) and best practices
excellent problem-solving, communication, and collaboration skills.
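As a hedged illustration of the Druid experience listed above, the sketch below runs a simple aggregation against Druid's SQL HTTP API (the /druid/v2/sql endpoint on a broker) using Python's requests library. The broker address and the "page_events" datasource are hypothetical placeholders.

# Minimal sketch: hourly event counts over the last day via Druid's SQL API.
# The broker host/port and the "page_events" datasource are hypothetical.
import requests

DRUID_SQL_URL = "http://druid-broker.example.com:8082/druid/v2/sql"

query = """
SELECT TIME_FLOOR(__time, 'PT1H') AS event_hour,
       COUNT(*) AS events
FROM page_events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY 1
ORDER BY 1
"""

response = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
response.raise_for_status()

# By default the SQL endpoint returns a JSON array of row objects.
for row in response.json():
    print(row["event_hour"], row["events"])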
Nice to have:
proficiency in one or more of Java, Python or PySpark for data processing and pipeline development
experience with cloud platforms such as AWS, GCP, or Azure.
What we offer:
equal opportunity employer
reasonable accommodations for persons with disabilities.