The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities.
Job Responsibilities:
Design, develop, and optimize large-scale data processing jobs using Apache Spark (Java)
Build and maintain robust, scalable, and efficient ETL/ELT pipelines for data ingestion, transformation, and loading from various sources into data lakes and data warehouses (a minimal ETL sketch follows this list)
Implement data governance, data quality, and data security standards within the data pipelines
Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver appropriate data solutions
Monitor, troubleshoot, and improve the performance of existing data pipelines and Spark applications
Develop and maintain documentation for data pipelines, data models, and data processing logic
Evaluate and implement new big data technologies and tools to enhance our data platform capabilities
Participate in code reviews, design discussions, and contribute to the overall architectural vision of the data platform
Ensure data solutions adhere to best practices for scalability, reliability, and cost-effectiveness
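For context, here is a minimal sketch of the kind of Spark (Java) batch ETL job this role covers: extract raw files, apply basic cleansing, and load partitioned Parquet into a data lake. It is illustrative only; the paths, column names, and schema (s3://raw-bucket/orders/, order_id, order_ts, and so on) are assumptions, not details from this posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class OrdersEtlJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl")
                .getOrCreate();

        // Extract: read raw CSV input (path and schema are hypothetical)
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("s3://raw-bucket/orders/");

        // Transform: drop bad rows, derive a date column, deduplicate
        Dataset<Row> cleaned = raw
                .filter(col("order_id").isNotNull())
                .withColumn("order_date", to_date(col("order_ts")))
                .dropDuplicates("order_id");

        // Load: write partitioned Parquet into the data lake
        cleaned.write()
                .mode("overwrite")
                .partitionBy("order_date")
                .parquet("s3://lake-bucket/curated/orders/");

        spark.stop();
    }
}
```

Partitioning the output by order_date is a common choice here, since it keeps downstream date-bounded scans cheap.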
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field
10+ years of experience in data engineering, with a strong focus on Apache Spark development
Proficiency in Java as the programming language used with Spark
Solid understanding of distributed computing principles and big data technologies (Hadoop, HDFS, YARN, Hive, Kafka)
Strong SQL skills and experience with relational and NoSQL databases
Experience with data warehousing concepts and data modeling (star schema, snowflake schema); a star-schema sketch follows this list
Familiarity with version control systems (Git) and CI/CD pipelines
Excellent problem-solving, analytical, and communication skills
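As an illustration of the star-schema modeling mentioned above, the sketch below joins a fact table to two dimension tables with Spark SQL and aggregates a measure. The warehouse layout is invented for the example: fact_sales, dim_customer, dim_product, and the surrogate keys customer_key and product_key are hypothetical names.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class StarSchemaQuery {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("star-schema-demo")
                .getOrCreate();

        // Hypothetical warehouse layout: one fact table, two dimensions
        Dataset<Row> factSales = spark.read().parquet("s3://warehouse/fact_sales/");
        Dataset<Row> dimCustomer = spark.read().parquet("s3://warehouse/dim_customer/");
        Dataset<Row> dimProduct = spark.read().parquet("s3://warehouse/dim_product/");

        factSales.createOrReplaceTempView("fact_sales");
        dimCustomer.createOrReplaceTempView("dim_customer");
        dimProduct.createOrReplaceTempView("dim_product");

        // Typical star-schema query: join the fact table to its dimensions
        // on surrogate keys, then aggregate a measure
        Dataset<Row> revenueByRegionAndCategory = spark.sql(
                "SELECT c.region, p.category, SUM(f.amount) AS revenue " +
                "FROM fact_sales f " +
                "JOIN dim_customer c ON f.customer_key = c.customer_key " +
                "JOIN dim_product p ON f.product_key = p.product_key " +
                "GROUP BY c.region, p.category");

        revenueByRegionAndCategory.show();
        spark.stop();
    }
}
```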
Nice to have:
Experience with stream processing technologies such as Apache Kafka and Spark Streaming (a streaming sketch follows this list)
Knowledge of orchestration tools such as Apache Airflow, Azure Data Factory, or AWS Step Functions
Familiarity with data visualization tools (e.g., Tableau, Power BI) and reporting
Experience with containerization (Docker, Kubernetes)
Certification in Apache Spark or relevant cloud data engineering platforms
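To illustrate the stream-processing item above, here is a minimal Spark Structured Streaming (Java) sketch that ingests a Kafka topic and appends it to the data lake. The broker address, topic, and paths are placeholders, and running it requires the spark-sql-kafka-0-10 connector on the classpath.

```java
import java.util.concurrent.TimeoutException;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class KafkaIngestJob {
    public static void main(String[] args)
            throws TimeoutException, StreamingQueryException {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-ingest")
                .getOrCreate();

        // Source: subscribe to a Kafka topic (broker and topic are placeholders)
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "orders")
                .load();

        // Kafka delivers key/value as binary; cast the payload to a string
        Dataset<Row> payload = events.selectExpr(
                "CAST(value AS STRING) AS json", "timestamp");

        // Sink: append micro-batches to the lake, with checkpointing for
        // exactly-once file output and restart recovery
        StreamingQuery query = payload.writeStream()
                .format("parquet")
                .option("path", "s3://lake-bucket/raw/orders_stream/")
                .option("checkpointLocation", "s3://lake-bucket/checkpoints/orders_stream/")
                .start();

        query.awaitTermination();
    }
}
```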