The Senior Data Engineer will design, build, and maintain robust data pipelines and platforms to support advanced analytics and business intelligence initiatives. You will work closely with data architects, analysts, and other engineers to ensure the delivery of scalable, secure, and high-performance data solutions. This is a hands-on technical role requiring expertise in data engineering, cloud technologies, and a strong understanding of modern data architectures.
Job Responsibilities:
Design, implement, and optimize ETL/ELT pipelines to transform raw data into usable formats
Develop and maintain data lakehouses using modern cloud platforms such as Microsoft Fabric, Azure Synapse, or Databricks
Ensure data pipelines are scalable, efficient, and fault-tolerant
Collaborate with data architects to design data models and schemas for analytics and reporting
Implement data governance, security, and compliance practices
Integrate diverse data sources (structured, semi-structured, and unstructured) into unified data platforms
Monitor and improve the performance of data pipelines and database systems
Optimize storage, compute, and data querying to enhance cost-efficiency on cloud platforms
Address bottlenecks and implement proactive maintenance strategies
Work closely with data analysts, BI developers, and data scientists to understand business requirements
Mentor junior data engineers and contribute to building a high-performing engineering team
Provide technical guidance and best practices for data engineering processes
Stay up to date with the latest trends in data engineering, analytics, and cloud technologies
Propose and implement new tools and frameworks to improve workflows and scalability
Participate in code reviews, knowledge sharing, and team-building initiatives
Requirements:
Bachelor’s degree in Computer Science, Information Systems, or a related field
5+ years of experience in data engineering or a related role
Strong expertise in SQL, Python, and/or Scala for data processing
Hands-on experience with Microsoft Fabric, Azure Data Factory, Azure Synapse, or other cloud-based ETL/ELT tools
Proficiency in big data technologies (e.g., Spark, Hadoop) and relational databases
Knowledge of data modeling, schema design, and query optimization
Familiarity with data security and governance frameworks
Nice to have:
Certifications in Microsoft Azure or other cloud platforms
Experience with real-time data processing (e.g., Kafka, Event Hubs)
Knowledge of DevOps practices, including CI/CD pipelines
Exposure to tools like Databricks, Power BI, or Snowflake
What we offer:
Competitive salary and performance-based bonuses
Opportunities for professional development, certifications, and career growth
A collaborative and dynamic work environment focused on innovation
Flexible working arrangements
Exciting projects with high-profile clients across various industries