Architect and deliver modern data platform solutions with a strong emphasis on Databricks and contemporary cloud data technologies. Build secure, scalable, and high‑performing data environments that enable analytics, reporting, and enterprise‑wide data initiatives.
Job Responsibilities:
Oversee and execute migrations from legacy relational databases into Databricks-based ecosystems
Design and structure scalable data pipelines and foundational data infrastructure aligned with organizational goals
Create and maintain ETL/ELT processes within Databricks to ensure efficient ingestion, transformation, and delivery of data
Continuously refine and optimize data workflows to improve performance, stability, and data quality across all processes
Manage end-to-end data transitions to ensure operational continuity with minimal business disruption
Monitor Databricks workloads and optimize performance, scalability, and cost efficiency across compute and storage layers
Partner with data engineers, scientists, analysts, and product stakeholders to gather requirements and build fit‑for‑purpose data solutions
Establish and enforce data engineering best practices, development standards, and architectural guidelines
Assess emerging tools and technologies to enhance pipeline efficiency, reliability, and automation capabilities
Provide technical direction, guidance, and mentorship to junior engineers and team members
Collaborate closely with DevOps and infrastructure teams to deploy, manage, and support data systems in production
Ensure all data solutions meet compliance standards, organizational security policies, and regulatory obligations
Work with enterprise architects and IT leadership to align data architecture with broader technology strategies and long-term roadmaps
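The migration and ETL/ELT duties above can be sketched in miniature with a hypothetical extract-transform-load step. This is only an illustration of the pattern: stdlib sqlite3 stands in for a legacy RDBMS, and all table and column names are invented; in practice this work would run on Databricks with Spark and Delta Lake.

```python
import sqlite3

def migrate_orders(legacy_conn, target_conn):
    # Toy stand-in for an RDBMS-to-Databricks migration.
    # Table and column names here are hypothetical.
    # Extract: pull rows from the legacy relational table.
    rows = legacy_conn.execute(
        "SELECT order_id, customer, amount_cents FROM orders"
    ).fetchall()
    # Transform: trim customer names and convert cents to a decimal amount.
    cleaned = [
        (order_id, customer.strip(), amount_cents / 100)
        for order_id, customer, amount_cents in rows
    ]
    # Load: write the normalized rows into the target table.
    target_conn.execute(
        "CREATE TABLE IF NOT EXISTS orders_clean "
        "(order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
    )
    target_conn.executemany(
        "INSERT INTO orders_clean VALUES (?, ?, ?)", cleaned
    )
    target_conn.commit()
    return len(cleaned)

# Demo with in-memory databases standing in for source and target systems.
legacy = sqlite3.connect(":memory:")
legacy.execute(
    "CREATE TABLE orders (order_id INTEGER, customer TEXT, amount_cents INTEGER)"
)
legacy.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "  Acme  ", 1999), (2, "Globex", 250)],
)
target = sqlite3.connect(":memory:")
migrated = migrate_orders(legacy, target)
```

The same extract-transform-load shape scales up to the Spark-based pipelines the role describes, where the transform step runs distributed and the load target is a governed lakehouse table.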
Requirements:
Bachelor’s or Master’s degree in Computer Science or equivalent hands‑on experience
Deep, hands-on expertise working with Databricks for data engineering, ETL development, and migration initiatives
Experience operating within major cloud ecosystems such as AWS, Azure, or Google Cloud
Strong foundation in modern big‑data tools, distributed processing frameworks, and large‑scale data technologies
Solid understanding of data warehousing principles, dimensional modeling, and advanced SQL development
Background working with traditional relational database systems and migrating data from on-premises RDBMS to cloud-native platforms
Proficiency in core programming and scripting languages, especially Python and SQL
Strong grasp of data governance concepts, data quality frameworks, and enterprise‑grade security practices
Extensive experience with relational databases (e.g., SQL Server) and expertise in database design, schema modeling, and performance considerations
Working knowledge of container technologies like Docker and orchestration platforms such as Kubernetes
Exceptional analytical and problem‑solving capabilities, paired with strong communication skills and the ability to collaborate across cross‑functional teams