We are seeking a hands-on Data Platform Architect to lead the design, implementation, and modernization of our enterprise-wide data platform, spanning data governance, lakehouse architecture, engineering pipelines, analytics, and AI-driven solutions. This role requires deep technical expertise, strategic vision, and execution-focused leadership to build a scalable, governed, and intelligent data ecosystem across cloud and on-premises environments.
Job Responsibilities:
Define and continuously evolve the target data architecture across the stack—governance, engineering, modeling, lakehouse, AI/ML
Translate business and technical goals into scalable and resilient platform designs
Own and maintain architectural roadmaps, standards, and decision frameworks
Act as the bridge between architects, business SMEs and analysts, data engineers, and analytics teams to ensure alignment and compliance with platform standards
Design and implement modern ELT/ETL pipelines using Spark, Python, SQL, and Scala, alongside cloud-native components (e.g., Fivetran, Databricks, Snowflake, BigQuery); a pipeline sketch follows this list
Build and maintain Lakehouse platforms using Delta Lake, Iceberg, or equivalent technologies
Manage data ingestion from heterogeneous sources including ERP, CRM, IoT, and third-party APIs
Guide hands-on development of robust, reusable, and automated data flows
Implement and enforce data governance frameworks covering data lineage, metadata management, and access controls; a governance sketch follows this list
Partner with Data Stewards and Governance Analysts to catalog data domains, define entities, and ensure SOX compliance
Drive adoption of tools such as Atlan and Unity Catalog for metadata, data quality, and stewardship
Develop data models (ERDs, dimensional and 3NF) and define canonical data representations
Lead the integration of AI/ML solutions and agentic architectures into the data platform to automate insights, recommendations, and decision flows
Collaborate with AI/ML teams to embed vectorization, tokenization, and semantic enrichment into the data stack; an embedding sketch follows this list
Prototype innovative GenAI applications for operational and business intelligence
Review solution designs and provide architectural guidance to engineering teams
Mentor technical staff and foster best practices and continuous improvement
Collaborate with DevOps to embed CI/CD, version control, and environment automation across the data lifecycle
Continuously assess and improve platform reliability, scalability, performance, and cost-efficiency
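To illustrate the pipeline and lakehouse work described above, here is a minimal PySpark sketch of an ELT flow, assuming a Databricks-style environment with Delta Lake available. The landing path, table name (analytics.orders_cleansed), and column names (order_id, order_ts, amount) are hypothetical placeholders, not references to any real system.

```python
# Minimal ELT sketch: raw JSON -> cleansed Delta table (illustrative only).
# Assumes a Spark session with Delta Lake support, e.g. on Databricks.
# All paths, table names, and columns below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt_sketch").getOrCreate()

# Extract: read raw order events landed by an ingestion tool (e.g., Fivetran).
raw = spark.read.json("/landing/orders/")  # hypothetical landing path

# Transform: light cleansing and typing, kept small so the flow stays reusable.
cleansed = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])           # idempotent re-runs
    .filter(F.col("order_id").isNotNull())  # basic quality gate
)

# Load: write to a governed Delta table; partitioning is a tuning choice.
(
    cleansed.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.orders_cleansed")  # hypothetical catalog table
)
```

In practice a flow like this would run under an orchestrator (e.g., Airflow or Databricks Workflows) with data-quality checks around it; the sketch only shows the shape of the code.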
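For the governance responsibilities, the following sketch applies access control and documentation through Unity Catalog SQL issued from Python. The catalog, schema, table, and group names are hypothetical, and exact statement syntax should be confirmed against the Unity Catalog documentation for your runtime.

```python
# Governance sketch: grants and documentation via Unity Catalog SQL.
# All object and principal names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("governance_sketch").getOrCreate()

# Grant read access to an analyst group rather than to individual users.
spark.sql(
    "GRANT SELECT ON TABLE main.analytics.orders_cleansed TO `data_analysts`"
)

# Document the table so it is discoverable and self-describing in the catalog.
spark.sql(
    "COMMENT ON TABLE main.analytics.orders_cleansed "
    "IS 'Cleansed order events; one row per order_id. Source: landing-zone JSON.'"
)
```

Group-based grants and table comments are deliberately minimal examples; lineage capture and tagging would build on the same catalog.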
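For the vectorization and semantic-enrichment work, here is a small sketch using the open-source sentence-transformers library; the model choice, sample records, and downstream vector store are illustrative assumptions, not a prescribed stack.

```python
# Embedding sketch: attach dense vectors to records for semantic search or
# GenAI retrieval. Assumes the sentence-transformers package is installed;
# the model and the example records are illustrative.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

# Hypothetical product records to enrich with embeddings.
records = [
    {"id": 1, "description": "Industrial torque sensor, 0-500 Nm range"},
    {"id": 2, "description": "Wafer-handling robot arm, cleanroom rated"},
]

# Encode descriptions into one dense vector per record.
vectors = model.encode([r["description"] for r in records])

for rec, vec in zip(records, vectors):
    rec["embedding"] = vec.tolist()  # ready to load into a vector index

print(len(records[0]["embedding"]))  # embedding dimensionality (384 here)
```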
Requirements:
12+ years of hands-on experience in data architecture, engineering, and analytics delivery
Proven success in building modern data platforms on cloud (AWS, Azure, GCP)
Deep knowledge of data lakehouse architectures (e.g., Databricks, Fabric)
Proficiency with Python, SQL, Spark, and orchestration frameworks
Experience with ETL/ELT tools (e.g., Informatica, Talend, Fivetran) and containerization (Docker, Kubernetes)
Strong background in data modeling (ERD, star/snowflake, canonical models); a star-schema sketch follows this list
Familiarity with REST APIs, GraphQL, and event-driven design
Demonstrated experience integrating AI/ML and GenAI components into data platforms
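As a touchpoint for the data-modeling requirement above, here is a minimal star schema (one fact table, one dimension) sketched as Spark SQL DDL from Python, assuming a Delta-capable environment; every table and column name is hypothetical.

```python
# Dimensional-modeling sketch: a minimal star schema created via Spark SQL.
# Names are hypothetical; a real model would come from governed ERDs and
# canonical entity definitions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_sketch").getOrCreate()

# Dimension: one row per customer, conformed across subject areas.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        name         STRING,
        region       STRING
    ) USING DELTA
""")

# Fact: one row per order, joined to dimensions by surrogate keys.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.fct_orders (
        order_key    BIGINT,
        customer_key BIGINT, -- surrogate key into dim_customer
        order_date   DATE,
        amount       DECIMAL(18, 2)
    ) USING DELTA
""")
```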
Nice to have:
15+ years of experience including solution architecture roles for large-scale data initiatives
Deep expertise in Databricks, Delta Lake, Unity Catalog, and Azure Data Services
Experience with BI/visualization tools such as Power BI, Tableau, and Looker
Understanding of Master Data Management, data profiling, cleansing, and enrichment techniques
Exposure to DataOps and DevOps practices for CI/CD and platform automation
Strong analytical and problem-solving skills with the ability to communicate clearly to both technical and business audiences
Knowledge of high-tech business domains such as engineering, sales, finance, supply chain, or semiconductor operations is a strong plus