As the Data Lakehouse Architect - Databricks, you will take ownership of designing, building, and optimizing enterprise-scale data platforms using Databricks. This role plays a key part in creating a modern data ecosystem that powers advanced analytics, AI/ML, and real-time processing across multi-cloud environments. You will bring deep expertise in Databricks, Spark, and cloud technologies such as AWS and Azure, combined with strong leadership skills to guide engineering teams. The focus will be on delivering secure, scalable, and cost-efficient solutions that align with business goals and drive measurable impact.
Job Responsibilities:
Define and execute the technical roadmap for Databricks-based data platforms, ensuring alignment with enterprise architecture and business priorities
Integrate Databricks with AWS/Azure services (e.g., S3, EC2, Glue) and manage DBU consumption through autoscaling and cost optimization strategies
Lead the technical architecture of the Databricks platform and its integration with third-party providers in a dual-cloud setup
Enable customers to connect and execute robust ETL/ELT pipelines using PySpark and Spark SQL. Optimize cluster performance, job orchestration, and resource utilization for efficiency and cost control
Implement data governance policies, enforce access controls, and ensure compliance with regulatory standards using tools like Unity Catalog
Champion Agile methodologies and ensure disciplined use of Jira for backlog management, sprint planning, and progress tracking
Stay ahead of emerging technologies and Databricks features. Lead proof-of-concepts for new capabilities and drive automation for data workflows
Guide and mentor engineering teams, conduct code reviews, and enforce best practices. Foster collaboration and continuous improvement across cross-functional teams
Excellent stakeholder management and communication skills
Ability to lead large-scale, cross-functional projects in an Agile environment
Requirements:
Bachelor's degree, or equivalent work experience
Six to eight years of relevant technical experience
Five or more years of leading a software engineering team
8+ years in data architecture, platform architecture, or related roles, with 3+ years in Databricks platform leadership
Experience with multi-cloud complexities; Azure/AWS preferred
Expertise in PySpark, Spark SQL, and cloud services (AWS/Azure)
Strong understanding of data architecture, governance, and security frameworks
Hands-on experience with CI/CD tools (GitLab, Jenkins) and infrastructure-as-code (Terraform)
Nice to have:
Familiarity with Unity Catalog, Delta Live Tables, and advanced Databricks features
Financial Services or similar regulated industry experience
What we offer:
Healthcare (medical, dental, vision)
Basic term and optional term life insurance
Short-term and long-term disability
Pregnancy disability and parental leave
401(k) and employer-funded retirement plan
Paid vacation (from two to five weeks depending on salary grade and tenure)
Up to 11 paid holiday opportunities
Adoption assistance
Sick and Safe Leave accruals of one hour for every 30 hours worked, up to 80 hours per calendar year unless otherwise provided by law