We are looking for a Data Engineer to join our Platform Engineering team on a 12‑month fixed‑term contract. This role bridges data engineering, analytics, and applied data science, with a strong focus on reusability, explainability, and regulatory alignment. You will work closely with the Principal Engineer to design, build, and operationalise a high‑quality, governed data foundation that enables analytics, advanced modelling, and AI use cases across the organisation.
Job Responsibilities:
Contribute to the design and evolution of the enterprise data foundation, including core and presentation data layers
Build well‑defined, reusable data products (datasets, features, semantic models) that can be consumed by analytics, Artificial Intelligence (AI) models, and downstream applications
Partner with data engineers to define data structures that support historical accuracy, auditability, and lineage
Perform exploratory data analysis to identify patterns, data quality issues, and AI opportunities
Support AI use cases by ensuring data is fit‑for‑purpose, well‑documented, and reproducible
Work within the enterprise data governance framework, including data quality rules and monitoring, and contributions to metadata, lineage, and the business glossary
Collaborate with Data Owners to resolve data issues and clarify definitions
Ensure datasets and models meet regulatory, privacy, and audit requirements relevant to financial services
Collaborate closely with engineering, architecture, compliance, and business teams
Contribute to standards, templates, and best practices for analytics and data science delivery
Support enablement of other teams by creating reusable assets and documentation
Take ownership of the day‑to‑day operability of the data integration platform so that business functions run smoothly
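To illustrate the data quality rules and monitoring mentioned above, here is a minimal sketch of the kind of rule‑based check such a governance framework typically runs. Plain Python is used for simplicity (pandas would be typical in practice), and the field names (customer_id, balance) are hypothetical, not the actual schema:

```python
def check_quality(records):
    """Return a list of (rule, row_index) violations for a batch of records."""
    violations = []
    seen_ids = set()
    for i, row in enumerate(records):
        # Completeness rule: customer_id must be present and non-empty
        if not row.get("customer_id"):
            violations.append(("missing_customer_id", i))
        # Uniqueness rule: customer_id must not repeat within the batch
        elif row["customer_id"] in seen_ids:
            violations.append(("duplicate_customer_id", i))
        else:
            seen_ids.add(row["customer_id"])
        # Validity rule: balance must be a non-negative number
        balance = row.get("balance")
        if not isinstance(balance, (int, float)) or balance < 0:
            violations.append(("invalid_balance", i))
    return violations

batch = [
    {"customer_id": "C1", "balance": 100.0},
    {"customer_id": "C1", "balance": -5.0},   # duplicate id, negative balance
    {"customer_id": None, "balance": 20.0},   # missing id
]
issues = check_quality(batch)
```

In production such rules would usually be declared as metadata and evaluated by a monitoring job, with violations routed to the relevant Data Owner.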
Requirements:
Strong experience with Python for data analysis and modelling (e.g. pandas, NumPy, scikit‑learn or equivalent)
Solid SQL skills and experience working with cloud data warehouses and lakehouses
Experience working in a modern data platform (e.g. Microsoft Fabric, Synapse, Snowflake, Databricks)
Understanding of data modelling concepts (e.g. dimensional models, Data Vault or similar enterprise patterns)
Proven experience working on data foundations, not just dashboards or isolated models
Experience creating reusable, governed datasets or features intended for multiple downstream consumers
Familiarity with metadata management, data lineage, and data quality frameworks
Experience working in financial services or another regulated industry
Awareness of data privacy, auditability, and model risk considerations
Ability to balance innovation with control, documentation, and traceability
Strong analytical thinking with the ability to explain complex concepts simply
Comfortable working with ambiguity and helping shape the right data approach
Collaborative mindset and ability to work closely with senior engineers, architects, and business stakeholders
Working understanding of CI/CD pipelines
Nice to have:
Experience with Microsoft Purview or similar governance/cataloguing tools
Exposure to GenAI use cases (LLMs, embeddings, retrieval‑augmented generation)
Experience contributing to an AI or Data Centre of Excellence model
Experience in preparing data assets specifically for AI and GenAI use cases, including feature engineering, embedding, and structured/unstructured data preparation
Degree or qualification in machine learning
Industry certifications in the technologies above are highly desirable
Tertiary degree in software engineering, computer science, or a related discipline
Experience supporting AI use cases or early‑stage production models is highly desirable