This is a genuinely exciting opportunity to join early in the lifecycle of a new data platform, helping design and build ingestion pipelines from the ground up. You'll join a small, fast-paced team responsible for building a new data product focused on ingesting, normalising and analysing complex datasets from asset management firms. The data is messy, inconsistent and often unstructured (Excel, flat files, PDFs, bespoke formats, etc.). The opportunity is to architect robust Python-based pipelines that can intelligently process and transform this data into structured, usable outputs. There's also a forward-looking AI/ML roadmap, particularly around classification and intelligent data extraction.
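To give a flavour of the kind of normalisation work involved, here is a minimal sketch of turning a messy flat-file extract into structured records. It uses only Python's standard library; the function names, the sample data and the specific cleaning rules (snake_casing headers, parsing accountant-style negatives like "(300)") are purely illustrative assumptions, not the team's actual pipeline.

```python
import csv
import io
import re

def normalise_header(name: str) -> str:
    """Lower-case, trim and snake_case a raw column name (illustrative rule)."""
    return re.sub(r"[^a-z0-9]+", "_", name.strip().lower()).strip("_")

def coerce_number(value: str):
    """Try to parse currency-ish strings ('1,200.50', '(300)') into floats;
    fall back to the trimmed original string if parsing fails."""
    cleaned = value.strip().replace(",", "")
    if cleaned.startswith("(") and cleaned.endswith(")"):
        cleaned = "-" + cleaned[1:-1]  # accountant-style negative
    try:
        return float(cleaned)
    except ValueError:
        return value.strip()

def ingest_csv(raw: str) -> list[dict]:
    """Read a messy CSV string into a list of normalised records."""
    reader = csv.DictReader(io.StringIO(raw))
    return [
        {normalise_header(k): coerce_number(v) for k, v in row.items()}
        for row in reader
    ]

# Hypothetical sample: ragged headers, thousands separators, bracketed negatives.
messy = 'Fund Name , NAV (USD)\nAlpha Fund,"1,200.50"\nBeta Fund,(300)\n'
print(ingest_csv(messy))
# → [{'fund_name': 'Alpha Fund', 'nav_usd': 1200.5},
#    {'fund_name': 'Beta Fund', 'nav_usd': -300.0}]
```

A production pipeline would layer schema validation, per-source format adapters and orchestration (e.g. Dagster) on top of this core pattern.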
Job Responsibilities:
Designing and building Python-based data ingestion pipelines
Working with ETL/orchestration frameworks (e.g. Dagster, AWS-native tools, Databricks or similar)
Handling complex, unstructured datasets
Deploying within AWS-based cloud infrastructure
Requirements:
3+ years’ experience in data product or data engineering roles
Strong Python with hands-on ETL/pipeline development
Experience working with messy, real-world datasets
Solid understanding of data structures, algorithms and statistical concepts