The Research Engineer supports investors and research teams by building data workflows, research tools, analytics pipelines, and AI-assisted capabilities that help transform investor hypotheses into actionable insights. Embedded within DS&S research pods (asset-class-focused sub-teams), this role involves collaborating with investors and researchers, AI Leads, and the core engineering team to deliver high-quality data assets, exploratory analyses, and research-enabling components.
Job Responsibilities:
Build and maintain web harvesting and data pipelines, transformations, and workflows to support research hypotheses and data analyses (a minimal sketch of this kind of pipeline follows this list)
Develop new capabilities to source high-quality data and create scalable and maintainable data solutions
Develop research utilities, prototypes, dashboards, or exploratory tools as needed by the pod for insight generation or research workflows
Engage with investors and researchers to understand research workflows and translate requirements into data solutions
Collaborate with core engineering teams to adopt data extraction and management tools and technologies to accelerate investor research
Collaborate with AI Leads to source data and build applications that supplement investor research with AI-assisted solutions
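To give candidates a flavor of the pipeline work described above, here is a minimal sketch in Python, the role's primary language. The URL, CSS selectors, and output file are hypothetical placeholders, and requests, BeautifulSoup, and pandas are one plausible stack rather than the team's confirmed tooling:

    # Hypothetical harvest -> transform -> save pipeline; the URL and
    # CSS selectors are placeholders, not a real data source.
    import requests
    import pandas as pd
    from bs4 import BeautifulSoup

    def harvest(url):
        # Fetch one page and pull (ticker, headline) pairs out of the HTML.
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        rows = [
            {"ticker": item.select_one(".ticker").get_text(strip=True),
             "headline": item.select_one(".headline").get_text(strip=True)}
            for item in soup.select(".news-item")  # placeholder selector
        ]
        return pd.DataFrame(rows)

    def transform(df):
        # Normalize and deduplicate before handing the data to researchers.
        df["ticker"] = df["ticker"].str.upper()
        return df.drop_duplicates(subset=["ticker", "headline"])

    if __name__ == "__main__":
        frame = transform(harvest("https://example.com/news"))  # placeholder URL
        frame.to_csv("news_headlines.csv", index=False)  # downstream analytics input

In practice such a pipeline would add scheduling, retries, and validation; the sketch only shows the harvest-transform-save shape of the work.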
Requirements:
Strong programming skills in Python and SQL
Strong problem-solving and analytical skills
Knowledge of, or curiosity about, investment research workflows or similar domains
Prior experience in data engineering, data analytics, or data-intensive applications
Proven ability to work collaboratively across engineering and business teams
3+ years of hands-on experience in data engineering/analytics
B.S. or M.S. degree in Computer Science, Data Science/Engineering, or a related field
Nice to have:
Prior experience with web harvesting tools and techniques (e.g., Scrapy)
Experience with cloud data tools (e.g., Snowflake, BigQuery, GCP)
Exposure to AI/ML tooling, LLM experimentation, or AI applications such as RAG, summarization, and Q&A (e.g., OpenAI, Vertex AI, Anthropic)
Experience creating dashboards, notebooks, or lightweight research tools (e.g., Streamlit, Flask, Tableau, Power BI; a minimal sketch follows this list)
Familiarity with workflow orchestration or automation tools (e.g., Airflow, Kubernetes)
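As with the pipeline above, a lightweight research tool might be as small as the following Streamlit sketch; the file and column names carry over from the hypothetical pipeline example and are equally illustrative:

    # Minimal Streamlit viewer over the CSV produced by the sketch above.
    import pandas as pd
    import streamlit as st

    st.title("Headline Explorer")  # hypothetical pod utility
    df = pd.read_csv("news_headlines.csv")  # output of the earlier pipeline sketch
    ticker = st.selectbox("Ticker", sorted(df["ticker"].unique()))
    st.dataframe(df[df["ticker"] == ticker])
    st.caption(f"{len(df)} headlines harvested in total")

Saved as app.py, this runs with "streamlit run app.py"; a real pod tool would add filters, charts, and connections to governed data sources.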