Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock we strive to empower our employees and actively engage you in our success. With over $9.5 trillion in assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference.
Job Responsibilities:
Design, develop, and maintain data analytics infrastructure
Work with a project manager or drive the project management of team deliverables
Work with subject matter experts and users to understand the business and their requirements. Help determine the optimal dataset and structure to deliver on those user requirements
Work within a standard data / technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner
Work with team lead to understand and help prioritize the team’s queue of work
Automate periodic (daily, weekly, monthly, quarterly, or other) reporting processes to minimize or eliminate the associated developer BAU activities (see the sketch after this list)
Leverage industry standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance
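As a rough illustration of the kind of automated periodic reporting this role owns, a scheduled PySpark job could look like the minimal sketch below; every table name, path, and column here is hypothetical.

from pyspark.sql import SparkSession, functions as F

# Hypothetical daily reporting job; table, path, and column names are illustrative only.
spark = SparkSession.builder.appName("daily_positions_report").getOrCreate()

# Pull the current day's curated positions from a Hive table.
positions = spark.table("curated.positions").where(
    F.col("as_of_date") == F.current_date()
)

# Aggregate to the grain the business users asked for.
report = (
    positions.groupBy("portfolio_id", "asset_class")
    .agg(
        F.sum("market_value").alias("total_market_value"),
        F.count("*").alias("position_count"),
    )
)

# Write the output where the BI layer (e.g., Tableau or Power BI extracts) picks it up,
# so no recurring developer step is needed once the job is scheduled.
report.write.mode("overwrite").partitionBy("asset_class").parquet(
    "/reports/daily_positions/"
)

Running such a job through standard scheduling and orchestration tooling, rather than hand-run scripts, is what removes the recurring developer BAU work mentioned above.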
Requirements:
3+ years of experience writing ETL, data curation, and analytical jobs using Hadoop-based distributed computing technologies such as Spark/PySpark and Hive (see the sketch after this list)
3+ years of experience working with large enterprise databases, preferably cloud-based databases/data warehouses such as Snowflake on an Azure or AWS setup
Knowledge of and experience working with data science, machine learning, and generative AI frameworks in Python (e.g., Azure OpenAI, Meta, etc.)
Knowledge of and experience building reporting and dashboards using BI tools such as Tableau and Microsoft Power BI
Prior experience working with source code version management tools such as GitHub
Prior experience working with and following Agile-based workflows and ticket-based development cycles
Prior experience setting up infrastructure and working on big data analytics
Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
Experience working with SMEs/business analysts and partnering with stakeholders for sign-off
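To make the first two requirements concrete, the sketch below shows a small PySpark curation job of the kind referenced above. It is illustrative only: the source paths, column names, and Snowflake connection details are hypothetical, and it assumes the Snowflake Spark connector is available on the cluster.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("trade_curation_etl").getOrCreate()

# Extract: read raw trade files landed by an upstream feed (path is illustrative).
raw_trades = spark.read.parquet("/landing/trades/")

# Transform: basic curation - drop duplicates, standardize types, filter bad rows.
curated = (
    raw_trades.dropDuplicates(["trade_id"])
    .withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
    .withColumn("notional", F.col("notional").cast("double"))
    .filter(F.col("notional").isNotNull())
)

# Load option 1: publish to a Hive table for downstream Spark/Hive consumers.
curated.write.mode("overwrite").saveAsTable("curated.trades")

# Load option 2 (hypothetical connection details): push to a Snowflake warehouse
# via the Snowflake Spark connector; authentication (e.g., sfPassword or key pair)
# would come from the platform's secret store rather than the job itself.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "etl_service_user",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH",
}
(curated.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "TRADES")
    .mode("overwrite")
    .save())

This is only a sketch of the shape of the work; the actual datasets, curation rules, and deployment steps follow the team's standard data/technology workflow.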