The Data Engineer Lead is accountable for developing high-quality data products that support the Bank's regulatory requirements and data-driven decision making. As a Mantas Scenario Development Lead, they will set an example for other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.
Job Responsibilities:
Develop and support scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models
Collaborate with peers to drive simplification, standardization, and incremental improvements
Evaluate existing solutions for strengths and weaknesses (data quality, scalability, latency, security, performance, data integrity, etc.)
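As a purely illustrative sketch of the "design and develop analytical data models" responsibility above (all table and column names are invented for this example), a minimal star-schema fact/dimension pair and a typical aggregate query might look like:

```python
import sqlite3

# Hypothetical star schema: a fact table keyed to a date dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,
    calendar_date TEXT NOT NULL
);
CREATE TABLE fact_transactions (
    txn_id   INTEGER PRIMARY KEY,
    date_key INTEGER REFERENCES dim_date(date_key),
    amount   REAL NOT NULL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01')")
conn.executemany(
    "INSERT INTO fact_transactions VALUES (?, ?, ?)",
    [(1, 20240101, 100.0), (2, 20240101, 250.5)],
)

# Aggregate facts by a dimension attribute -- the typical shape of an
# analytical/reporting query against such a model.
total = conn.execute("""
    SELECT d.calendar_date, SUM(f.amount)
    FROM fact_transactions f JOIN dim_date d USING (date_key)
    GROUP BY d.calendar_date
""").fetchone()
print(total)  # ('2024-01-01', 350.5)
```

Separating descriptive attributes (dimensions) from measures (facts) in this way is what keeps such models simple for reporting consumers to query.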
Requirements:
First Class Degree in Engineering/Technology (4-year graduate course)
12+ years of experience implementing enterprise data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation, and manipulation
Experience modelling data for analytical and reporting consumers
Hands-on Mantas expertise across the full development life cycle, including requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support
Ability to translate business requirements (BRD) into effective technical solutions and documentation (FRD/TSD)
Ability to automate and streamline the build, test and deployment of data pipelines
Experience in cloud native technologies and patterns
A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills, with a high degree of ownership in root cause analysis (RCA) and continuous improvement
An inclination to mentor, and an ability to lead and deliver components independently
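The requirement above to automate and streamline the build, test, and deployment of data pipelines can be illustrated with a small sketch (the function and control rules are invented for this example) of the kind of automated data check a pipeline's CI step might run:

```python
# Illustrative only: a tiny automated data-quality gate of the kind a
# pipeline build/test stage might execute before deployment.
def validate_rows(rows):
    """Split rows into valid and rejected based on simple data controls."""
    valid, rejected = [], []
    for row in rows:
        # Hypothetical controls: amount must be positive, currency present.
        if row.get("amount", 0) > 0 and row.get("currency"):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

sample = [
    {"amount": 10.0, "currency": "USD"},
    {"amount": -5.0, "currency": "USD"},  # fails the amount control
    {"amount": 3.0},                      # fails the currency control
]
valid, rejected = validate_rows(sample)
print(len(valid), len(rejected))  # 1 2
```

Running such checks automatically on every change is what turns "test and deployment of data pipelines" into a repeatable, streamlined process rather than a manual review.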
Nice to have:
Ab Initio: experience developing Co>Op graphs and tuning them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows
Exposure to the Quantexa Entity Resolution engine and scoring development
Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of their underlying architectures and trade-offs
Exposure to data validation, cleansing, enrichment and data controls
Fair understanding of containerization platforms such as Docker and Kubernetes
Exposure to working with event, file, and table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
Experience using a job scheduler, e.g., Autosys. Exposure to business intelligence tools, e.g., Tableau and Power BI
Certification in one or more of the above topics would be an advantage.