The Engineering Lead Analyst is a senior-level position responsible for leading a variety of engineering activities, including the design, acquisition, and deployment of hardware, software, and network infrastructure, in coordination with the Technology team. The overall objective of this role is to ensure that quality standards are met within existing and planned frameworks.
Job Responsibilities:
Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data (a minimal pipeline sketch follows this list)
Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
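By way of illustration, here is a minimal sketch, in Scala against the Spark APIs named in the requirements below, of the kind of streaming pipeline these responsibilities describe: reading a hypothetical Kafka topic, applying basic quality filters, and landing cleaned records in a Parquet data lake. The broker address, topic name, schema, and paths are illustrative placeholders, not actual systems, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

```scala
// Minimal sketch of a streaming ingestion pipeline: Kafka -> Spark -> Parquet data lake.
// All names, schemas, and paths below are hypothetical placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object WealthDataIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("wealth-data-ingest")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical schema for incoming position records.
    val schema = new StructType()
      .add("accountId", StringType)
      .add("symbol", StringType)
      .add("quantity", DoubleType)
      .add("asOf", TimestampType)

    // Read the raw event stream from Kafka (placeholder broker and topic).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "wealth.positions")
      .load()

    // Parse the JSON payload and drop records that fail basic quality checks.
    val parsed = raw
      .select(from_json($"value".cast("string"), schema).as("rec"))
      .select("rec.*")
      .filter($"accountId".isNotNull && $"quantity" >= 0)

    // Land the cleaned stream in the data lake (placeholder paths).
    parsed.writeStream
      .format("parquet")
      .option("path", "/lake/wealth/positions")
      .option("checkpointLocation", "/lake/_checkpoints/positions")
      .start()
      .awaitTermination()
  }
}
```

The checkpoint location is what lets Spark Structured Streaming recover from failure without reprocessing or dropping records, which is one concrete way the "data quality, integrity, and availability" responsibility above shows up in pipeline design.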
Requirements:
10-15 years of hands-on experience with Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix scripting, and other big data frameworks
4+ years of experience with relational SQL and NoSQL databases such as Oracle, MongoDB, and HBase
Strong proficiency in Python, Scala, SQL, and Spark's Java APIs, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.); see the illustrative sketch after this list
Experience with data integration, migration, and large-scale ETL on common platforms such as PySpark, DataStage, or Ab Initio, including ETL design and build, data handling, reconciliation, and normalization
Experience with data modeling (OLAP, OLTP, logical/physical modeling, normalization) and performance tuning
Experience working with multiple large datasets and data warehouses
Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
Strong analytic skills and experience working with unstructured datasets
Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchains (Git, Bitbucket, Jira)
Experience with external cloud platforms such as OpenShift, AWS, and GCP
Experience with container technologies (Docker, Pivotal Cloud Foundry) and supporting orchestration frameworks (Kubernetes, OpenShift, Mesos)
Experience integrating search solutions with middleware and distributed messaging such as Kafka
Highly effective interpersonal and communication skills with both technical and non-technical stakeholders
Experience across the full software development life cycle
Excellent problem-solving skills and a strong mathematical and analytical mindset
Ability to work in a fast-paced financial environment
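As referenced in the requirements above, the following is a brief, self-contained Scala sketch of the core Spark concepts named there (RDDs and DataFrames), combined with a simple count-and-total reconciliation of the kind used in ETL validation. All account IDs and figures are illustrative, and local mode is used only to keep the example runnable.

```scala
// Illustrative sketch: core Spark concepts plus a basic ETL reconciliation check.
// All data below is made up; local[*] is for demonstration only.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{count, sum}

object SparkConceptsDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-concepts-demo")
      .master("local[*]") // local mode for this sketch only
      .getOrCreate()
    import spark.implicits._

    // RDD: the low-level distributed collection API with functional transformations.
    val balancesRdd = spark.sparkContext
      .parallelize(Seq(("acct-1", 100.0), ("acct-2", 250.0), ("acct-1", 50.0)))
      .reduceByKey(_ + _) // sum balances per account key

    // DataFrame: schema-aware and Catalyst-optimized; the usual choice for pipelines.
    val source = balancesRdd.toDF("accountId", "balance")
    val target = source.filter($"balance" > 0) // stand-in for a loaded target table

    // Reconciliation: compare row counts and control totals between source and target.
    val src = source.agg(count("*"), sum("balance")).as[(Long, Double)].first()
    val tgt = target.agg(count("*"), sum("balance")).as[(Long, Double)].first()
    if (src == tgt) println(s"Reconciled: rows=${src._1}, total=${src._2}")
    else println(s"Reconciliation mismatch: source=$src target=$tgt")

    spark.stop()
  }
}
```

DataFrames are generally preferred over raw RDDs in production pipelines because the Catalyst optimizer can plan and tune the query; RDDs remain useful when fine-grained control over partitioning or custom transformations is needed.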