Data Ops Capability Deployment - Analyst is a seasoned professional role that contributes to the development and implementation of improved data governance, data management practices, and advanced data analytics capabilities.
Job Responsibilities:
Bring a hands-on data engineering background and a thorough understanding of distributed data platforms and cloud services
Apply a sound understanding of data architecture and data integration with enterprise applications
Research and evaluate new data technologies, data mesh architecture and self-service data platforms
Work closely with Enterprise Architecture Team on the definition and refinement of overall data strategy
Address performance bottlenecks, design batch orchestration, and deliver reporting capabilities
Perform complex data analytics (data cleansing, transformation, joins, aggregation, etc.) on large, complex datasets
Build analytics dashboards & data science capabilities for Enterprise Data platforms
Communicate complicated findings and propose solutions to a variety of stakeholders
Understand business and functional requirements provided by business analysts and convert them into technical design documents
Work closely with cross-functional teams, e.g., Business Analysis, Product Assurance, Platforms and Infrastructure, Business Office, Control, and Production Support
Prepare handover documents and manage SIT, UAT and Implementation
Demonstrate an in-depth understanding of how the development function integrates within overall business/technology to achieve objectives
Requires a good understanding of the banking industry.
Requirements:
10+ years of active development background and experience in Financial Services or Finance IT
Experience with Data Quality/Data Tracing/Data Lineage/Metadata Management Tools
Hands-on experience with ETL using PySpark on distributed platforms, including data ingestion, Spark optimization, resource utilization, capacity planning, and batch orchestration
In-depth understanding of Hive, HDFS, Airflow, and job schedulers
Strong programming skills in Python with experience in data manipulation and analysis libraries (Pandas, NumPy)
Ability to write complex SQL queries and stored procedures
Experience with DevOps practices and tools such as Jenkins/Lightspeed, Git, and Copilot
Strong knowledge of one or more BI visualization tools, such as Tableau or Power BI
Proven experience implementing data lake/data warehouse solutions for enterprise use cases
Nice to have:
Exposure to analytical tools and AI/ML.
What we offer:
Equal opportunity employer
Accessibility accommodations
Global benefits to support employee well-being and growth