Data Ops Capability Deployment - Analyst is a seasoned professional role focused on data engineering, data analytics, and data governance. It requires an in-depth understanding of distributed data platforms, cloud services, and industry-specific skills to design and improve the overall data strategy. Cross-functional collaboration, technical development, and risk management are core responsibilities.
Job Responsibilities:
apply a hands-on data engineering background and an understanding of distributed data platforms and cloud services
research and evaluate new data technologies, data mesh architecture and self-service data platforms
work closely with enterprise architecture team on the definition and refinement of overall data strategy
address performance bottlenecks, design batch orchestrations (see the orchestration sketch after this list), and deliver reporting capabilities
perform advanced data analytics on large, complex datasets
build analytics dashboards & data science capabilities for enterprise data platforms
communicate complicated findings and propose solutions to a variety of stakeholders
understand business and functional requirements and convert them into technical design documents
work closely with cross-functional teams
prepare handover documents and manage system integration testing (SIT), user acceptance testing (UAT), and implementation
demonstrate an understanding of how the development function integrates within the overall business and technology organization to achieve its objectives.
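For illustration, the batch-orchestration responsibility above might look like the following minimal Airflow DAG. This is a sketch, not the team's actual pipeline: the DAG id, task ids, and the ingest/transform/report callables are hypothetical placeholders.

# Minimal, hypothetical Airflow DAG illustrating batch orchestration:
# ingest raw data, transform it, then refresh a reporting extract.
# All ids and callables here are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_data(**context):
    # Placeholder: pull the day's files into the landing zone.
    print("ingesting raw data for", context["ds"])


def transform_data(**context):
    # Placeholder: run the PySpark transformation job.
    print("transforming data for", context["ds"])


def refresh_reports(**context):
    # Placeholder: rebuild the reporting/dashboard extracts.
    print("refreshing reports for", context["ds"])


with DAG(
    dag_id="daily_data_ops_batch",      # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_data", python_callable=ingest_raw_data)
    transform = PythonOperator(task_id="transform_data", python_callable=transform_data)
    report = PythonOperator(task_id="refresh_reports", python_callable=refresh_reports)

    ingest >> transform >> report       # simple linear dependency chain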
Requirements:
10+ years of active development experience; a background in Financial Services or Finance IT is required
experience with Data Quality/Data Tracing/Data Lineage/Metadata Management Tools
hands-on experience with ETL using PySpark on distributed platforms, along with data ingestion, Spark optimization, resource utilization, capacity planning, and batch orchestration (a minimal ETL sketch follows this list)
in-depth understanding of Hive, HDFS, Airflow, and job scheduling
strong programming skills in Python, with experience in data manipulation and analysis libraries (Pandas, NumPy)
ability to write complex SQL and stored procedures
experience with DevOps tooling: Jenkins/Lightspeed, Git, Copilot
strong knowledge of one or more BI visualization tools, such as Tableau or Power BI
proven experience implementing a data lake/data warehouse for enterprise use cases
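As a rough sketch of the PySpark ETL expectation above (not the team's actual code), the example below ingests raw files, applies a SQL aggregation, and writes a partitioned Hive table. Every path, table, and column name is a hypothetical placeholder, and the repartition/partitionBy calls merely illustrate the kind of Spark optimization the role involves.

# Minimal, hypothetical PySpark ETL sketch: ingest raw trades, apply a
# SQL aggregation, and write a partitioned Hive table.
# Paths, table names, and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily_trades_etl")          # hypothetical job name
    .enableHiveSupport()                  # assumes a Hive metastore is configured
    .getOrCreate()
)

# Extract: ingest the day's raw files from the distributed filesystem.
raw = spark.read.parquet("hdfs:///data/raw/trades/2024-01-01")  # placeholder path

# Transform: basic cleansing plus an aggregation, expressed here in SQL.
clean = raw.dropDuplicates(["trade_id"]).filter(F.col("amount") > 0)
clean.createOrReplaceTempView("trades")
daily_totals = spark.sql("""
    SELECT account_id, trade_date, SUM(amount) AS total_amount
    FROM trades
    GROUP BY account_id, trade_date
""")

# Load: repartition to control output file counts (a simple optimization
# lever), then write a partitioned table for downstream reporting.
(
    daily_totals
    .repartition("trade_date")
    .write.mode("overwrite")
    .partitionBy("trade_date")
    .saveAsTable("analytics.daily_trade_totals")  # hypothetical Hive table
)

Partitioning by trade_date keeps downstream reporting scans narrow, and overwrite mode makes daily batch re-runs idempotent.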
Nice to have:
exposure to analytical tools and AI/ML
experience with additional BI tools such as Tableau and Power BI