Imagine being at the forefront of protecting financial integrity and enabling critical, data-driven decisions at a leading financial services institution. Our client is dedicated to innovation and to safeguarding their operations through robust data solutions. They are seeking a talented, driven individual to make a significant impact on their fraud data analytics and reporting capabilities. This is an exceptional opportunity to work with cutting-edge cloud technologies and contribute directly to the security and stability of financial systems.

As a key member of the team, you will design, build, and maintain scalable data pipelines on a leading cloud platform. Your expertise will directly support the development of effective data solutions for fraud data analytics and reporting. You will collaborate with cross-functional teams to ensure data integrity, reliability, and scalability, and your work will power the insights that protect customers and their assets.
Job Responsibilities:
Design, build, and maintain robust and scalable data pipelines using cloud platform tools such as BigQuery, Cloud Storage, Dataflow (Apache Beam), Cloud Composer (Airflow), and Pub/Sub
Develop high-performance, production-grade Python and SQL code, optimizing queries for efficient data extraction, transformation, and loading (ETL) processes
Implement complex data models in BigQuery, leveraging partitioning, clustering, and materialized views to achieve optimal performance
Collaborate closely with cross-functional teams, including business customers and Subject Matter Experts, to gather data requirements and deliver impactful solutions
Implement and uphold best practices for data quality, data governance, and data security
Proactively monitor and troubleshoot data pipeline issues, ensuring high availability and performance of critical data flows
Contribute to strategic data architecture decisions, providing recommendations for continuous improvement of data pipelines
Stay current with emerging trends and technologies in cloud-based data engineering and cyber security to drive innovation
Lead the investigation and resolution of identified data issues, taking ownership to close them out in a timely manner
Document processes and procedures thoroughly to support accurate metrics and operational clarity
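To give a flavor of the kind of transformation and data-quality logic the responsibilities above describe, here is a minimal, self-contained Python sketch. It is purely illustrative: in this role such logic would run inside Dataflow/BigQuery pipelines, and every field name, rule, and threshold below is invented for the example rather than taken from the client's systems.

```python
from dataclasses import dataclass

# Hypothetical record shape and rules, invented for this sketch.
@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str

def transform(rows: list[dict]) -> list[Transaction]:
    """Apply simple business logic: drop malformed rows, normalize fields."""
    out = []
    for row in rows:
        if "account_id" not in row or row.get("amount") is None:
            continue  # data-quality rule: skip incomplete records
        out.append(Transaction(
            account_id=str(row["account_id"]),
            amount=float(row["amount"]),
            country=str(row.get("country", "unknown")).upper(),
        ))
    return out

def flag_high_value(txns: list[Transaction],
                    threshold: float = 10_000.0) -> list[str]:
    """Return account IDs with any transaction above the threshold."""
    return sorted({t.account_id for t in txns if t.amount > threshold})
```

In a production pipeline, equivalent logic would typically be expressed as Apache Beam transforms or BigQuery SQL, with partitioned and clustered target tables for performance.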
Requirements:
Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related field
8+ years of hands-on experience in data management, including gathering data from diverse sources, consolidating it into centralized locations, and transforming it with business logic for consumption in visualization and data analysis
Strong expertise in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Composer, and related cloud platform services
Proficiency in Python and SQL for data processing and automation
Extensive experience with ETL processes and data pipeline design
Excellent problem-solving skills and meticulous attention to detail
Strong communication and collaboration skills, with the ability to listen actively and express ideas clearly
Ability to thrive in an Agile work environment, delivering incremental value to customers by effectively managing and prioritizing tasks
Nice to have:
Deep expertise in real-time processing using Kafka or Pub/Sub
Experience with Power BI development and visualization
Familiarity with modern data stacks such as Snowflake or Databricks (though the role's primary focus is the cloud platform above)
Knowledge of DevOps practices and tools like Terraform
Familiarity with data visualization tools such as Tableau, Grafana, and/or Looker
Google Professional Data Engineer certification
Demonstrated domain knowledge in Fraud and Financial Crime