The Security Solutions Data Science team is responsible for creating the Artificial Intelligence (AI) and Machine Learning (ML) models behind its flagship product. These models are production-ready and built to support specific products in Mastercard's authentication and authorization networks. The Data Science team is also responsible for developing automated processes for creating models, covering all modeling steps from data extraction through delivery. These processes must be designed to be scalable, repeatable, resilient, and industrialized.

Services within Mastercard is responsible for acquiring, engaging, and retaining customers by managing fraud and risk, enhancing cybersecurity, and improving the digital payments experience. We provide value-added services built on expertise, data-driven insights, and execution.

You will join a dynamic and innovative team working at petabyte scale with cutting-edge big data platforms and technologies. Our mission is to streamline and automate the repetitive aspects of data science to accelerate innovation in AI/ML-driven fraud detection. We specialize in building high-quality, scalable ETL pipelines, developing robust feature engineering workflows, and creating shared tools that empower data scientists across the organization. We also play a key role in deploying model features and ensuring their seamless integration into production environments.
Job Responsibility:
Be an integral part of a creative and innovative team, contributing to collaborative projects and sharing insights to drive data engineering excellence
Work with cutting-edge big data platforms and technologies at petabyte scale, pushing the boundaries of data management and analysis
Write clean and testable code, ensuring that all solutions are robust, maintainable, and efficient
Automate and maintain data workflows in a distributed environment, streamlining processes to enhance productivity and reliability
Analyze and optimize ETL processes, ensuring efficient data extraction, transformation, and loading to support business intelligence and analytics
Requirements:
Proven track record of self-directed learning, demonstrating the ability to acquire new skills and knowledge independently
Strong independent research skills and resourcefulness, enabling you to find solutions and innovate in data engineering
Experience with Python and SQL, showcasing the ability to write clean, readable, and maintainable code
Critical thinking and a drive to produce high-quality work, ensuring that all solutions meet rigorous standards
Good communication skills, enabling effective collaboration with team members and stakeholders
Ability and interest in problem-solving, with a proactive approach to tackling challenges
Openness to learn and apply new technologies, staying current with industry trends and advancements
Understanding of Agile methodologies, with the ability to drive iterative delivery and cross-team collaboration
Effective communicator with the ability to convey complex concepts to both technical and non-technical audiences, and to influence stakeholders across product, engineering, and acquisition teams
Bachelor's degree in Computer Science, Data Analytics, Mathematics, Software Engineering, or a related field, or equivalent practical experience
Nice to have:
Experience with any big data technology, enhancing your ability to work with diverse data engineering tools and platforms
Experience with Spark or Databricks
Experience with AWS, GCP, or any other cloud environments, showcasing proficiency in cloud-based data solutions and infrastructure