We are seeking a highly skilled and experienced Python Developer to join our Data Engineering & Analytics team. You will play a key role in designing, developing, and maintaining robust data pipelines, APIs, and data processing workflows. You will work closely with data analysts and business teams to understand data requirements and deliver insightful data-driven solutions.
Job Responsibilities:
Design, develop, and maintain robust and scalable data pipelines using Python, SQL, PySpark, and streaming technologies like Kafka
Perform efficient data extraction, transformation, and loading (ETL) for large volumes of data from diverse data providers, ensuring data quality and integrity
Build and maintain RESTful APIs and microservices to support seamless data access and transformation workflows
Develop reusable components, libraries, and frameworks to automate data processing workflows, optimizing for performance and efficiency
Apply statistical analysis techniques to uncover trends, patterns, and actionable business insights from data
Implement comprehensive data quality checks and perform root cause analysis on data anomalies, ensuring data accuracy and reliability
Collaborate effectively with data analysts, business stakeholders, and other engineering teams to understand data requirements and translate them into technical solutions
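The ETL and data-quality responsibilities above can be sketched in a few lines. This is a minimal, stdlib-only illustration (in practice the role calls for PySpark, Kafka, and real ETL tooling); the CSV data, the `payments` table, and the "amount must be present" rule are all hypothetical stand-ins:

```python
import csv
import io
import sqlite3

# Hypothetical sample input: raw CSV rows from an upstream data provider.
RAW_CSV = """id,amount,currency
1,100.50,USD
2,,EUR
3,42.00,USD
"""

def extract(raw):
    """Extract: parse raw CSV text into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Transform: type-cast fields and drop rows failing a basic
    quality check (here: a missing amount)."""
    clean = []
    for row in rows:
        if row["amount"]:  # quality gate: amount must be present
            clean.append({"id": int(row["id"]),
                          "amount": float(row["amount"]),
                          "currency": row["currency"]})
    return clean

def load(rows, conn):
    """Load: write validated rows into a relational store."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO payments VALUES (:id, :amount, :currency)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(count)  # row 2 is rejected by the quality check, so 2 rows load
```

The same extract/transform/load split scales up naturally: in a PySpark pipeline each stage becomes a DataFrame operation, and the quality gate becomes a filter plus an anomaly report.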
Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field
5+ years of proven experience in Python development, with a strong focus on data handling, processing, and analysis
Extensive experience building and maintaining RESTful APIs and working with microservices architectures
Proficiency in building and managing data pipelines using APIs, ETL tools, and Kafka
Solid understanding and practical application of statistical analysis methods for business decision-making
Hands-on experience with PySpark for large-scale distributed data processing
Strong SQL skills for querying, manipulating, and optimizing relational database operations
Deep understanding of data cleaning, preprocessing, and validation techniques
Knowledge of data governance, security, and compliance standards is highly desirable
Strong analytical, debugging, problem-solving, and communication skills
Ability to work both independently and collaboratively within a team environment
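As a concrete example of the statistical-analysis and anomaly-detection skills listed above, a z-score check is one common, simple approach (a sketch only; it assumes roughly normal data, and the `daily_counts` values are invented for illustration):

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return the values whose z-score exceeds the threshold --
    a basic statistical check for data anomalies."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Hypothetical daily record counts with one obvious outlier.
daily_counts = [100, 102, 98, 101, 99, 500]
print(zscore_anomalies(daily_counts, threshold=2.0))  # flags 500
```

Flagged values would then feed the root-cause analysis step: was the spike a genuine business event, a duplicate load, or an upstream provider error?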
Nice to have:
Experience with CI/CD tools and Git-based version control
Experience in the financial or banking domain
Familiarity with basic machine learning (ML) concepts and experience preparing data for ML models
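"Preparing data for ML models" often starts with feature scaling. A minimal min-max scaler in plain Python (a sketch; production pipelines would typically use scikit-learn's `MinMaxScaler` instead):

```python
def min_max_scale(values):
    """Scale numeric features into [0, 1], a common ML preprocessing step.
    A constant column scales to all zeros to avoid division by zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_scale([10, 20, 30]))  # [0.0, 0.5, 1.0]
```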