
Senior Python Developer

Citi
https://www.citi.com/


Location:
Mississauga, Canada

Category:
IT - Software Development


Contract Type:
Not provided


Salary:
Not provided

Job Description:

We are seeking a highly skilled and experienced Python Developer to join our Data Engineering & Analytics team. You will play a key role in designing, developing, and maintaining robust data pipelines, APIs, and data processing workflows. You will work closely with data analysts and business teams to understand data requirements and deliver insightful data-driven solutions.

Job Responsibility:

  • Design, develop, and maintain robust and scalable data pipelines using Python, SQL, PySpark, and streaming technologies like Kafka
  • Perform efficient data extraction, transformation, and loading (ETL) for large volumes of data from diverse data providers, ensuring data quality and integrity
  • Build and maintain RESTful APIs and microservices to support seamless data access and transformation workflows
  • Develop reusable components, libraries, and frameworks to automate data processing workflows, optimizing for performance and efficiency
  • Apply statistical analysis techniques to uncover trends, patterns, and actionable business insights from data
  • Implement comprehensive data quality checks and perform root cause analysis on data anomalies, ensuring data accuracy and reliability
  • Collaborate effectively with data analysts, business stakeholders, and other engineering teams to understand data requirements and translate them into technical solutions

Requirements:

  • Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field
  • 5+ years of proven experience in Python development, with a strong focus on data handling, processing, and analysis
  • Extensive experience building and maintaining RESTful APIs and working with microservices architectures
  • Proficiency in building and managing data pipelines using APIs, ETL tools, and Kafka
  • Solid understanding and practical application of statistical analysis methods for business decision-making
  • Hands-on experience with PySpark for large-scale distributed data processing
  • Strong SQL skills for querying, manipulating, and optimizing relational database operations
  • Deep understanding of data cleaning, preprocessing, and validation techniques
  • Knowledge of data governance, security, and compliance standards is highly desirable
  • Strong analytical, debugging, problem-solving, and communication skills
  • Ability to work both independently and collaboratively within a team environment

Nice to have:

  • Experience with CI/CD tools and Git-based version control
  • Experience in the financial or banking domain
  • Familiarity with basic machine learning (ML) concepts and experience preparing data for ML models

What we offer:

  • Equal opportunity employment
  • Accessibility support
  • Inclusive work environment

Additional Information:

Job Posted:
August 14, 2025

Employment Type:
Full-time

Work Type:
Hybrid work