We are assembling an A-team of highly skilled, autonomous, AI-first engineers, and we are seeking an exceptional Full Stack Data Engineer to join our high-performing, co-located squads in Canada. This is a hands-on role for an engineer who is passionate about data, proficient in building end-to-end data solutions, and committed to using AI tools to maximize productivity. The ideal candidate will be instrumental in designing, developing, and optimizing robust data pipelines, from ingestion to consumption, using Python, PySpark, and other big data technologies. We are looking for an AI-first thinker who can deeply understand the functional domains our work impacts and contribute meaningfully to our data strategy and culture.
Job Responsibilities:
Operate end-to-end in the design, development, and implementation of full-stack data solutions
Collaborate closely within small, co-located squads
Develop, maintain, and optimize highly efficient and resilient data ingestion, processing, and transformation pipelines
Implement sophisticated data storage solutions leveraging big data technologies
Design and implement scalable data models and schemas
Engage effectively with data consumers, data scientists, and business stakeholders
Implement real-time data streaming and complex event-driven architectures
Adhere to and contribute to best practices in data engineering and software development
Exhibit high autonomy and agency
Innovate with AI-powered development
Participate in technical discussions and contribute to the evolution of our big data technology stack
Expertly troubleshoot and resolve challenging technical issues
Requirements:
4+ years of progressive, hands-on experience as a Data Engineer
Expert-level proficiency in Python
Deep understanding and extensive hands-on experience with the entire Apache Spark ecosystem
Advanced proficiency with Hive
Expert knowledge of distributed computing fundamentals, HDFS, and Hadoop ecosystem
Proficiency in SQL, complex query optimization, and advanced data warehousing concepts
Extensive experience with various data storage formats and leading data lake solutions
Proven experience with enterprise-grade NoSQL databases
Expert-level experience with Apache Kafka
Extensive experience with big data services on major cloud platforms
Demonstrated mastery and innovative application of AI coding tools
A proactive, AI-first mindset
Expert ability to articulate the intricacies of the functional domain
Advanced understanding of software engineering principles
Extensive experience with RESTful API design
Strong expertise in containerization technologies
Master-level proficiency with version control systems
Exceptional problem-solving, analytical, and debugging skills
Superior communication, presentation, and interpersonal skills