The Data Engineer reports to the Director of Data and Analytics and plays a critical role in driving the company’s data strategy by building and maintaining the data infrastructure necessary for advanced analytics and business intelligence. You will ensure the seamless integration, management, and security of data across the organization. Your work will empower stakeholders with the insights they need to make data-driven decisions that impact business operations and overall strategy.
Job Responsibilities:
Partner with business unit leaders to understand all data needs and requirements, ensuring alignment with business objectives
Design, develop, and maintain reliable data pipelines that efficiently process large volumes of data according to evolving business needs
Implement systems and practices to ensure data is accessible and usable for business intelligence tools, data analytics teams, and other stakeholders
Manage the loading and transformation of data through both technical processes and business logic
Produce strategic data that adds value and contributes to the organization’s growth and competitiveness
Establish and enforce data quality standards, methodologies, and systems to ensure data accuracy and reliability
Monitor data ingestion and processing, resolving any discrepancies and ensuring smooth data flows
Collaborate with data source providers, Psycho Bunny vendors, and internal stakeholders to address data quality issues effectively
Catalog and document the data sources needed to implement self-service analytics across the organization
Continually improve reporting and analysis processes and practices to enhance data quality and efficiency
Establish and adhere to data governance policies and standards
Ensure all data management practices comply with industry and government regulations and company policies
Maintain comprehensive documentation of data processes, ensuring transparency and accessibility for stakeholders
Design and implement scalable database architectures using Snowflake, tailored to meet the company’s growing data needs
Lead the design and implementation of scalable, cloud-based data pipelines using Snowflake
Develop robust data models for managing retail datasets such as inventory, sales, customer behavior, and supply chain
Optimize Snowflake configurations for performance and cost-efficiency
Build, monitor, and maintain ETL/ELT pipelines to process large volumes of retail data from multiple sources (e.g., POS systems, e-commerce platforms, CRM, and ERP systems)
Leverage tools such as dbt, Apache Airflow, and Astronomer for orchestration and transformation (a minimal illustrative sketch follows this list)
Develop and maintain data models that support efficient querying and reporting across various business domains
Optimize database performance through indexing, partitioning, and other database management techniques
Tune data pipelines for low latency and high availability to meet the dynamic needs of the retail business
Implement strategies for efficient handling of high-velocity data (real-time inventory, demand forecasting, customer preferences, customer 360)
Stay updated with the latest Snowflake features, retail analytics trends, and data engineering best practices
Design and implement frameworks for data quality, governance, and lineage tracking
Implement and maintain robust security measures to protect sensitive data from unauthorized access and breaches
Ensure data practices align with privacy regulations such as GDPR, CCPA, or other regulations relevant to the industry
Manage data access controls, ensuring that only authorized users have access to sensitive information
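To ground the orchestration responsibility above, here is a minimal sketch of an Airflow DAG that ingests raw retail extracts and then runs dbt transformations against Snowflake. This is an illustration only, assuming Airflow 2.4+ and a dbt project; the DAG id, schedule, commands, and project path are hypothetical, not details from this posting.

```python
# Hypothetical sketch: a daily ELT DAG that lands raw data, then runs dbt.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="retail_daily_elt",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    # Stand-in for whatever ingestion mechanism is in use
    # (e.g., Snowpipe or Fivetran loading into a Snowflake stage).
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="echo 'raw POS / e-commerce data landed in Snowflake'",
    )

    # Transform raw tables into reporting models with dbt.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/retail",  # assumed path
    )

    ingest >> transform
```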
Requirements:
6 to 8 years of experience in a related field
Diploma in Computer Science, Data Engineering, or a related field
Extensive experience with Snowflake, including Snowflake-specific capabilities like virtual warehouses, zero-copy cloning, and Snowpipe (see the sketch after this list)
Proficiency in Python, SQL, and Java or Scala for large-scale data processing
Hands-on experience with Kafka, Spark, or similar tools for streaming and batch processing
Advanced knowledge of AWS, Azure, or GCP
Experience integrating Snowflake into cloud ecosystems such as AWS
Proficiency with ETL/ELT tools such as Fivetran, Matillion, or Informatica
Strong project management skills to deliver on complex, multi-stakeholder data projects
Excellent communication skills to collaborate with both technical and non-technical stakeholders
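As a rough illustration of the Snowflake-specific experience mentioned above, here is a minimal sketch, assuming the snowflake-connector-python package, that uses zero-copy cloning to create a development copy of a table. The account, credentials, warehouse, and table names are placeholders, not details from this posting.

```python
# Hypothetical sketch: connect to Snowflake and zero-copy clone a table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",  # placeholder credentials
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",        # assumed virtual warehouse name
    database="RETAIL",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Zero-copy clone: creates a writable copy of SALES without
    # duplicating the underlying storage.
    cur.execute("CREATE OR REPLACE TABLE SALES_DEV CLONE SALES")
    # Quick sanity check against the clone.
    cur.execute("SELECT COUNT(*) FROM SALES_DEV")
    print(cur.fetchone()[0])
finally:
    conn.close()
```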
What we offer:
Sweet discount on the coolest fits
Room to grow in a rapidly expanding brand
A chance to be surrounded by smart and passionate people
A group RRSP/DPSP plan, which includes a very generous match from Psycho Bunny!
On-site gym and on-site cafeteria / bistro with subsidized meals, including breakfast and lunch
Three (3) weeks of vacation
Six (6) wellness days and your birthday off, on us