The Senior Data Engineer drives Circle K's cloud-first strategy to unlock the power of data across the company. This position plays a key role in partnering with Technical Development stakeholders to enable analytics for long-term success, creating, troubleshooting, and supporting ETL pipelines and cloud infrastructure.
Job Responsibilities:
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
Determine solutions that are best suited to develop a pipeline for a particular data source
Develop data flow pipelines to extract, transform, and load data from various data sources
Develop ETL/ELT solutions efficiently using Azure cloud services and Snowflake
Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
Provide clear documentation for delivered solutions and processes
Identify and implement internal process improvements for data management
Stay current with and adopt new tools and applications
Build cross-platform data strategy to aggregate multiple sources
Communicate proactively with stakeholders and mentor/guide junior team members
Requirements:
Bachelor's Degree in Computer Engineering, Computer Science or related discipline
Master's Degree preferred
5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
5+ years of experience with setting up and operating data pipelines using Python or SQL
5+ years of advanced SQL Programming: PL/SQL, T-SQL
5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
5+ years of extensive hands-on experience in Azure, preferably with data-heavy/analytics applications leveraging relational and NoSQL databases, data warehousing, and big data
5+ years of experience with Azure Data Factory, Azure Synapse Analytics (Azure SQL DW), Azure Analysis Services, Azure Databricks/Spark, Blob Storage, and Azure Functions
5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Strong analytical abilities and intellectual curiosity
In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts
Understanding of REST and good API design
Experience working with Apache Iceberg, Delta tables and distributed computing frameworks
Strong collaboration and teamwork skills
Excellent written and verbal communication skills
Self-starter, motivated, and able to work in a fast-paced development environment
Agile experience highly desirable
Proficiency with the development environment, including IDEs, database servers, Git, continuous integration, unit-testing tools, and defect management tools
Nice to have:
ADF, Databricks, and Azure certifications are a plus
Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks
Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools