As an Azure and Databricks Data Engineer, you will be responsible for designing, building, and supporting the data-driven applications that enable innovative, customer-centric digital experiences.
Job Responsibilities:
Designing, building, and supporting data-driven applications that enable innovative, customer-centric digital experiences
Building reliable, supportable, and performant data lake and data warehouse products
Employing best practices in development, security, and accessibility
Building and productionizing modular and scalable data ELT/ETL pipelines and data infrastructure
Building curated common data models designed by the Data Modelers
Working closely with infrastructure and cyber security teams and Senior Data Developers to ensure data is secure
Cleaning, preparing and optimizing datasets for performance
Supporting Business Intelligence Analysts in modelling data for visualization and reporting
Troubleshooting issues related to ingestion, data transformation and pipeline performance
Collaborating with Business Analysts, Data Scientists, Senior Data Engineers, Data Analysts, Solution Architects, and Data Modelers
Assisting in identifying, designing, and implementing internal process improvements
Working with tools in the Microsoft Stack
Working within the agile SCRUM work management framework
Assisting in building the data catalog and maintaining relevant metadata
Developing optimized, performant data pipelines and models at scale using technologies such as Python, Spark and SQL
Documenting as-built pipelines and data products
Implementing orchestration of data pipeline execution
Creating tooling in collaboration with senior data engineers and data architects
Working with Continuous Integration/Continuous Delivery and DevOps pipelines
Monitoring the ongoing operation of in-production solutions
Implementing and managing appropriate access to data products
Writing and performing automated unit and regression testing for data product builds
Participating in peer code review sessions
Requirements:
Completion of a four-year university degree in computer science, computer/software engineering, or another relevant program in data engineering, data analysis, artificial intelligence, or machine learning
Experience as a Data Engineer designing and building data pipelines
Fluency in creating data processing frameworks using Python, PySpark, Spark SQL, and SQL
Experience with Azure Data Factory, ADLS, Synapse Analytics and Databricks
Experience building data pipelines for Data Lakehouses and Data Warehouses
Good understanding of data structures and data processing frameworks
Knowledge of data governance and data quality principles
Effective communication skills to translate technical details to non-technical stakeholders