The Azure Data Engineer role involves designing and maintaining ETL pipelines using Azure tools. Candidates should have 5–8+ years of experience with SQL, Python, and data modeling. Collaboration with cross-functional teams is essential for ensuring data quality and performance. This position offers the opportunity to work in a dynamic environment focused on data-driven solutions.
Job Responsibilities:
Design, build, and maintain Azure‑based ETL pipelines (e.g., Data Factory, Databricks, Data Lake) to ingest, clean, transform, and aggregate compensation‑related datasets across multiple regions (see the illustrative pipeline sketch after this list)
Engineer upstream processes to produce 9–10 aggregated monthly output files (customer, revenue, product, sales rep, etc.), delivered three times per month
Ensure repeatability, monitoring, orchestration, and error‑handling for all ingestion and transformation workflows
Contribute to the creation of a master stitched data file to replace Varicent’s current data‑assembly functions
Build, configure, and maintain a rules engine (ODM, Drools, or similar) to externalize business logic previously embedded in code
Translate rules and logic captured by analysts and business SMEs into scalable, testable engine components
Implement versioning, governance, and validation mechanisms for all logic used in compensation calculations
Ensure rule changes can be managed safely, reducing risk in high‑stakes compensation scenarios
Partner with data architects to implement the target‑state Azure data architecture for compensation analytics
Develop optimized, scalable physical data models aligned to business logic and downstream needs
Integrate with master data management (MDM) sources and temporary EU workarounds, helping unify regional variations into a consolidated model
Build reusable, parameterized data pipelines and frameworks supporting long‑term extensibility
Work closely with business analysts, data analysts, architects, and product owners across NA and Europe
Participate in data discovery sessions, helping interpret and validate logic, rules, and data patterns
Support three Scrum teams delivering compensation modernization, ensuring clarity on transformations and dependencies
Collaborate with QA, data quality testers, and governance teams to enforce validation standards
Implement data quality checks, profiling, reconciliation, and alerting across ingestion and transformation pipelines
Engineer performance‑optimized pipelines capable of processing large, complex datasets multiple times per month
Ensure compliance with audit, traceability, and business continuity expectations for compensation data
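For orientation, the sketch below shows what one parameterized ingestion‑and‑validation step of this kind might look like in PySpark on Azure Databricks. It is a minimal illustration only: the storage account, container layout, column names (rep_id, revenue), and the 1% quality threshold are all hypothetical placeholders, not details of the actual project.

```python
# Minimal sketch of a parameterized ingestion step with a basic data quality
# gate. All paths, column names, and thresholds below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("comp-ingest").getOrCreate()

def ingest_compensation(region: str, run_date: str):
    # Parameterized source path in the data lake (hypothetical layout)
    src = f"abfss://raw@examplelake.dfs.core.windows.net/comp/{region}/{run_date}/"
    df = spark.read.format("parquet").load(src)

    # Basic cleaning: drop exact duplicates, stamp a standardized region code
    df = df.dropDuplicates().withColumn("region", F.lit(region.upper()))

    # Data quality gate: fail the run if too many rows lack a sales-rep id,
    # so bad inputs never reach compensation calculations downstream
    total = df.count()
    missing = df.filter(F.col("rep_id").isNull()).count()
    if total == 0 or missing / total > 0.01:  # 1% threshold is illustrative
        raise ValueError(f"DQ check failed: {missing}/{total} rows missing rep_id")

    # Monthly aggregation by rep, written to a curated zone that feeds the
    # periodic output files
    out = (df.groupBy("rep_id", "region")
             .agg(F.sum("revenue").alias("total_revenue"),
                  F.count("*").alias("txn_count")))
    dst = f"abfss://curated@examplelake.dfs.core.windows.net/comp_agg/{region}/{run_date}/"
    out.write.mode("overwrite").parquet(dst)
```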
Requirements:
5–8+ years of experience as a Data Engineer with strong hands‑on expertise in Azure (Data Factory, Databricks, Data Lake Storage, SQL, Synapse preferred)
Proven ability to build production‑grade ETL/ELT pipelines supporting complex, multi‑regional business processes
Experience designing or implementing rules engines (Drools, ODM, or similar); a simplified sketch of the externalized‑rules pattern follows this list
Strong SQL skills and experience with data modeling, data orchestration, and pipeline optimization
Experience working in Agile Scrum teams and collaborating across global regions (U.S. and India preferred)
Ability to partner closely with analysts and business stakeholders to translate rules into technical solutions
Excellent debugging, optimization, and engineering problem‑solving skills
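As a companion to the rules‑engine items above, here is a minimal Python sketch of the pattern that Drools or ODM implement at production scale: business rules declared as versioned data, evaluated outside the pipeline code, with an audit trail for governance. Every rule name, field, and rate below is invented for illustration.

```python
# Illustrative sketch of externalized, versioned compensation rules evaluated
# outside the pipeline code. This mimics the Drools/ODM idea in plain Python;
# all rules, fields, and rates are made-up examples.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    name: str
    version: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], dict]

RULES = [
    Rule(
        name="eu_accelerator",
        version="2024.1",
        condition=lambda rec: rec["region"] == "EU" and rec["attainment"] >= 1.0,
        action=lambda rec: {**rec, "payout": rec["payout"] * 1.10},
    ),
    Rule(
        name="cap_payout",
        version="2024.1",
        condition=lambda rec: rec["payout"] > 50_000,
        action=lambda rec: {**rec, "payout": 50_000},
    ),
]

def apply_rules(record: dict, rules=RULES) -> dict:
    # Rules fire in declared order; each firing is logged so every change to
    # a payout stays traceable, per the audit and governance expectations.
    audit = []
    for rule in rules:
        if rule.condition(record):
            record = rule.action(record)
            audit.append(f"{rule.name}@{rule.version}")
    return {**record, "applied_rules": audit}

if __name__ == "__main__":
    rep = {"rep_id": "R-001", "region": "EU", "attainment": 1.2, "payout": 48_000}
    print(apply_rules(rep))
    # payout boosted 10% to 52,800, then capped at 50,000; both rules logged
```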