Job Description:
We are currently seeking a Data Engineer Senior Consultant to join our team in Bengaluru, Karnataka (IN-KA), India (IN).

Job Duties:

ETL Engineering & Data Pipeline Development
- Design, build, and maintain Azure-based ETL pipelines (e.g., Data Factory, Databricks, Data Lake) to ingest, clean, transform, and aggregate compensation-related datasets across multiple regions.
- Engineer upstream processes to produce 9–10 monthly aggregated output files (customer, revenue, product, sales rep, etc.), delivered 3× per month.
- Ensure repeatability, monitoring, orchestration, and error handling for all ingestion and transformation workflows.
- Contribute to the creation of a master stitched data file to replace Varicent's current data-assembly functions.

Business Rules Engine Development
- Build, configure, and maintain a rules engine (ODM, Drools, or similar) to externalize business logic previously embedded in code.
- Translate rules and logic captured by analysts and business SMEs into scalable, testable engine components.
- Implement versioning, governance, and validation mechanisms for all logic used in compensation calculations.
- Ensure rule changes can be managed safely, reducing risk in high-stakes compensation scenarios.

Data Modeling & Modern Architecture Implementation
- Partner with data architects to implement the target-state Azure data architecture for compensation analytics.
- Develop optimized, scalable physical data models aligned to business logic and downstream needs.
- Integrate with MDM sources and temporary EU workarounds, helping unify regional variations into a consolidated model.
- Build reusable, parameterized data pipelines and frameworks supporting long-term extensibility.

Cross-Functional Collaboration
- Work closely with business analysts, data analysts, architects, and product owners across NA and Europe.
- Participate in data discovery sessions, helping interpret and validate logic, rules, and data patterns.
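To illustrate the rules-engine duty above (externalizing compensation logic from code into versioned, testable components), here is a minimal sketch in plain Python. It is a hypothetical illustration only; in the role this would be done in ODM, Drools, or a similar engine, and every name, rate, and rule here is invented for the example:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: rules expressed as data plus small predicates,
# carrying a version tag so changes can be governed and validated
# outside the calculation code. Regions and rates are illustrative.

@dataclass(frozen=True)
class Rule:
    name: str
    version: str
    applies: Callable[[dict], bool]  # predicate over a compensation record
    rate: float                      # commission rate when the rule fires

RULESET_V1 = [
    Rule("eu_standard", "1.0", lambda r: r["region"] == "EU", 0.05),
    Rule("na_standard", "1.0", lambda r: r["region"] == "NA", 0.04),
]

def commission(record: dict, ruleset: list[Rule]) -> float:
    """Apply the first matching rule; fail loudly when nothing matches,
    so unmapped records surface in validation rather than in payouts."""
    for rule in ruleset:
        if rule.applies(record):
            return round(record["revenue"] * rule.rate, 2)
    raise ValueError(f"no rule matched record: {record}")
```

Keeping rules as versioned data rather than hard-coded branches is what makes the "rule changes can be managed safely" requirement testable: each ruleset version can be replayed against historical records before promotion.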
- Support three Scrum teams delivering compensation modernization, ensuring clarity on transformations and dependencies.
- Collaborate with QA, data quality testers, and governance teams to enforce validation standards.

Quality, Performance & Reliability
- Implement data quality checks, profiling, reconciliation, and alerting across ingestion and transformation pipelines.
- Engineer performance-optimized pipelines capable of processing large, complex datasets multiple times per month.
- Ensure compliance with audit, traceability, and business continuity expectations for compensation data.
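As a sketch of the reconciliation-and-alerting duty above, the following minimal Python function compares a source total against an aggregated output total and flags drift beyond a tolerance. This is an assumption-laden illustration (function name, tolerance, and return shape are invented); in practice this check would run inside the Databricks/Data Factory stack with alerting wired to the orchestration layer:

```python
# Hypothetical reconciliation check: compare a source-side total against
# the aggregated output and flag relative drift beyond a tolerance.

def reconcile(source_total: float, output_total: float,
              tolerance: float = 0.001) -> dict:
    """Return a reconciliation result; 'ok' is False when the relative
    difference between source and output exceeds the tolerance."""
    diff = abs(source_total - output_total)
    rel = diff / source_total if source_total else float("inf")
    return {"diff": round(diff, 2), "ok": rel <= tolerance}
```

A check like this, run after each of the 3× monthly deliveries, gives the audit and traceability expectations a concrete artifact: a pass/fail record per output file per cycle.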