Embark on a transformative journey as a Senior Data Engineer. At Barclays, our vision is clear: to redefine the future of banking and craft innovative solutions. As a Senior Data Engineer on our Marketing and Communications Platform team, you will drive the next wave of innovation in how Barclays connects with millions of customers across the U.S. Consumer Bank. This role sits at the heart of our digital communication strategy, designing resilient, customer-centric solutions that power personalized engagement at massive scale. You will deliver end-to-end data warehouse products, from requirements gathering and schema architecture to building application objects and shipping a finished data fabric product. Your work will directly help Barclays accelerate modernization, enhance the customer experience, and deliver high-impact capabilities across our channels.
Job Responsibilities:
Design, develop, and maintain end-to-end ETL/ELT pipelines on AWS, along with Snowflake data models, schemas, virtual warehouses, and data sharing
Ingest, transform, validate, and orchestrate structured and semi-structured data, and build Python-based data pipelines, reusable frameworks, automation, and unit tests
Implement orchestration using Airflow, AWS Glue, Lambda, dbt, or similar tools; ensure data quality, reliability, monitoring, logging, and alerting across pipelines; and apply data security, governance, and access controls such as RBAC, masking, and compliance standards (a minimal pipeline sketch follows this list)
Translate business requirements into technical designs, including source-to-target mappings and runbooks, and provide production support, issue resolution, and root-cause analysis to minimize downtime
Support CI/CD, source control, code reviews, and adherence to development standards, and contribute to logical and physical data models and system integration testing
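
To make the scope concrete, here is a minimal, illustrative sketch of the kind of orchestration described above: an Airflow DAG with a Python task that loads staged data into Snowflake. It assumes Airflow 2.4+ and the snowflake-connector-python package, and every name in it (warehouse, database, schema, stage, table, DAG id) is a hypothetical placeholder rather than an actual Barclays system.

import os
from datetime import datetime

import snowflake.connector
from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_snowflake(**context):
    # Connect using credentials injected via environment variables;
    # in production these would come from a secrets manager.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",   # hypothetical virtual warehouse
        database="MARKETING",       # hypothetical database
        schema="COMMS",             # hypothetical schema
    )
    try:
        cur = conn.cursor()
        # COPY INTO loads staged semi-structured data; the stage and
        # table names here are illustrative only.
        cur.execute(
            "COPY INTO comms_events "
            "FROM @comms_stage/daily/ "
            "FILE_FORMAT = (TYPE = 'JSON') "
            "ON_ERROR = 'ABORT_STATEMENT'"
        )
    finally:
        conn.close()


with DAG(
    dag_id="comms_events_daily",    # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="load_to_snowflake",
        python_callable=load_to_snowflake,
    )

In practice a DAG like this would be one node in a larger dependency graph, with data-quality checks and alerting tasks downstream.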
Requirements:
Proven experience designing, developing, and maintaining end-to-end ETL/ELT pipelines on AWS, along with Snowflake data models, schemas, virtual warehouses, and data sharing
Hands-on experience ingesting, transforming, validating, and orchestrating structured and semi-structured data, and building Python-based data pipelines, reusable frameworks, automation, and unit tests (a short testing sketch follows this list)
Experience implementing orchestration with Airflow, AWS Glue, Lambda, dbt, or similar tools, ensuring data quality, reliability, monitoring, logging, and alerting across pipelines, and applying data security, governance, and access controls (RBAC, masking, compliance standards)
Ability to translate business requirements into technical designs, including source-to-target mappings and runbooks, and to provide production support, issue resolution, and root-cause analysis that minimizes downtime
Experience supporting CI/CD, source control, code reviews, and development standards, and contributing to logical and physical data models and system integration testing
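
As an illustration of the unit-testing expectation above, the following sketch shows a small, pure Python transformation covered by pytest tests. The record shape and validation rules are invented for the example, not taken from any real pipeline.

import pytest


def normalize_event(raw: dict) -> dict:
    """Validate and normalize a single semi-structured event record."""
    if not raw.get("customer_id"):
        raise ValueError("customer_id is required")
    return {
        "customer_id": str(raw["customer_id"]).strip(),
        "channel": str(raw.get("channel", "unknown")).lower(),
        "opted_in": bool(raw.get("opted_in", False)),
    }


def test_normalize_event_trims_id_and_lowercases_channel():
    out = normalize_event({"customer_id": " 42 ", "channel": "EMAIL"})
    assert out == {"customer_id": "42", "channel": "email", "opted_in": False}


def test_normalize_event_rejects_missing_id():
    with pytest.raises(ValueError):
        normalize_event({"channel": "sms"})

Keeping transformations pure like this makes them trivially testable and reusable across pipelines, which is the point of the reusable-framework requirement.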
Nice to have:
Experience with enterprise-scale message streaming and eventing platforms such as Kafka or AWS Kinesis, enabling real-time data ingestion, event-driven architectures, and low-latency data pipelines (a minimal consumer sketch follows this list)
Strong problem-solving and analytical skills, providing technical guidance in designing, troubleshooting, and evolving multi-faceted data platforms, architectures, and system integrations
Stakeholder and partner guidance, working effectively across business, product, compliance, and global delivery teams to drive alignment and deliver solutions
Deep familiarity with the banking and financial services domain, including security, compliance, data protection, and regulatory expectations for data and cloud-based solutions
Progressive experience delivering enterprise-scale data warehouse and engineering solutions
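
For the streaming nice-to-have, a minimal consumption sketch might look like the following. It assumes the kafka-python package; the topic, broker address, and consumer group are hypothetical placeholders.

import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "customer-events",                    # hypothetical topic
    bootstrap_servers="localhost:9092",   # placeholder broker
    group_id="comms-ingest",              # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each message would typically be validated and routed into the
# warehouse-loading pipeline; here we simply print it.
for message in consumer:
    print(message.value)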