
Data Provisioning Engineer


Barclays


Location:
Czechia, Prague


Contract Type:
Not provided


Salary:

Not provided

Job Description:

Join Barclays as a Data Provisioning Engineer within the Client Analytics programme, a newly established initiative delivering analytics applications for the Investment Bank by removing manual, report-driven processes. In this role, you will be a dedicated resource supporting the implementation of new market data feeds, working directly with internal and external providers to onboard, model, and integrate data into Barclays’ strategic data platforms. You’ll enable reliable data delivery via APIs and batch feeds, maintain end-to-end pipelines, configure AWS storage, and collaborate closely with a wider engineering and analytics team to ensure high-quality, scalable data solutions that underpin emerging client analytics capabilities.

Job Responsibility:

  • Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data
  • Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures
  • Develop processing and analysis algorithms fit for the intended data complexity and volumes
  • Collaborate with data scientists to build and deploy machine learning models

Requirements:

  • Experience with market data ingestion and provisioning mechanisms, such as SFTP, REST APIs, vendor SDKs, scheduled file drops, and polling processes
  • Strong AWS cloud engineering expertise
  • Proven experience in database modelling and data persistence
  • Experience with data quality controls, validation, and monitoring
  • DevOps experience, including setting up and maintaining data pipelines for ingesting, processing, and delivering external data across platforms and systems
  • Solid knowledge of market data providers, such as FactSet, Dealogic, LSEG, and Bloomberg
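The scheduled-file-drop and polling mechanisms listed above can be sketched roughly as follows. This is an illustrative Python sketch, not Barclays' implementation: the feed schema, directory layout, and function names are all invented for the example, and a real pipeline would add quarantining, checkpointing, and delivery to the target store.

```python
import csv
import io
from pathlib import Path

# Hypothetical schema for a vendor CSV drop -- illustrative only.
EXPECTED_COLUMNS = ["ticker", "price", "as_of_date"]

def validate_feed(raw: str) -> list[dict]:
    """Parse one vendor file and reject rows missing required fields."""
    reader = csv.DictReader(io.StringIO(raw))
    if reader.fieldnames != EXPECTED_COLUMNS:
        raise ValueError(f"unexpected header: {reader.fieldnames}")
    rows = []
    for line_no, row in enumerate(reader, start=2):
        if any(not row[col] for col in EXPECTED_COLUMNS):
            raise ValueError(f"incomplete row at line {line_no}: {row}")
        rows.append(row)
    return rows

def poll_drops(landing_dir: Path, seen: set[str]) -> list[dict]:
    """One polling pass: ingest any file not processed on a previous pass."""
    ingested = []
    for path in sorted(landing_dir.glob("*.csv")):
        if path.name in seen:
            continue
        ingested.extend(validate_feed(path.read_text()))
        seen.add(path.name)
    return ingested
```

In production the polling pass would run on a scheduler, and the `seen` set would live in durable state rather than memory.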

Nice to have:

  • Knowledge of Python
  • Advanced SQL scripting and query optimisation skills
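As a small illustration of the query-optimisation skill mentioned above, the snippet below uses SQLite (table name and data are invented for the example) to show how adding an index changes a lookup from a full table scan to an index search, as reported by `EXPLAIN QUERY PLAN`.

```python
import sqlite3

def query_plan(conn: sqlite3.Connection, sql: str, params=()) -> str:
    """Return SQLite's query plan for a statement as a single string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()
    return " | ".join(row[3] for row in rows)  # column 3 is the plan detail

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (ticker TEXT, price REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("BARC", 2.1), ("HSBA", 6.4), ("LLOY", 0.5)])

lookup = "SELECT price FROM trades WHERE ticker = ?"
before = query_plan(conn, lookup, ("BARC",))   # plan reports a table SCAN

conn.execute("CREATE INDEX idx_trades_ticker ON trades (ticker)")
after = query_plan(conn, lookup, ("BARC",))    # plan reports SEARCH ... USING INDEX
```

The same scan-versus-seek reasoning carries over to server-grade databases, where the plan is read with `EXPLAIN`/`EXPLAIN ANALYZE` instead.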

What we offer:
  • Competitive holiday allowance
  • Life assurance
  • Private medical care
  • Pension contribution

Additional Information:

Job Posted:
March 01, 2026

Employment Type:
Fulltime
Work Type:
Hybrid work

Similar Jobs for Data Provisioning Engineer

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our team.
Location:
Not provided
Salary:
Not provided
Boston Data Pro
Expiration Date
Until further notice
Requirements:
  • Data Engineering: 8 years (Preferred)
  • Data Programming languages: 5 years (Preferred)
  • Data Developers: 5 years (Preferred)
Job Responsibility:
  • Designs and implements standardized data management procedures around data staging, data ingestion, data preparation, data provisioning, and data destruction
  • Ensures quality of technical solutions as data moves across multiple zones and environments
  • Provides insight into the changing data environment, data processing, data storage and utilization requirements for the company, and offers suggestions for solutions
  • Ensures managed analytic assets to support the company’s strategic goals by creating and verifying data acquisition requirements and strategy
  • Develops, constructs, tests, and maintains architectures
  • Aligns architecture with business requirements and uses programming languages and tools
  • Identifies ways to improve data reliability, efficiency, and quality
  • Conducts research for industry and business questions
  • Deploys sophisticated analytics programs, machine learning, and statistical methods to efficiently implement solutions
  • Prepares data for predictive and prescriptive modeling and finds hidden patterns using data


Senior Consultant - Data Architecture & Engineering

Are you looking for a role that motivates and challenges you? Are you ready for ...
Location:
Philippines
Salary:
Not provided
3Cloud
Expiration Date
Until further notice
Requirements:
  • Expertise in designing and implementing logical and physical data models for cloud and hybrid data warehouse environments
  • Implementing data architectures to support a variety of data formats and structures including structured, semi-structured and unstructured data
  • Experience with multiple full life-cycle data warehouse implementations
  • Understanding of data architectures required to support data integration processing
  • Experience with data modeling technologies such as ER/Studio, ER/Win or similar
  • Experience with Microsoft Azure Data Platform services including Azure Data Lake Store, Azure Storage, Azure Synapse, Azure Data Factory, Azure SQL database, Logic Apps, APIs
  • Demonstrated ability to quickly learn, adopt and apply new technologies
  • Data profiling and creation of source to target mappings
  • Ability to provision and configure Azure data service resources
  • Python & SQL Scripting
What we offer:
  • Competitive compensation package, salary, allowance, standard benefits including quarterly and annual performance-based cash bonus and other remuneration
  • Great working environment and company culture with flexible work location
  • Plenty of exciting opportunities to grow your skills in Microsoft and Azure
  • Access to cutting-edge Azure work opportunities and Microsoft teams
Employment Type: Fulltime

Automation NoSQL Data Engineer

HPE Operations is our innovative IT services organization. It provides the exper...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Information Systems, or equivalent
  • 7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems
  • Strong experience in automated deployment, troubleshooting, and fine-tuning technologies such as Apache Cassandra, Clickhouse, MongoDB, Apache Spark, Apache Flink, Apache Airflow, and similar technologies
  • Strong knowledge of NoSQL databases such as Apache Cassandra, Clickhouse, and MongoDB, including their installation, configuration, and performance tuning in production environments
  • Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow
  • Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs
  • Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution
  • Proficiency in authoring, automating, and optimizing Apache Airflow DAGs for orchestrating complex data workflows across Spark and Flink jobs
  • Strong experience with container orchestration platforms (like Kubernetes) to deploy and manage Spark/Flink operators and data pipelines
  • Proficiency in creating, managing, and optimizing Airflow DAGs to automate data pipeline workflows, handle retries, task dependencies, and scheduling
Job Responsibility:
  • Think through complex data engineering problems in a fast-paced environment and drive solutions to reality
  • Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools
  • Provide engineering-level support for data tools and systems deployed in customer environments
  • Respond quickly and professionally to customer emails/requests for assistance
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Fulltime

Principal Data Infrastructure Engineer

As Microsoft continues to push the boundaries of AI, we are on the lookout for p...
Location:
United States, Redmond
Salary:
139900.00 - 274800.00 USD / Year
Microsoft Corporation
Expiration Date
Until further notice
Requirements:
  • Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years experience in business analytics, data science, software development, data modeling, or data engineering
  • OR Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 6+ years experience in business analytics, data science, software development, data modeling, or data engineering
  • OR equivalent experience
  • 4+ years in Big Data Infrastructure, DevOps, SRE, or Platform Engineering
  • 3+ years of hands-on experience managing and scaling distributed systems—from bare-metal to cloud-native environments
  • 2+ years deploying containerized applications using Kubernetes and Helm/Kustomize
  • Solid scripting and automation skills using Python, Bash, or PowerShell
  • Proven success in CI/CD pipeline management, release automation, and production troubleshooting
  • Experience working with Databricks for scalable data processing and analytics
  • Familiarity with security practices in infrastructure environments, including IAM, OAuth, and Kerberos administration
Job Responsibility:
  • Architect and maintain scalable, reliable, and observable Big Data Infrastructure for mission-critical AI applications
  • Champion DevOps and SRE best practices—automated deployments, service monitoring, and incident response
  • Build a self-service big data platform that empowers data and platform engineers and researchers
  • Develop robust CI/CD pipelines and automate infrastructure provisioning using Infrastructure as Code tools (Bicep, Terraform, ARM)
  • Collaborate with Data Engineers, Data Scientists, AI Researchers, and Developers to deliver secure, seamless big data workflows
  • Lead technical design reviews and uphold a clean, secure, and well-documented codebase
  • Proactively identify and resolve bottlenecks in data pipelines and infrastructure
  • Optimize system performance across storage, compute, and analytics layers
  • Partner with Security teams to enhance system security (IAM, OAuth, Kerberos)
  • Embody and promote Microsoft’s values: Respect, Integrity, Accountability, and Inclusion
Employment Type: Fulltime

Senior Database Engineer

We’re looking for a skilled Data Reliability Engineer to join our team for a cli...
Location:
Not provided
Salary:
Not provided
Zoolatech
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience in Data Engineering, Database Reliability, or Infrastructure Operations
  • Strong expertise in PostgreSQL on AWS, including tuning, replication, backups, and HA configurations
  • Experience operating RDBMS databases (PostgreSQL, MySQL, etc.) and Kubernetes technologies is highly desirable
  • Experience provisioning and operating NoSQL databases at scale like Elasticsearch, Elastic Cache, DynamoDB, Neo4j, Mongo, Cassandra, etc.
  • Advanced SQL scripting and query optimization skills
  • Experience with data systems monitoring, alerting, and performance tuning
  • Strong programming/scripting in Java, Python, or Shell
  • Proven experience in designing or supporting complex data ecosystems
  • Solid understanding of cloud infrastructure (preferably AWS) and Infrastructure as Code tools (Terraform)
  • Familiarity with event streaming platforms (Kafka), and observability stacks (New Relic, ELK, etc.)
Job Responsibility:
  • Own and optimize the reliability, availability, and performance of data infrastructure across production systems
  • Lead the design and implementation of resilient, secure, and observable data systems
  • Collaborate with SRE, Security, and Engineering teams to enforce data infrastructure standards and align on architectural decisions
  • Design and implement automation around provisioning, uptime monitoring, data refresh, integrity, backups, and disaster recovery
  • Support application developers with performance tuning, complex query optimization, and database design reviews
  • Analyze and resolve performance bottlenecks and incidents with a focus on long-term solutions
  • Participate in on-call rotation to support production systems and ensure high availability
  • Actively contribute to improving incident response and observability through metrics, alerting, and runbooks
  • Work with technologies such as Java, Ruby on Rails, PostgreSQL, AWS, Kafka, S3, Elasticsearch
What we offer:
  • Paid Vacation
  • Sick Days
  • Floating Holidays
  • Sport/Insurance Compensation
  • English Classes
  • Charity
  • Training Compensation

Senior Database Engineer

We’re looking for a skilled Data Reliability Engineer to join our team for a cli...
Location:
United States
Salary:
Not provided
Zoolatech
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience in Data Engineering, Database Reliability, or Infrastructure Operations
  • Strong expertise in PostgreSQL on AWS, including tuning, replication, backups, and HA configurations
  • Experience operating RDBMS databases (PostgreSQL, MySQL, etc.) and Kubernetes technologies is highly desirable
  • Experience provisioning and operating NoSQL databases at scale like Elasticsearch, Elastic Cache, DynamoDB, Neo4j, Mongo, Cassandra, etc.
  • Advanced SQL scripting and query optimization skills
  • Experience with data systems monitoring, alerting, and performance tuning
  • Strong programming/scripting in Java, Python, or Shell
  • Proven experience in designing or supporting complex data ecosystems
  • Solid understanding of cloud infrastructure (preferably AWS) and Infrastructure as Code tools (Terraform)
  • Familiarity with event streaming platforms (Kafka), and observability stacks (New Relic, ELK, etc.)
Job Responsibility:
  • Own and optimize the reliability, availability, and performance of data infrastructure across production systems
  • Lead the design and implementation of resilient, secure, and observable data systems
  • Collaborate with SRE, Security, and Engineering teams to enforce data infrastructure standards and align on architectural decisions
  • Design and implement automation around provisioning, uptime monitoring, data refresh, integrity, backups, and disaster recovery
  • Support application developers with performance tuning, complex query optimization, and database design reviews
  • Analyze and resolve performance bottlenecks and incidents with a focus on long-term solutions
  • Participate in on-call rotation to support production systems and ensure high availability
  • Actively contribute to improving incident response and observability through metrics, alerting, and runbooks
  • Work with technologies such as Java, Ruby on Rails, PostgreSQL, AWS, Kafka, S3, Elasticsearch
What we offer:
  • Paid Vacation
  • Sick Days
  • Floating Holidays
  • Sport/Insurance Compensation
  • English Classes
  • Charity
  • Training Compensation
Employment Type: Fulltime

Senior Data Engineer - Data Platform

We are looking for a Senior Data Engineer - Data Platform to join our Data & AI ...
Location:
France, Paris
Salary:
Not provided
Doctolib
Expiration Date
Until further notice
Requirements:
  • More than 7 years of experience as a Site Reliability Engineer, DataOps or Data Platform Engineer, or in a similar role, with a proven track record of building and maintaining complex data infrastructures
  • Strong proficiency in data engineering and infrastructure tools and technologies, such as stream and events processing (Kafka, PubSub, Firehose) and Kubernetes
  • Expertise in programming languages like Python
  • Familiar with cloud infrastructure and services, preferably AWS, Azure, or GCP, and have experience with infrastructure-as-code tools such as Terraform
  • Excellent problem-solving skills with a focus on identifying and resolving data infrastructure bottlenecks and performance issues
Job Responsibility:
  • Design and implement a scalable and reliable data infrastructure that supports the collection, processing, storage, and analysis of large-scale datasets while pushing security and privacy best practices
  • Build and maintain data pipelines that efficiently extract, transform, and load data from various sources into our data warehouse
  • Implement automation and orchestration tools to streamline infrastructure provisioning, data workflows, reduce manual effort, and improve operational efficiency
  • Monitor data platform for performance and reliability, identify and troubleshoot issues, and implement proactive solutions to ensure data quality and availability
  • Streamline and monitor platform costs, identify optimizations and saving opportunities while collaborating with data engineers, data scientists, and other stakeholders
What we offer:
  • Free comprehensive health insurance for you and your children
  • Parent Care Program: receive one additional month of leave on top of the legal parental leave
  • Free mental health and coaching services through our partner Moka.care
  • For caregivers and workers with disabilities, a package including an adaptation of the remote policy, extra days off for medical reasons, and psychological support
  • Work from EU countries and the UK for up to 10 days per year, thanks to our flexibility days policy
  • Up to 14 days of RTT
  • A subsidy from the work council to refund part of the membership to a sport club or a creative class
  • Lunch voucher with Swile card
Employment Type: Fulltime