Data Engineer (Databricks & PySpark)
Role Type: 6-Month Contract (Subcon)
Location: Bracknell, Hybrid (2 days/week in-office)
Role Overview: We are seeking a mid-to-senior Data Engineer to build, optimize, and maintain scalable data pipelines in a cloud-based distributed computing environment. You will act as an expert data wrangler, ensuring an optimal data delivery architecture for software developers, data analysts, and data scientists.
Job Responsibilities:
Develop and operationalize reliable ETL/ELT pipelines for large, complex datasets
Assemble and transform semi-structured data (JSON) into actionable insights
Perform logical data modeling, physical database optimization, and security implementation
Improve data integrity by implementing automated Data Quality checks
Provide On-Call support to unblock users and resolve high-severity pipeline issues
Collaborate with Data Science teams to streamline data delivery for advanced analytics
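To give a flavour of the JSON-flattening work described above: in a Databricks pipeline this would typically be done with PySpark (e.g. `explode` and nested column access), but the transformation logic can be sketched in plain Python. All field names and the sample payload below are hypothetical, purely for illustration:

```python
import json

# Hypothetical sample of semi-structured event data (illustrative only)
raw = """
[{"user": {"id": 1, "name": "Ada"}, "events": [{"type": "click", "ts": 100}]},
 {"user": {"id": 2, "name": "Bob"}, "events": [{"type": "view",  "ts": 200},
                                               {"type": "click", "ts": 201}]}]
"""

def flatten(records):
    """Explode nested user/events JSON into flat rows, one row per event."""
    rows = []
    for rec in records:
        for ev in rec["events"]:
            rows.append({
                "user_id": rec["user"]["id"],
                "user_name": rec["user"]["name"],
                "event_type": ev["type"],
                "event_ts": ev["ts"],
            })
    return rows

rows = flatten(json.loads(raw))
```

In PySpark the same shape is usually achieved declaratively, e.g. `df.select("user.id", F.explode("events").alias("event"))`, which scales the identical logic across a cluster.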
Requirements:
5+ years in Data Engineering and Pipeline Operationalization
3+ years of hands-on experience with Databricks and PySpark
3+ years of experience within AWS ecosystems
3+ years of experience processing JSON and complex semi-structured data