Senior Data Engineer

Blis

Location:
Edinburgh, United Kingdom

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Come work on fantastically high-scale systems with us! Blis is an award-winning global leader and technology innovator in big data analytics and advertising. We help brands such as McDonald's, Samsung, and Mercedes-Benz understand and effectively reach their best audiences. We are looking for solid, experienced Data Engineers to build secure, automated, scalable pipelines on GCP. We receive over 350GB of data an hour and respond to 400,000 decision requests each second, with petabytes of analytical data to work with. We tackle challenges across almost every major discipline of data science, including classification, clustering, optimisation, and data mining.

You will be responsible for building stable, production-grade pipelines that maximise the efficiency of cloud compute and ensure data is properly prepared for operational and scientific use. This is a growing team with big responsibilities and exciting challenges ahead as we look to reach the next 10x level of scale and intelligence.

Job Responsibility:

  • Design, build, monitor, and support large-scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to drive innovation and support commercial and technical growth
  • Work closely with Product, and be comfortable taking, making, and delivering against fast-paced decisions to delight our customers

Requirements:

  • 5+ years' direct experience delivering robust, performant data pipelines within the constraints of strict SLAs and commercial cost footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and supporting technologies, e.g. Apache Airflow (see the first sketch after this list)
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques (see the second sketch after this list)
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark)
  • Fluency with the Python libraries typical of data science, e.g. pandas, scikit-learn, SciPy, NumPy, MLlib, and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • A professional approach to managing and reporting on tasks, ensuring appropriate levels of documentation, testing, and assurance around solutions
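
For illustration, here is a minimal sketch of the kind of GCP-native Airflow pipeline the requirement above alludes to. Every name in it (DAG id, bucket, project, dataset, table) is hypothetical, and it assumes Airflow 2.4+ with the apache-airflow-providers-google package installed; it is a sketch of the technique, not Blis's actual pipeline.

```python
# Hypothetical daily DAG: load the previous day's raw event files from
# Cloud Storage into a date-partitioned BigQuery table.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_events_to_bq",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # {{ ds }} / {{ ds_nodash }} are Airflow's built-in execution-date macros.
    load_events = GCSToBigQueryOperator(
        task_id="load_events",
        bucket="example-raw-events",  # hypothetical bucket
        source_objects=["events/{{ ds }}/*.parquet"],
        destination_project_dataset_table=(
            "example_project.analytics.events${{ ds_nodash }}"  # hypothetical table
        ),
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
    )
```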
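A second minimal sketch, this time of the data cleansing and validation fluency called for above; the pandas column names and rules are invented for illustration and do not reflect Blis's actual schema.

```python
# A hypothetical cleansing/validation pass over raw event data with pandas.
import pandas as pd

def clean_events(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Normalise types; unparseable values become NaT/NaN rather than errors.
    df["ts"] = pd.to_datetime(df["ts"], errors="coerce", utc=True)
    df["lat"] = pd.to_numeric(df["lat"], errors="coerce")
    df["lon"] = pd.to_numeric(df["lon"], errors="coerce")
    # Drop rows that fail basic validation.
    df = df.dropna(subset=["ts", "device_id"])
    # Keep only coordinates that are physically possible.
    df = df[df["lat"].between(-90, 90) & df["lon"].between(-180, 180)]
    # De-duplicate repeated events from the same device at the same instant.
    return df.drop_duplicates(subset=["device_id", "ts"])

# Usage: clean = clean_events(pd.read_parquet("events.parquet"))
```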

Nice to have:

  • Experience optimizing both code and config in Spark, Hive, or similar tools
  • Practical experience working with relational databases, including advanced operations such as partitioning and indexing
  • Knowledge and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems (see the sketch after this list)
  • Understanding and ability to innovate, apply, and optimize complex algorithms and statistical techniques to large data structures
  • Experience with Python Notebooks, such as Jupyter, Zeppelin, or Google Datalab to analyze, prototype, and visualize data and algorithmic output
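
A minimal sketch of the kind of data-centric BigQuery work mentioned above, using the google-cloud-bigquery client library; the project, dataset, table, and column names are hypothetical.

```python
# Hypothetical parameterised query: top devices by impressions for one day.
import datetime

from google.cloud import bigquery

client = bigquery.Client()  # uses Application Default Credentials

query = """
    SELECT device_id, COUNT(*) AS impressions
    FROM `example_project.analytics.events`
    WHERE event_date = @day
    GROUP BY device_id
    ORDER BY impressions DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("day", "DATE", datetime.date(2025, 12, 1))
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.device_id, row.impressions)
```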

Additional Information:

Job Posted:
December 06, 2025

Work Type:
Hybrid work