Senior DevOps Engineer - Data Platform

Doctolib

Location:
France, Paris

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are looking for a Senior DevOps Engineer - Data Platform to join the Data and AI Platform team. As a Senior DevOps Engineer focused on our Data Platform, your mission will be to improve and maintain an efficient, reliable, and scalable platform that enables Data Product developers and owners to develop, deploy, and maintain their data products autonomously, at scale, with clear, well-maintained interfaces and full observability. The platform ensures seamless data flow across the organization and enables data-driven decision-making.

Job Responsibility:

  • Maintain the Data Product Controller so that stakeholders can manage their data products in an automated way (via CI/CD and infrastructure as code), securely, reliably, and with governance and full ownership
  • Maintain the Data and AI Platform orchestrator (Dagster) so that Data Product developers can orchestrate their Data Products in a decentralized way (Data Mesh), owning their release process and job pipelines (see the sketch after this list)
  • Monitor the data platform for performance and reliability, identify and troubleshoot issues, and implement proactive solutions to ensure availability
  • Provide observability components that give developer teams and data product consumers the right level of insight into costs, data quality, and data lineage
  • Monitor platform costs and identify optimization and savings opportunities, collaborating with data engineers, data scientists, and other stakeholders
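
As a rough illustration of the orchestration model described above, here is a minimal Dagster sketch in which one data product team owns its own assets, job, and schedule, and can ship them independently. All names (raw_bookings, bookings_per_day, the cron schedule) are hypothetical, not Doctolib's actual setup.

```python
# A minimal, hypothetical Dagster setup: one data product team owns its
# own assets, job, and schedule, so releases stay decentralized.
import pandas as pd
from dagster import Definitions, ScheduleDefinition, asset, define_asset_job

@asset
def raw_bookings() -> pd.DataFrame:
    # In a real data product this would read from a governed source.
    return pd.DataFrame({"day": ["2026-02-16", "2026-02-16", "2026-02-17"]})

@asset
def bookings_per_day(raw_bookings: pd.DataFrame) -> pd.DataFrame:
    # Dagster infers the dependency from the parameter name.
    return raw_bookings.groupby("day").size().reset_index(name="count")

# The team ships its own Definitions object with its own release cadence.
daily_job = define_asset_job("daily_bookings_job", selection="*")
defs = Definitions(
    assets=[raw_bookings, bookings_per_day],
    schedules=[ScheduleDefinition(job=daily_job, cron_schedule="0 6 * * *")],
)
```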

Requirements:

  • More than 5 years of experience as a DataOps Engineer or in a similar role, with a proven track record of building and maintaining complex data infrastructures
  • Strong proficiency in data engineering and infrastructure tools and technologies (Kubernetes, ArgoCD, Crossplane)
  • Expertise in programming languages like Python
  • Familiarity with cloud infrastructure and services, preferably GCP, and experience with infrastructure-as-code tools such as Terraform
  • Excellent problem-solving skills with a focus on identifying and resolving data infrastructure bottlenecks and performance issues

Nice to have:

  • Knowledge of data governance principles and best practices for data security
  • Experience with continuous integration and continuous delivery (CI/CD) pipelines for data

What we offer:

  • Free comprehensive health insurance for you and your children
  • Parent Care Program: receive one additional month of leave on top of the legal parental leave
  • Free mental health and coaching services through our partner Moka.care
  • For caregivers and workers with disabilities, a package including an adaptation of the remote policy, extra days off for medical reasons, and psychological support
  • Work from EU countries and the UK for up to 10 days per year, thanks to our flexibility days policy
  • Up to 14 days of RTT
  • A subsidy from the works council covering part of the membership to a sports club or a creative class
  • Lunch voucher with Swile card

Additional Information:

Job Posted:
February 17, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Senior DevOps Engineer - Data Platform

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice

Requirements:

  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g. applying cutting-edge data engineering methods to improve data integration performance, using new tools to improve data quality, etc.)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience deploying code artifacts into production

Job Responsibility:

  • Contribute to the development of the D&A platform and analytical tools, ensuring easy and standardized access to and sharing of data
  • Subject matter expert for Azure Databricks, Azure Data Factory and ADLS
  • Help design, build and maintain data pipelines (accelerators); see the sketch after this list
  • Document and make available the relevant know-how and standards
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines and standards
  • Support understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility on data quality and data assets on the D&A platform
  • Identify opportunities to improve data assets and the D&A platform toolchain
  • Work closely with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
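
To make the pipeline work above concrete, here is a minimal PySpark sketch of the kind of batch step that runs on Azure Databricks. The mount paths and column names are illustrative assumptions, not IKEA's actual layout.

```python
# A minimal, hypothetical PySpark batch step of the kind that runs on
# Azure Databricks; paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_accelerator").getOrCreate()

# Ingest raw data from the lake (mount path is an assumption).
orders = spark.read.option("header", "true").csv("/mnt/raw/orders.csv")

# Cheap cleansing plus a derived partition column, a typical ETL step.
cleaned = (
    orders.dropna(subset=["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Publish a curated table for downstream BI and ML consumers.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders")
```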

What we offer:

  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning

Senior Software Engineer - Transactional Data Platform

As a Senior Software Engineer, you will play a critical role in designing, build...
Location:
Australia, Sydney
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice

Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field
  • 5+ years of experience in backend software development
  • 3+ years of hands-on experience working with AWS cloud services, particularly AWS storage technologies (S3, DynamoDB, EBS, EFS, FSx, or Glacier)
  • 3+ years of experience in designing and developing distributed systems or high-scale backend services
  • Strong programming skills in Kotlin
  • Experience working in agile environments following DevOps and CI/CD best practices
  • Strong backend development skills
  • Proficiency in Kotlin and Java for backend development
  • Experience building high-performance, scalable microservices and APIs
  • Strong understanding of RESTful APIs, gRPC, and event-driven architectures

Job Responsibility:

  • Designing, building, and optimizing high-performance, scalable, and resilient backend storage solutions on AWS cloud infrastructure
  • Developing distributed storage systems, APIs, and backend services that power mission-critical applications, ensuring low-latency, high-throughput, and fault-tolerant data storage
  • Collaborating closely with principal engineers, architects, SREs, and product teams to define technical roadmaps, improve storage efficiency, and optimize access patterns
  • Driving performance tuning, data modeling, caching strategies, and cost optimization across AWS storage services like S3, DynamoDB, EBS, EFS, FSx, and Glacier (see the sketch after this list)
  • Contributing to infrastructure automation, security best practices, and monitoring strategies using tools like Terraform, CloudWatch, Prometheus, and OpenTelemetry
  • Troubleshooting and resolving production incidents related to data integrity, latency spikes, and storage failures, ensuring high availability and disaster recovery preparedness
  • Mentoring junior engineers, participating in design reviews and architectural discussions, and advocating for engineering best practices such as CI/CD automation, infrastructure as code, and observability-driven development
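
As a small illustration of the AWS storage patterns named above, here is a hypothetical Python sketch using boto3. Bucket and table names are invented; the actual Atlassian services are Kotlin, so this is only meant to show the S3 and DynamoDB access patterns involved.

```python
# Hypothetical boto3 calls against S3 and DynamoDB; bucket and table
# names are invented for illustration.
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Durable object storage: write, then read back, one object.
s3.put_object(Bucket="example-platform-bucket", Key="events/e1.json", Body=b'{"ok": true}')
obj = s3.get_object(Bucket="example-platform-bucket", Key="events/e1.json")
print(obj["Body"].read())

# Low-latency key-value storage: a conditional write protects data
# integrity when concurrent writers race on the same key.
table = dynamodb.Table("example-events")
table.put_item(
    Item={"pk": "e1", "payload": "{}"},
    ConditionExpression="attribute_not_exists(pk)",  # idempotent insert
)
```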

What we offer:

  • Atlassians can choose where they work – whether in an office, from home, or a combination of the two
  • Flexibility for eligible candidates to work remotely across the West US

Senior SSE Data Engineer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
Israel, Tel Aviv
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice

Requirements:

  • Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent
  • Typically 6-10 years of experience
  • Extensive experience with multiple software systems design tools and languages
  • Excellent analytical and problem solving skills
  • Experience in overall architecture of software systems for products and solutions
  • Designing and integrating software systems running on multiple platform types into overall architecture
  • Evaluating forms and processes for software systems testing and methodology, including writing and execution of test plans, debugging, and testing scripts and tools
  • Excellent written and verbal communication skills
  • Mastery of English and the local language
  • Ability to effectively communicate product architectures, design proposals and negotiate options at senior management levels

Job Responsibility:

  • Leads multiple project teams of other software systems engineers and internal and outsourced development partners responsible for all stages of design and development for complex products and platforms, including solution design, analysis, coding, testing, and integration
  • Manages and expands relationships with internal and outsourced development partners on software systems design and development
  • Reviews and evaluates designs and project activities for compliance with systems design and development guidelines and standards; provides tangible feedback to improve product quality and mitigate failure risk
  • Provides domain-specific expertise and overall software systems leadership and perspective to cross-organization projects, programs, and activities
  • Drives innovation and integration of new technologies into projects and activities in the software systems design organization
  • Provides guidance and mentoring to less-experienced staff members

What we offer:

  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion

Senior Data Engineer

Come work on fantastically high-scale systems with us! Blis is an award-winning,...
Location:
United Kingdom, Edinburgh
Salary:
Not provided
Blis
Expiration Date:
Until further notice

Requirements:

  • 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and supporting technologies, e.g. Apache Airflow (see the sketch after this list)
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation and composition techniques
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark)
  • Fluency with the appropriate Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • Professional in managing and reporting on tasks, ensuring appropriate levels of documentation, testing and assurance around solutions
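
For readers unfamiliar with the Airflow work mentioned above, here is a minimal sketch using the Airflow 2.x TaskFlow API. The DAG id, schedule, and task bodies are hypothetical; in a GCP deployment the extract and load steps would talk to Pub/Sub, GCS, or BigQuery rather than returning inline data.

```python
# A hypothetical three-step pipeline using the Airflow 2.x TaskFlow API.
import pendulum
from airflow.decorators import dag, task

@dag(
    schedule="@daily",
    start_date=pendulum.datetime(2026, 1, 1, tz="UTC"),
    catchup=False,
)
def example_events_pipeline():
    @task
    def extract() -> list:
        # Stand-in for a Pub/Sub or GCS read.
        return [{"user": "a", "clicks": 3}, {"user": "b", "clicks": 5}]

    @task
    def validate(rows: list) -> list:
        # Fail fast on bad data so SLA breaches surface early.
        bad = [r for r in rows if r["clicks"] < 0]
        if bad:
            raise ValueError(f"negative click counts: {bad}")
        return rows

    @task
    def load(rows: list) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a BigQuery load

    load(validate(extract()))

example_events_pipeline()
```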

Job Responsibility:

  • Design, build, monitor, and support large scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to innovate and support commercial and technical growth
  • Work closely with Product and be comfortable with taking, making and delivering against fast paced decisions to delight our customers

Senior Data Engineer

Come work on fantastically high-scale systems with us! Blis is an award-winning,...
Location:
India, Mumbai
Salary:
Not provided
Blis
Expiration Date:
Until further notice

Requirements:

  • 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and supporting technologies, e.g. Apache Airflow
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation and composition techniques
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark)
  • Fluency with the appropriate Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • Professional in managing and reporting on tasks, ensuring appropriate levels of documentation, testing and assurance around solutions

Job Responsibility:

  • Design, build, monitor, and support large scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to innovate and support commercial and technical growth
  • Work closely with Product and be comfortable with taking, making and delivering against fast paced decisions to delight our customers

Data (DevOps) Engineer

Ivy Partners is a Swiss consulting firm dedicated to helping businesses navigate...
Location:
Switzerland, Genève
Salary:
Not provided
IVY Partners
Expiration Date:
Until further notice

Requirements:

  • Substantial experience with Apache Airflow in complex orchestration and production settings
  • Advanced skills in AWS, Databricks, and Python for data pipelines, MLOps tooling, and automation
  • Proven experience deploying high-volume and sensitive pipelines
  • Confirmed (mid-level) to senior profile
  • Highly autonomous, capable of working in a critical, structuring environment
  • Not just a team player but someone who challenges the status quo, proposes solutions, and elevates the team
  • Clear communication and a strong sense of business urgency

Job Responsibility:

  • Design and maintain high-performance data pipelines
  • Migrate large volumes of historical and operational data to AWS
  • Optimize data flows used by machine learning models for feature creation, time series, and trade signals
  • Ensure the quality, availability, and traceability of critical datasets
  • Collaborate directly with data scientists to integrate, monitor, and industrialize models: price prediction models, optimization algorithms, and automated trading systems
  • Support model execution and stability in production environments utilizing Airflow and Databricks
  • Build, optimize, and monitor Airflow DAGs
  • Automate Databricks jobs and integrate CI/CD pipelines (GitLab/Jenkins); see the sketch after this list
  • Monitor the performance of pipelines and models, and address incidents
  • Deploy robust, secure, and scalable AWS data architectures
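
To illustrate the CI/CD integration mentioned above, here is a hypothetical sketch of a pipeline step that triggers a Databricks job via the Jobs 2.1 REST API. The host, token handling, and job_id are placeholders, not IVY's actual configuration.

```python
# A hypothetical CI/CD step that triggers a Databricks job via the
# Jobs 2.1 REST API; host, token, and job_id are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # injected by GitLab/Jenkins secrets

resp = requests.post(
    f"{host}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {token}"},
    json={"job_id": 123, "notebook_params": {"run_date": "2026-02-17"}},
    timeout=30,
)
resp.raise_for_status()
print("started run:", resp.json()["run_id"])
```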

What we offer:

  • Supportive environment where everyone is valued, with training and career advancement opportunities
  • Building a relationship based on transparency, professionalism, and commitment
  • Encouraging innovation
  • Taking responsibility

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice

Requirements:

  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience in Azure cloud-based infrastructure, Databricks and at least one SQL implementation (e.g., Oracle, T-SQL, MySQL, etc.)
  • Proficiency in programming languages such as SQL, Python and PySpark is essential (R or Scala nice to have)
  • Very good communication skills, including the ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues

Job Responsibility:

  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices

What we offer:

  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Senior Data Engineer

Our Senior Data Engineers enable public sector organisations to embrace a data-d...
Location:
United Kingdom, Bristol; London; Manchester; Swansea
Salary:
60,000 - 80,000 GBP / Year
Made Tech
Expiration Date:
Until further notice

Requirements:

  • Enthusiasm for learning and self-development
  • Proficiency in Git (incl. GitHub Actions), with the ability to explain the benefits of different branch strategies
  • Gathering and meeting the requirements of both clients and users on a data project
  • Strong experience in IaC, able to guide how one could deploy infrastructure into different environments
  • Owning the cloud infrastructure underpinning data systems through a DevOps approach
  • Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop
  • Good understanding of the architectures involved in modern data system design (e.g. Data Warehouses, Data Lakes and Data Meshes) and their different use cases
  • Ability to create data pipelines in a cloud environment and integrate error handling within them, with an understanding of how to create reusable libraries that encourage a uniform approach across multiple data pipelines (see the sketch after this list)
  • Ability to document and present an end-to-end diagram explaining a data processing system in a cloud environment, with some knowledge of diagramming conventions (C4, UML, etc.)
  • Ability to provide guidance on implementing a robust DevOps approach in a data project, and to discuss the tools needed for DataOps in areas such as orchestration, data integration and data analytics
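
As a sketch of the "reusable library" idea from the list above, here is a small, entirely hypothetical Python helper: every pipeline runs its steps through the same wrapper, so logging and retry behaviour stay uniform across pipelines.

```python
# A hypothetical shared helper: every pipeline runs its steps through the
# same wrapper, so logging and retry behaviour stay uniform.
import logging
import time
from typing import Callable, TypeVar

T = TypeVar("T")
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_step(name: str, fn: Callable[[], T], retries: int = 3, backoff_s: float = 2.0) -> T:
    """Run one pipeline step with logging and bounded retries."""
    for attempt in range(1, retries + 1):
        try:
            log.info("step %s: attempt %d", name, attempt)
            return fn()
        except Exception:
            log.exception("step %s failed (attempt %d/%d)", name, attempt, retries)
            if attempt == retries:
                raise
            time.sleep(backoff_s * attempt)
    raise AssertionError("unreachable")

# Usage: two steps from a toy pipeline share the same error handling.
rows = run_step("extract", lambda: [{"id": 1}])
run_step("load", lambda: print(f"loaded {len(rows)} rows"))
```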

Job Responsibility:

  • Enable public sector organisations to embrace a data-driven approach by providing data platforms and services that are high-quality, cost-efficient, and tailored to clients’ needs
  • Develop, operate, and maintain these services
  • Provide maximum value to data consumers, including analysts, scientists, and business stakeholders
  • Play one or more roles according to our clients' needs
  • Support as a senior contributor for a project, focusing on both delivering engineering work as well as upskilling members of the client team
  • Play more of a technical architect role and work with the larger Made Tech team to identify growth opportunities within the account
  • Have a drive to deliver outcomes for users
  • Make sure that the wider context of a delivery is considered and maintain alignment between the operational and analytical aspects of the engineering solution

What we offer:

  • 30 days of paid annual leave + bank holidays
  • Flexible Parental Leave
  • Part time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events