Staff DevOps - Data Platform

Doctolib

Location:
France, Paris

Contract Type:
Not provided

Salary:

Not provided

Job Description:

We are looking for a Staff DevOps - Data Platform to join the Data and ML Platform team. Your mission will be to shape our data platform strategy and architecture, driving enterprise-scale solutions that accelerate machine learning initiatives, enable engineering excellence, and unlock business insights. You will work in a team at the heart of Doctolib's data-driven transformation, enabling innovation through robust, scalable data infrastructure that empowers engineers, AI teams, and business teams across the organization.

Job Responsibility:

  • Design and implement enterprise-scale data infrastructure strategies, conducting thorough impact and cost analysis for major technical decisions, and establishing architectural standards across the organization
  • Build and optimize complex, multi-region data pipelines handling petabyte-scale datasets, ensuring 99.9% reliability and implementing advanced monitoring and alerting systems
  • Lead cost analysis initiatives, identify optimization opportunities across our data stack, and implement solutions that reduce infrastructure spend while improving performance and reliability
  • Provide technical guidance to data engineers and cross-functional teams, conduct architecture reviews, and drive adoption of best practices in DataOps, security, and governance
  • Evaluate emerging technologies, conduct proof-of-concepts for new data tools and platforms, and lead the technical roadmap for data infrastructure modernization

Requirements:

  • 7+ years of post-graduation experience as a Staff Data Platform Engineer, Staff DataOps Engineer, Staff Site Reliability Engineer, or in a similar role, with a history of architecting and scaling robust data platforms
  • Extensive experience with Google Cloud Platform and a command of Kubernetes & Terraform for automated deployments
  • Proven authority in implementing network and IAM security best practices
  • Deep technical proficiency in orchestrating data pipelines using Airflow or Dagster, deploying applications to the cloud, and leveraging modern data warehouses such as BigQuery
  • Highly skilled in Python programming, with a solid understanding of software development principles
  • Excellent troubleshooter who excels at diagnosing and fixing data infrastructure issues and identifying performance bottlenecks
  • Strong communicator who can articulate complex technical concepts to both technical and non-technical audiences

Nice to have:

  • Hands-on experience building and deploying APIs, preferably with frameworks like FastAPI, and skill in designing APIs that provide reliable and efficient access to data
  • Experience applying data governance principles to manage data quality, security, and compliance, including implementing controls that protect sensitive information
  • Experience with CI/CD tools and methodologies for data-related projects, including building automated pipelines that streamline development, testing, and deployment
  • Practical experience with cloud cost optimization and FinOps principles, including experimenting with cost-reduction strategies, analyzing infrastructure spending, and optimizing resource allocation

What we offer:
  • Free comprehensive health insurance for you and your children
  • Parent Care Program: receive one additional month of leave on top of the legal parental leave
  • Free mental health and coaching services through our partner Moka.care
  • For caregivers and workers with disabilities, a package including an adaptation of the remote policy, extra days off for medical reasons, and psychological support
  • Work from EU countries and the UK for up to 10 days per year, thanks to our flexibility days policy
  • Work Council subsidy to refund part of a sport club or creative class membership
  • Up to 14 days of RTT
  • Lunch voucher with Swile card

Additional Information:

Job Posted:
January 15, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Staff DevOps - Data Platform

Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location:
United States, New York; San Francisco
Salary:
170000.00 - 210000.00 USD / Year
Taskrabbit
Expiration Date
Until further notice
Requirements
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support
Employment Type: Fulltime

Software Engineer Sr Staff - Platforms Developer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Bachelor’s or master’s degree in computer science, electronics, telecommunication engineering, or a related discipline
  • 14 to 19 years of experience in networking and system software development
  • Proficiency in C and C++ programming
  • Familiarity with data structures and system debugging techniques
  • Expertise in Host Complex, System Peripherals & Drivers: CPU complex (x86); PCIe, SPI, I2C, MDIO; FPGA, CPLD, Flash Drivers
  • Expertise in Ethernet Interfaces (ranging from 1Gig to 400G+, including 800G, 1.6T), MacSec, Timing, Optics (SFP, QSFP, QDD, OSFP)
  • Expertise in High-speed packet forwarding with network processors, PHYs, and SerDes
  • Cloud Architectures
Job Responsibility
  • Collaborate with product managers, architects, and other engineers to define software requirements and specifications
  • Design, implement, and maintain networking and system software components using C and C++ programming languages
  • Conduct object-oriented analysis and design to ensure robust and scalable solutions
  • Debug complex system-level issues, leveraging your deep understanding of fundamental OS concepts (especially in Linux or similar operating systems)
  • Participate in hardware and system-level design discussions, ensuring carrier-class software development
  • Work with Linux device drivers, system bring-up, and the Linux kernel
  • Navigate large codebases effectively
  • Apply strong technical, analytical, and problem-solving skills to enhance software performance and resilience
  • Utilize scripting technologies and modern DevOps practices
  • Collaborate with cross-functional teams, including networking, embedded platform software, and hardware experts
What we offer
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Fulltime

Data Architect

The Data Architect will be responsible for designing, developing, and implementi...
Location:
United States, Andover, Massachusetts
Salary:
155500.00 - 376000.00 USD / Year
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • 15+ years of relevant experience in the industry delivering technical and business strategy at an advanced/strategist level
  • Bachelor's, Master's, or PhD degree in Computer Science, Information Systems, Engineering, or equivalent
  • Strong understanding of data architecture, cloud infrastructure, and data management technologies
  • Proven experience driving innovations in data solutions and the productization of advanced development activities
  • Must have a track record of architecting, building, and deploying mission-critical, highly distributed, data-centric applications and solutions
  • Experience with at least one major IaaS and/or PaaS technology (OpenStack, AWS, Azure, VMware, etc.), including defining and scripting full topologies
  • Must be able to work in a global, complex, and diverse environment
Job Responsibility
  • Designing, developing, and implementing robust data solutions to support business objectives
  • Collaborating with cloud and data architects to design and set standards for the HPE GreenLake Hybrid Cloud platform and data solution portfolio
  • Driving evaluation of data storage and integration solutions and conducting research on platform behavior under different workloads
  • Designing and implementing data pipelines
  • Optimizing data storage solutions
  • Establishing best practices for data integration and analysis
  • Leading advanced development teams building proof-of-concept implementations for data platforms and solutions
  • Acting as a cross-functional product and technical expert for hybrid cloud and data technologies
  • Providing consultation, design input, and feedback for product development and design reviews across multiple organizations
  • Guiding and mentoring less-experienced staff members
What we offer
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Specific programs catered to career goals
  • Unconditionally inclusive work culture that celebrates individual uniqueness
  • Flexibility to manage work and personal needs
  • Opportunities for professional growth and development
Employment Type: Fulltime

Software Engineer Staff

This Software Engineer Staff will be engaged in data science-related research an...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Utilize analytical and programming skills and open-source systems such as Apache Storm, Apache Spark, Elasticsearch, Cassandra, and graph databases to develop data processing pipelines with the required efficacy and latency
  • Good knowledge and experience of big data tool sets and techniques for distributed storage and computation engines
  • Experience developing reusable and highly scalable data processing components
  • Good knowledge and experience working with cloud-based CI/CD tools and cloud DevOps teams to collect stats and create monitors for data processing pipelines
  • Ability to develop good-quality Python APIs to support microservices
  • Knowledge of APIs to various NoSQL storage systems such as Elasticsearch, Cassandra, and Redis
  • Good understanding of Python Flask web services and the ability to develop good-quality code
  • Ability to troubleshoot production environments and customer-reported issues
  • Knowledge of multi-cloud production environments
  • Agility to troubleshoot open-source data processing engines such as Apache Spark, Apache Storm, and Apache Flink
Job Responsibility
  • Designs, develops, troubleshoots and debugs software programs for software enhancements and new products
  • Develops software including operating systems, compilers, routers, networks, utilities, databases and Internet-related tools
  • Determines hardware compatibility and/or influences hardware design
  • Engaged in data science-related research and software application development and engineering duties related to our enterprise-grade Wi-Fi technology and autonomous platform to provide an unprecedented visibility into the user experience
  • Collaborate with other engineers and product managers to build the next generation of autonomous Wi-Fi networks leveraging big data and predictive models
  • Use knowledge of wireless communication networks, machine learning and software engineering to develop and implement scalable algorithms to process a large amount of streaming data to detect anomalies, predict problems, and classify them in real-time
  • Leverage the data collected from the Wi-Fi network to empower the inference engine of our Mist platform and systems, including the Mist virtual assistant chat bot
  • Determine the likelihood of failures across the Wi-Fi network and performing failure scope analysis
What we offer
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Fulltime

Data Platform Architect

We are seeking a hands-on Data Platform Architect to lead the design, implementa...
Location:
India, Bangalore
Salary:
Not provided
Sandisk
Expiration Date
Until further notice
Requirements
  • 12+ years of hands-on experience in data architecture, engineering, and analytics delivery
  • Proven success in building modern data platforms on cloud (AWS, Azure, GCP)
  • Deep knowledge of data lakehouse architectures (e.g., Databricks, Fabric)
  • Proficiency with Python, SQL, Spark, and orchestration frameworks
  • Experience with ETL/ELT tools (e.g., Informatica, Talend, Fivetran) and containerization (Docker, Kubernetes)
  • Strong background in Data Modeling (ERD, star/snowflake, canonical models)
  • Familiarity with REST APIs, GraphQL, and event-driven design
  • Demonstrated experience integrating AI/ML and GenAI components into data platforms
Job Responsibility
  • Define and continuously evolve the target data architecture across the stack: governance, engineering, modeling, lakehouse, AI/ML
  • Translate business and technical goals into scalable and resilient platform designs
  • Own and maintain architectural roadmaps, standards, and decision frameworks
  • Act as the bridge between architects, Business SME/Analysts, data engineers, and analytics teams to ensure alignment and compliance with platform standards
  • Design and implement modern ELT/ETL pipelines using tools like Spark, Python, SQL, Scala, and cloud-native components (e.g., Fivetran, Databricks, Snowflake, BigQuery)
  • Build and maintain Lakehouse platforms using Delta Lake, Iceberg, or equivalent technologies
  • Manage data ingestion from heterogeneous sources including ERP, CRM, IoT, and third-party APIs
  • Guide hands-on development of robust, reusable, and automated data flows
  • Implement and enforce data governance frameworks including data lineage, metadata management, and access controls
  • Partner with Data Stewards and Governance Analysts to catalog data domains, define entities, and ensure SOX compliance
Employment Type: Fulltime

Senior SSE Data Engineer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
Israel, Tel Aviv
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent
  • Typically 6-10 years experience
  • Extensive experience with multiple software systems design tools and languages
  • Excellent analytical and problem solving skills
  • Experience in overall architecture of software systems for products and solutions
  • Designing and integrating software systems running on multiple platform types into overall architecture
  • Evaluating forms and processes for software systems testing and methodology, including writing and execution of test plans, debugging, and testing scripts and tools
  • Excellent written and verbal communication skills; mastery of English and the local language
  • Ability to effectively communicate product architectures, design proposals and negotiate options at senior management levels
Job Responsibility
  • Leads multiple project teams of other software systems engineers and internal and outsourced development partners responsible for all stages of design and development for complex products and platforms, including solution design, analysis, coding, testing, and integration
  • Manages and expands relationships with internal and outsourced development partners on software systems design and development
  • Reviews and evaluates designs and project activities for compliance with systems design and development guidelines and standards; provides tangible feedback to improve product quality and mitigate failure risk
  • Provides domain-specific expertise and overall software systems leadership and perspective to cross-organization projects, programs, and activities
  • Drives innovation and integration of new technologies into projects and activities in the software systems design organization
  • Provides guidance and mentoring to less-experienced staff members
What we offer
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Fulltime

Adjunct lecturer

This is a contract teaching position for a bootcamp style programme. Selected ca...
Location:
Singapore
Salary:
Not provided
Generation UK & Ireland
Expiration Date
Until further notice
Requirements
  • Alignment with Generation mission and values
  • Have an interest in working with disconnected communities, commit to and empathize with people of all ages
  • Work successfully in a fast-paced, start-up environment
  • Work independently with limited oversight and seek assistance when needed
  • Excellent verbal and written communication skills
  • Excellent organizational and time-management abilities
  • Fulfill the expectations of the instructor role inside and outside of the classroom
  • Follow the overarching structure and flow of a curriculum
  • Engage participants in active thinking and participation
  • Adapt their communication style to reflect and connect with the diverse experiences of participants, such as delivering instruction that is rigorous, relevant, and appropriate for adults
Job Responsibility
  • Delivery of Training (60%) under the guidance of the Generation Curriculum & Instruction Manager
  • Follow the overarching structure and flow of your programme’s curriculum
  • Prepare for effective delivery by understanding and personalizing session plans prior to class
  • Engage participants in active thinking and participation
  • Deliver instruction that is rigorous, relevant, and appropriate for adults
  • Leverage subject matter expertise during synchronous delivery of online sessions
  • Guide students through asynchronous (independent) learning modules and help solidify understanding, clarify concepts through debriefs and synchronous moments
  • Differentiate instruction to meet individual learning needs and accommodate different learning styles
  • Provide in-the-moment feedback to learners to clarify misunderstandings and/or encourage critical thinking
  • Provide relevant and timely feedback to learners on formative and summative assessments, including student projects
Employment Type: Parttime

Bigdata Support Lead Engineer

The Lead Data Analytics analyst is responsible for managing, maintaining, and op...
Location:
India, Bengaluru; Chennai
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements
  • 10+ years of total IT experience
  • 5+ years of experience in supporting Hadoop (Cloudera)/big data technologies
  • 5+ years of experience in public cloud infrastructure (AWS or GCP)
  • Experience with Kubernetes and cloud-native technologies
  • Experience with all aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Advanced knowledge of the Hadoop ecosystem and Big Data technologies
  • Hands-on experience with the Hadoop eco-system (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
  • Knowledge of troubleshooting techniques for Hive, Spark, YARN, Kafka, and HDFS
  • Advanced Linux system administration and scripting skills (Shell, Python)
  • Experience designing and developing data pipelines for data ingestion or transformation using Spark with Java, Scala, or Python
Job Responsibility
  • Lead day-to-day operations and support for Cloudera Hadoop ecosystem components (HDFS, YARN, Hive, Impala, Spark, HBase, etc.)
  • Troubleshoot issues related to data ingestion, job failures, performance degradation and service unavailability
  • Monitor cluster health using Cloudera Manager and respond to alerts, logs, and metrics
  • Collaborate with engineering teams to analyze root causes and implement preventive measures
  • Coordinate patching, service restarts, failovers, and rolling restarts for cluster maintenance
  • Assist with user onboarding, access control, and issues accessing cluster services
  • Contribute to documentation for knowledge base
  • Work on data recovery, replication, and backup support tasks
  • Responsible for moving all legacy workloads to cloud platform
  • Ability to research and assess open-source technologies and public cloud (AWS/GCP) tech stack components to recommend and integrate into the design and implementation
Employment Type: Fulltime