DataOps Engineer

Hivex

Location:
Not provided

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are looking for a DataOps Engineer to lead database performance management for a SaaS health-tech company that helps people find affordable medicine. In this leadership role, you will be responsible for the data operations of a fast-growing SaaS product and a data management environment with high performance and security requirements. We are looking for a passion for automation and observability, the ability to motivate and lead database performance engineering, and deep knowledge of database management and programming.

Job Responsibility:

  • Define and build an automated database performance engineering process and framework
  • Collect and manage deterministic, well-known, and representative test sets
  • Optimize database performance using configuration, best practices, and effective models
  • Engage with developers to collaborate on requirements and performance engineering
  • Create and manage ETL processes
  • Detect and respond to operational and customer problems

Requirements:

  • 3.5+ years of professional database management, development, and/or DataOps experience in a SaaS product environment
  • Experience in database performance engineering for large-scale systems through high growth
  • Experience leading data quality management activities
  • Ability to collaborate with Java and Python developers on best practices for database performance and data quality
  • Deep knowledge of database internals and best practices for transactional and analytical processing
  • Ability to problem-solve collaboratively and independently

Additional Information:

Job Posted:
December 09, 2025


Similar Jobs for DataOps Engineer

Senior Data Engineer

Our Senior Data Engineers enable public sector organisations to embrace a data-d...
Location:
United Kingdom, Bristol; London; Manchester; Swansea
Salary:
60000.00 - 80000.00 GBP / Year
Made Tech
Expiration Date:
Until further notice
Requirements:
  • Enthusiasm for learning and self-development
  • Proficiency in Git (incl. GitHub Actions) and the ability to explain the benefits of different branch strategies
  • Gathering and meeting the requirements of both clients and users on a data project
  • Strong experience in IaC and able to guide how one could deploy infrastructure into different environments
  • Owning the cloud infrastructure underpinning data systems through a DevOps approach
  • Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop
  • Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouse, Data Lakes and Data Meshes) and the different use cases for them
  • Ability to create data pipelines in a cloud environment and integrate error handling within these pipelines, with an understanding of how to create reusable libraries to encourage a uniform approach across multiple data pipelines
  • Ability to document and present an end-to-end diagram to explain a data processing system in a cloud environment, with some knowledge of diagramming conventions (C4, UML, etc.)
  • Ability to provide guidance on implementing a robust DevOps approach in a data project, and to discuss the tools needed for DataOps in areas such as orchestration, data integration, and data analytics
Job Responsibility:
  • Enable public sector organisations to embrace a data-driven approach by providing data platforms and services that are high-quality, cost-efficient, and tailored to clients’ needs
  • Develop, operate, and maintain these services
  • Provide maximum value to data consumers, including analysts, scientists, and business stakeholders
  • Play one or more roles according to our clients' needs
  • Support as a senior contributor for a project, focusing on both delivering engineering work as well as upskilling members of the client team
  • Play more of a technical architect role and work with the larger Made Tech team to identify growth opportunities within the account
  • Have a drive to deliver outcomes for users
  • Make sure that the wider context of a delivery is considered and maintain alignment between the operational and analytical aspects of the engineering solution
What we offer:
  • 30 days of paid annual leave + bank holidays
  • Flexible Parental Leave
  • Part time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events
Contract Type: Fulltime

Junior Data Infrastructure Engineer

As part of the Data Infrastructure team you will be supporting mission critical ...
Location:
United Kingdom, Brighton
Salary:
Not provided
Brandwatch
Expiration Date:
Until further notice
Requirements:
  • An interest in how computer infrastructure actually works, and a passion for learning
  • Interest in, and ideally production experience of, running storage systems, e.g. as part of a self-hosted service, a home lab, or academic studies
  • Experience with Linux systems administration, including experience of troubleshooting
  • Fluency with one or more scripting languages, ideally Bash or Python
  • Experience helping your peers
  • Pride in the quality of your work
Job Responsibility:
  • Supporting mission-critical big data platforms to ensure they are fully performant, reliable, available, and secure
  • Development of tooling and operational support for our platforms
  • Help with staging support
  • Join the team supporting the production systems
  • Take a full part in the life of the team
  • Start designing the infrastructure we run

Azure DataOps Data Engineer – II

We are seeking an Azure DataOps Data Engineer – II with strong hands-on experien...
Location:
India, Gurgaon
Salary:
Not provided
Rackspace
Expiration Date:
Until further notice
Requirements:
  • 3–5 years of experience in Data Engineering / DataOps roles
  • Strong hands-on experience with Azure Databricks (PySpark, Spark SQL, Delta Lake)
  • Azure Data Factory (ADF) – pipelines, triggers, parameters, monitoring
  • Azure Data Lake Storage (ADLS Gen2)
  • Good understanding of ETL/ELT frameworks, batch and incremental processing
  • Strong SQL skills for data analysis and troubleshooting
  • Experience with production support, incident management, and SLA-driven environments
  • Familiarity with monitoring tools (Azure Monitor, Log Analytics, alerts)
  • Understanding of Azure security concepts (RBAC, Managed Identity, Key Vault)
  • Willingness to work in a rotational shift / on-call support model as part of a global operations team
Job Responsibility:
  • Support production data platforms, ensuring high availability, reliability, and performance
  • Monitor data pipelines and jobs, proactively identifying and resolving failures, performance issues, and data discrepancies
  • Perform root cause analysis (RCA) for incidents and implement preventive measures
  • Implement DataOps best practices including automation, monitoring, alerting, and operational dashboards
  • Collaborate with cross-functional teams to support reporting, analytics, and downstream consumption
  • Maintain documentation for pipelines, operational runbooks, and support procedures
  • Participate in on-call and rotational shift support, including weekends or night shifts as required
Contract Type: Fulltime

Senior DataOps Engineer

Drive optimisations, upgrades and maintenance of a Kubernetes based data and mod...
Location:
Not provided
Salary:
Not provided
SNI sp. z o.o.
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience as a DataOps Engineer or in a similar role covering most of the required skills
  • Expertise in Cloud architecture and key technologies (Kubernetes, Airflow, Managed Airflow)
  • Expertise in modern development tools and practices (e.g. CI/CD, DevOps, Observability, Pair Programming, TDD)
  • Knowledge of infrastructure-as-code tools (CloudFormation)
  • Experience with databases (Redshift)
  • Proficiency in a programming language (Python)
  • Expertise in choosing and applying design patterns
  • Developing software with scale, security, and reliability in mind
  • Knowledge of software development principles, design patterns and best practices
  • Test Driven Development and testing practices
Job Responsibility:
  • Drive optimisations, upgrades and maintenance of a Kubernetes based data and modelling platform
  • Support access management, fielding questions around Airflow and making minor feature enhancements
  • Assist with migration of data pipelines
Contract Type: Fulltime

Staff DataOps Engineer

We are looking for a Staff DataOps Engineer to join the Data and M...
Location:
France, Paris
Salary:
Not provided
Doctolib
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience after graduation as a Staff Data Platform Engineer, Staff DataOps, Staff Site Reliability Engineer, or in a similar role, with a history of architecting and scaling robust data platforms
  • Extensive experience with Google Cloud Platform and a command of Kubernetes & Terraform for automated deployments, and you are an authority on implementing network and IAM security best practices
  • Deep technical proficiency in orchestrating data pipelines using Airflow or Dagster, deploying applications to the cloud, and leveraging modern data warehouses such as BigQuery
  • Highly skilled in programming with Python, and have a solid understanding of software development principles
  • Excellent troubleshooter who excels at diagnosing and fixing data infrastructure and identifying performance bottlenecks, and a strong communicator who can articulate complex technical concepts to both technical and non-technical audiences
Job Responsibility:
  • Design and implement enterprise-scale data infrastructure strategies, conducting thorough impact and cost analysis for major technical decisions, and establishing architectural standards across the organization
  • Build and optimize complex, multi-region data pipelines handling petabyte-scale datasets, ensuring 99.9% reliability and implementing advanced monitoring and alerting systems
  • Lead cost analysis initiatives, identify optimization opportunities across our data stack, and implement solutions that reduce infrastructure spend while improving performance and reliability
  • Provide technical guidance to data engineers and cross-functional teams, conduct architecture reviews, and drive adoption of best practices in DataOps, security, and governance
  • Evaluate emerging technologies, conduct proof-of-concepts for new data tools and platforms, and lead the technical roadmap for data infrastructure modernization
What we offer:
  • Free comprehensive health insurance for you and your children
  • 25 days of paid vacation per year, plus up to 14 days of RTT
  • Free mental health and coaching services through our partner Moka.care
  • Work from abroad for up to 10 days per year thanks to our flexibility days policy
  • Lunch vouchers (Swile card) worth €8.50 per working day, with €4.50 covered by Doctolib
  • A subsidy from the works council to refund part of a membership to a sports club or a creative class
  • 50% reimbursement of your public transport subscription
  • Parent Care Program: receive one additional month of leave on top of the legal parental leave
  • For caregivers and workers with disabilities, a package including an adaptation of the remote policy, extra days off for medical reasons, and psychological support
  • Relocation support in case of international mobility
Contract Type: Fulltime

Graduate Data Engineer

As a Graduate Data Engineer, you will build and maintain scalable data pipelines...
Location:
United Kingdom, Marlow
Salary:
Not provided
SRG
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Engineering, Mathematics, or a similar field, or equivalent work experience
  • Up to 2 years of experience building data pipelines at work or through internships
  • Can write clear and reliable Python/PySpark code
  • Familiar with popular analytics tools (like pandas, numpy, matplotlib), big data frameworks (like Spark), and cloud services (like Palantir, AWS, Azure, or Google Cloud)
  • Deep understanding of data models, relational and non-relational databases, and how they are used to organize, store, and retrieve data efficiently for analytics and machine learning
  • Knowledge about software engineering methods, including DevOps, DataOps, or MLOps is a plus
  • Master's degree in engineering (such as AI/ML, Data Systems, Computer Science, Mathematics, Biotechnology, Physics), or minimum 2 years of relevant technology experience
  • Experience with Generative AI (GenAI) and agentic systems will be considered a strong plus
  • Have a proactive and adaptable mindset: willing to take initiative, learn new skills, and contribute to different aspects of a project as needed to drive solutions from start to finish, even beyond the formal job description
  • Show a strong ability to thrive in situations of ambiguity, taking initiative to create clarity for yourself and the team, and proactively driving progress even when details are uncertain or evolving
Job Responsibility:
  • Build and maintain data pipelines, leveraging PySpark and/or Typescript within Foundry, to transform raw data into reliable, usable datasets
  • Assist in preparing and optimizing data pipelines to support machine learning and AI model development, ensuring datasets are clean, well-structured, and readily usable by Data Science teams
  • Support the integration and management of feature engineering processes and model outputs into Foundry's data ecosystem, helping enable scalable deployment and monitoring of AI/ML solutions
  • Engage in gathering and translating stakeholder requirements for key data models and reporting, with a focus on Palantir Foundry workflows and tools
  • Participate in developing and refining dashboards and reports in Foundry to visualize key metrics and insights
  • Collaborate with Product, Engineering, and GTM teams to align data architecture and solutions, learning to support scalable, self-serve analytics across the organization
  • Apply prompt engineering with large language models, including writing and evaluating complex multi-step prompts
  • Continuously develop your understanding of the company's data landscape, including Palantir Foundry's ontology-driven approach and best practices for data management

Data Analytics Engineer

SDG Group is expanding its global Data & Analytics practice and is seeking a mot...
Location:
Egypt, Cairo
Salary:
Not provided
SDG
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field
  • Hands-on experience in DataOps / Data Engineering
  • Strong knowledge of Databricks or Snowflake (one of them is mandatory)
  • Proficiency in Python and SQL
  • Experience with Azure data ecosystem (ADF, ADLS, Synapse, etc.)
  • Understanding of CI/CD practices and DevOps for data.
  • Knowledge of data modeling, orchestration frameworks, and monitoring tools
  • Strong analytical and troubleshooting skills
  • Eagerness to learn and grow in a global consulting environment
Job Responsibility:
  • Design, build, and maintain scalable and reliable data pipelines following DataOps best practices
  • Work with modern cloud data stacks using Databricks (Spark, Delta Lake) or Snowflake (Snowpipe, tasks, streams)
  • Develop and optimize ETL/ELT workflows using Python, SQL, and orchestration tools
  • Work with Azure data services (ADF, ADLS, Azure SQL, Azure Functions)
  • Implement CI/CD practices using Azure DevOps or Git-based workflows
  • Ensure data quality, consistency, and governance across all delivered data solutions
  • Monitor and troubleshoot pipelines for performance and operational excellence
  • Collaborate with international teams, architects, and analytics consultants
  • Contribute to technical documentation and solution design assets
What we offer:
  • Remote working model aligned with international project needs
  • Opportunity to work on European and global engagements
  • Mentorship and growth paths within SDG Group
  • A dynamic, innovative, and collaborative environment
  • Access to world-class training and learning platforms
Contract Type: Fulltime

DataOps Engineer

At Paymentology, we’re redefining what’s possible in the payments space. As the ...
Location:
Not provided
Salary:
Not provided
Paymentology
Expiration Date:
Until further notice
Requirements:
  • 3-5 years of hands-on experience in DevOps, Platform Engineering, or DataOps roles
  • Experience supporting or contributing to data platforms or data infrastructure projects
  • Hands-on proficiency with Infrastructure as Code, particularly Terraform
  • Experience working with AWS or GCP and common cloud architecture patterns
  • Practical experience or strong understanding of Kubernetes and containerised workloads
  • Familiarity with observability tooling across monitoring, logging, metrics, and alerting
  • Strong scripting skills in Python, Bash, or GoLang to automate operational processes
  • Excellent problem-solving skills and the ability to work effectively in a collaborative, fully remote environment
  • A strong inclination to develop DataOps and MLOps knowledge and capabilities
Job Responsibility:
  • Design and implement cloud infrastructure for a modern data platform using Infrastructure as Code, with a strong focus on scalability, security, and reliability
  • Build and maintain CI/CD pipelines that support data engineering workflows and infrastructure deployments
  • Implement and operate observability solutions including monitoring, logging, metrics, and alerting to ensure platform reliability and fast incident response
  • Collaborate closely with data engineers to translate platform and workflow requirements into robust infrastructure solutions
  • Apply best practices for availability, disaster recovery, and cost efficiency, while documenting infrastructure patterns and operational procedures
Contract Type: Fulltime