
DataOps Engineer


Hivex

Location:

Contract Type:
Not provided

Salary:

Not provided

Job Description:

We are looking for a DataOps Engineer to lead database performance management for a SaaS health-tech company that helps people find affordable medicine. This leadership role is responsible for the data operations of a fast-growing SaaS product and a data management environment with high performance and security requirements. We are looking for a passion for automation and observability, the ability to motivate and lead database performance engineering, and deep knowledge of database management and programming.

Job Responsibility:

  • Define and build an automated database performance engineering process and framework
  • Collect and manage deterministic, well-known, and representative test sets
  • Optimize database performance using configuration, best practices, and effective models
  • Engage with developers to collaborate on requirements and performance engineering
  • Create and manage ETL processes
  • Detect and respond to operational and customer problems

Requirements:

  • 3.5+ years of professional database management, development, and/or DataOps experience in a SaaS product environment
  • Experience in database performance engineering for large-scale systems through periods of high growth
  • Experience leading data quality management activities
  • Ability to collaborate with Java and Python developers on best practices for database performance and data quality
  • Deep knowledge of database internals and best practices for transactional and analytical processing
  • Ability to problem-solve collaboratively and independently

Additional Information:

Job Posted:
December 09, 2025


Similar Jobs for DataOps Engineer

Senior Data Engineer

Our Senior Data Engineers enable public sector organisations to embrace a data-d...
Location:
United Kingdom, Bristol; London; Manchester; Swansea
Salary:
60000.00 - 80000.00 GBP / Year
Made Tech
Expiration Date
Until further notice
Requirements
  • Enthusiasm for learning and self-development
  • Proficiency in Git (incl. GitHub Actions) and the ability to explain the benefits of different branch strategies
  • Gathering and meeting the requirements of both clients and users on a data project
  • Strong experience in IaC and able to guide how one could deploy infrastructure into different environments
  • Owning the cloud infrastructure underpinning data systems through a DevOps approach
  • Knowledge of handling and transforming various data types (JSON, CSV, etc) with Apache Spark, Databricks or Hadoop
  • Good understanding of the possible architectures involved in modern data system design (e.g. Data Warehouse, Data Lakes and Data Meshes) and the different use cases for them
  • Ability to create data pipelines in a cloud environment and integrate error handling within these pipelines, with an understanding of how to create reusable libraries to encourage uniformity of approach across multiple data pipelines
  • Able to document and present an end-to-end diagram to explain a data processing system on a cloud environment, with some knowledge of how you would present diagrams (C4, UML etc.)
  • Ability to provide guidance on how one would implement a robust DevOps approach in a data project, and to discuss the tools needed for DataOps in areas such as orchestration, data integration, and data analytics
Job Responsibility
  • Enable public sector organisations to embrace a data-driven approach by providing data platforms and services that are high-quality, cost-efficient, and tailored to clients’ needs
  • Develop, operate, and maintain these services
  • Provide maximum value to data consumers, including analysts, scientists, and business stakeholders
  • Play one or more roles according to our clients' needs
  • Support as a senior contributor for a project, focusing on both delivering engineering work as well as upskilling members of the client team
  • Play more of a technical architect role and work with the larger MadeTech team to identify growth opportunities within the account
  • Have a drive to deliver outcomes for users
  • Make sure that the wider context of a delivery is considered and maintain alignment between the operational and analytical aspects of the engineering solution
What we offer
  • 30 days of paid annual leave + bank holidays
  • Flexible Parental Leave
  • Part time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events
  • Fulltime

Junior Data Infrastructure Engineer

As part of the Data Infrastructure team you will be supporting mission critical ...
Location:
United Kingdom, Brighton
Salary:
Not provided
Brandwatch
Expiration Date
Until further notice
Requirements
  • An interest in how computer infrastructure actually works, and a passion for learning
  • Interest, and ideally production experience, in running storage systems, e.g. as part of a self-hosted service, a home lab, or academic studies
  • Experience with Linux systems administration, including experience of troubleshooting
  • Fluency with one or more scripting languages, ideally Bash or Python
  • Experience helping your peers
  • Pride in the quality of your work
Job Responsibility
  • Supporting mission critical big data platforms, to ensure they are fully performant, reliable, available and secure
  • Development of tooling and operational support for our platforms
  • Help with staging support
  • Join the team supporting the production systems
  • Take a full part in the life of the team
  • Start designing the infrastructure we run

Lead Database Engineer

We are looking for a skilled and proactive DataOps Engineer with strong Database...
Location:
India, Chennai; Coimbatore; Madurai
Salary:
Not provided
OptiSol Business Solutions
Expiration Date
Until further notice
Requirements
  • Strong knowledge of SQL Server architecture and internals
  • Hands-on experience in performance tuning, query optimization, and indexing strategies
  • Proficiency in T-SQL scripting and database automation
  • Proven ability in troubleshooting live database issues and bottlenecks
  • Hands-on with cloud database services (Azure SQL, AWS RDS)
  • Excellent communication and cross-functional collaboration skills
  • Strong analytical and problem-solving abilities
  • A Bachelor’s degree in Computer Science, Information Technology, or related field
Job Responsibility
  • Administer and manage SQL Server databases, including AlwaysOn Availability Groups (AAGs) and Failover Cluster Instances (FCIs)
  • Optimize database performance through advanced query tuning, indexing, and execution plans
  • Troubleshoot and resolve issues such as blocking, deadlocks, and slow queries
  • Automate DBA tasks using PowerShell, T-SQL, and modern DataOps practices
  • Monitor and manage cloud databases (Azure SQL, AWS RDS, etc.)
  • Collaborate with engineering teams to ensure high availability, scalability, and disaster recovery readiness
  • Implement database backup, restore, and DR strategies aligned with enterprise needs
What we offer
  • Opportunity to work on cutting-edge database and DataOps practices
  • Collaborate with experts across data engineering, AI, and cloud technologies
  • Access to learning platforms, certifications, and mentoring
  • Leadership visibility and opportunities to grow within a fast-scaling team
  • Attractive performance-linked rewards
  • Fulltime

Graduate Data Engineer

As a Graduate Data Engineer, you will build and maintain scalable data pipelines...
Location:
United Kingdom, Marlow
Salary:
Not provided
SRG
Expiration Date
Until further notice
Requirements
  • Degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent work experience
  • Up to 2 years of experience building data pipelines at work or through internships
  • Can write clear and reliable Python/PySpark code
  • Familiar with popular analytics tools (like pandas, numpy, matplotlib), big data frameworks (like Spark), and cloud services (like Palantir, AWS, Azure, or Google Cloud)
  • Deep understanding of data models, relational and non-relational databases, and how they are used to organize, store, and retrieve data efficiently for analytics and machine learning
  • Knowledge about software engineering methods, including DevOps, DataOps, or MLOps is a plus
  • Master's degree in engineering (such as AI/ML, Data Systems, Computer Science, Mathematics, Biotechnology, Physics), or minimum 2 years of relevant technology experience
  • Experience with Generative AI (GenAI) and agentic systems will be considered a strong plus
  • Have a proactive and adaptable mindset: willing to take initiative, learn new skills, and contribute to different aspects of a project as needed to drive solutions from start to finish, even beyond the formal job description
  • Show a strong ability to thrive in situations of ambiguity, taking initiative to create clarity for yourself and the team, and proactively driving progress even when details are uncertain or evolving
Job Responsibility
  • Build and maintain data pipelines, leveraging PySpark and/or Typescript within Foundry, to transform raw data into reliable, usable datasets
  • Assist in preparing and optimizing data pipelines to support machine learning and AI model development, ensuring datasets are clean, well-structured, and readily usable by Data Science teams
  • Support the integration and management of feature engineering processes and model outputs into Foundry's data ecosystem, helping enable scalable deployment and monitoring of AI/ML solutions
  • Engage in gathering and translating stakeholder requirements for key data models and reporting, with a focus on Palantir Foundry workflows and tools
  • Participate in developing and refining dashboards and reports in Foundry to visualize key metrics and insights
  • Collaborate with Product, Engineering, and GTM teams to align data architecture and solutions, learning to support scalable, self-serve analytics across the organization
  • Have some prompt engineering experience with large language models, including writing and evaluating complex multi-step prompts
  • Continuously develop your understanding of the company's data landscape, including Palantir Foundry's ontology-driven approach and best practices for data management

Data Analytics Engineer

SDG Group is expanding its global Data & Analytics practice and is seeking a mot...
Location:
Egypt, Cairo
Salary:
Not provided
SDG
Expiration Date
Until further notice
Requirements
  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field
  • Hands-on experience in DataOps / Data Engineering
  • Strong knowledge in Databricks OR Snowflake (one of them is mandatory)
  • Proficiency in Python and SQL
  • Experience with Azure data ecosystem (ADF, ADLS, Synapse, etc.)
  • Understanding of CI/CD practices and DevOps for data
  • Knowledge of data modeling, orchestration frameworks, and monitoring tools
  • Strong analytical and troubleshooting skills
  • Eagerness to learn and grow in a global consulting environment
Job Responsibility
  • Design, build, and maintain scalable and reliable data pipelines following DataOps best practices
  • Work with modern cloud data stacks using Databricks (Spark, Delta Lake) or Snowflake (Snowpipe, tasks, streams)
  • Develop and optimize ETL/ELT workflows using Python, SQL, and orchestration tools
  • Work with Azure data services (ADF, ADLS, Azure SQL, Azure Functions)
  • Implement CI/CD practices using Azure DevOps or Git-based workflows
  • Ensure data quality, consistency, and governance across all delivered data solutions
  • Monitor and troubleshoot pipelines for performance and operational excellence
  • Collaborate with international teams, architects, and analytics consultants
  • Contribute to technical documentation and solution design assets
What we offer
  • Remote working model aligned with international project needs
  • Opportunity to work on European and global engagements
  • Mentorship and growth paths within SDG Group
  • A dynamic, innovative, and collaborative environment
  • Access to world-class training and learning platforms
  • Fulltime

Azure DataOps Lead

The Azure DataOps Lead will be responsible for leading the operational delivery,...
Location:
India, Gurgaon
Salary:
Not provided
Rackspace
Expiration Date
Until further notice
Requirements
  • 8–12 years of total IT experience with at least 3–5 years in Azure DataOps or Data Engineering leadership
  • Hands-on expertise with key Azure Data Services, including: Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, Azure SQL Database / SQL Managed Instance, Azure Data Lake Storage Gen2 (ADLS)
  • Strong understanding of DataOps concepts
  • Experience in monitoring and alerting using Log Analytics, Application Insights, and Azure Monitor
  • Working knowledge of incident management, RCA documentation, and operational reporting
  • Strong analytical skills for troubleshooting performance issues and identifying optimization opportunities
  • Excellent communication and stakeholder management skills across global teams
Job Responsibility
  • Lead and manage the Azure DataOps function, ensuring smooth daily operations, incident resolution, and performance stability across production data platforms
  • Oversee data pipeline orchestration and automation using Azure Data Factory (ADF), Synapse Analytics, Databricks, and Logic Apps
  • Implement CI/CD pipelines for data workflows using Azure DevOps or equivalent automation tools
  • Drive incident, problem, change, and request management processes aligned with ITIL best practices
  • Coordinate with L1/L2 support teams for escalations, RCA preparation, and client communication
  • Maintain governance for data quality, access control, and compliance using Azure Purview, Key Vault, and RBAC
  • Collaborate with Data Architects and Cloud Engineers to design scalable, resilient, and cost-efficient Azure data solutions
  • Ensure 24/7 operational readiness through proactive alert monitoring, performance tuning, and preventive maintenance
  • Contribute to automation initiatives using PowerShell, Python, or ARM templates to reduce manual efforts and improve system reliability
  • Partner with customer stakeholders to report on SLAs, KPIs, RCA summaries, and provide technical recommendations for improvement
  • Fulltime

Azure DataOps Lead

The Azure DataOps Lead will be responsible for leading the operational delivery,...
Location:
India
Salary:
Not provided
Rackspace
Expiration Date
Until further notice
Requirements
  • 8–12 years of total IT experience with at least 3–5 years in Azure DataOps or Data Engineering leadership
  • Hands-on expertise with key Azure Data Services, including: Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, Azure SQL Database / SQL Managed Instance, Azure Data Lake Storage Gen2 (ADLS)
  • Strong understanding of DataOps concepts
  • Experience in monitoring and alerting using Log Analytics, Application Insights, and Azure Monitor
  • Working knowledge of incident management, RCA documentation, and operational reporting
  • Strong analytical skills for troubleshooting performance issues and identifying optimization opportunities
Job Responsibility
  • Lead and manage the Azure DataOps function, ensuring smooth daily operations, incident resolution, and performance stability across production data platforms
  • Oversee data pipeline orchestration and automation using Azure Data Factory (ADF), Synapse Analytics, Databricks, and Logic Apps
  • Implement CI/CD pipelines for data workflows using Azure DevOps or equivalent automation tools
  • Drive incident, problem, change, and request management processes aligned with ITIL best practices
  • Coordinate with L1/L2 support teams for escalations, RCA preparation, and client communication
  • Maintain governance for data quality, access control, and compliance using Azure Purview, Key Vault, and RBAC
  • Collaborate with Data Architects and Cloud Engineers to design scalable, resilient, and cost-efficient Azure data solutions
  • Ensure 24/7 operational readiness through proactive alert monitoring, performance tuning, and preventive maintenance
  • Contribute to automation initiatives using PowerShell, Python, or ARM templates to reduce manual efforts and improve system reliability
  • Partner with customer stakeholders to report on SLAs, KPIs, RCA summaries, and provide technical recommendations for improvement

Sales Engineer

As a Sales Engineer at Astronomer, you’ll be a key partner to our clients, guidi...
Location:
United States, San Francisco
Salary:
200000.00 - 250000.00 USD / Year
Astronomer
Expiration Date
Until further notice
Requirements
  • Data Engineering Know-How: Familiarity with core data engineering concepts including orchestration, ELT, Git, and Role-Based Access Control (RBAC), with hands-on experience or working knowledge of Apache Airflow in a customer environment
  • Experience and Expertise: 2+ years in a Sales Engineering, Solutions Engineering, Consulting or similar role within the data space, ideally with experience in modern data tools like Snowflake, Databricks, Fivetran, or Tableau
  • Effective Communication: Strong verbal and written communication skills to simplify complex technical concepts for diverse audiences
  • Curiosity and Customer Empathy: A genuine desire to understand customer needs, patience, and empathy to support them through challenges
  • Drive to Innovate: Eagerness to learn and experiment with new technical concepts, tools, and approaches to stay ahead in the data industry
Job Responsibility
  • Solve Real-World Problems: Design and implement proof-of-concept solutions that help customers tackle real data challenges, from concept to production
  • Be a Trusted Advisor: Conduct demos and provide technical guidance to engineering teams, showing them how our platform can transform their workflows
  • Drive Community Impact: Contribute to the Apache Airflow community by creating technical content and best practices, positioning Astronomer as a thought leader in workflow orchestration
  • Influence Product Direction: Act as a liaison by gathering field insights and providing critical feedback to the Product team to shape the future of our platform
  • Develop and Grow: Become an expert in Airflow, workflow orchestration, and the data engineering landscape as you collaborate across departments and work on impactful projects
What we offer
  • Equity component
  • Fulltime