Data Ops Engineer

Realign

Location:
United States, Jersey City

Contract Type:
Not provided

Salary:
130,000.00 USD / Year

Job Description:

Job Responsibility:

  • Monitor and troubleshoot data ingestion/transformation workflows across hybrid environments
  • Manage job schedules, track failures, and implement fixes to ensure smooth pipeline execution
  • Automate repetitive operational tasks and contribute to CI/CD improvements
  • Collaborate with platform engineers, release managers, and global teams for stable releases
  • Perform RCA, maintain SLAs, and continuously optimize performance
  • Maintain operational documentation, runbooks, and health-check procedures

Requirements:

  • Strong hands-on experience with HDFS, Spark, Hive/Impala, and Python-based data processing
  • Expertise in debugging ingestion failures, pipeline issues, and performance bottlenecks
  • Knowledge of workflow orchestrators such as Airflow, Oozie, ADF
  • Experience with Git, artifact management, and CI/CD tools (Azure DevOps/Jenkins)
  • Incident management: triage, RCA, and remediation, including the ability to handle an on-call rotation
  • Monitoring using Cloudera Manager, logs, and metrics, with a focus on SLA/MTTR (see the sketch after this list)
  • Strong Shell scripting skills and automation of operational tasks
  • Basic understanding of Azure/Databricks jobs, clusters, storage
  • Strong analytical thinking and structured problem-solving
  • Ability to document runbooks and operational playbooks clearly
  • Effective communication with cross-functional teams including release management and offshore teams
  • Good understanding of production governance and stability requirements
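
As a concrete sketch of the monitoring and automation skills above (the sketch referenced in this list), here is a minimal Python health check that scans a pipeline log for failure markers and exits non-zero so a scheduler can alert on it. The log path and failure markers are hypothetical assumptions, not taken from this posting.

    import sys
    from pathlib import Path

    # Hypothetical log location; a real check would read this from config.
    LOG_PATH = Path("/var/log/pipelines/ingest.log")
    FAILURE_MARKERS = ("ERROR", "FAILED")

    def failing_lines(path: Path) -> list[str]:
        """Return log lines that look like job failures."""
        if not path.exists():
            return [f"log file missing: {path}"]
        with path.open() as fh:
            return [line.rstrip() for line in fh
                    if any(marker in line for marker in FAILURE_MARKERS)]

    def main() -> int:
        failures = failing_lines(LOG_PATH)
        for line in failures:
            # A real runbook step would page on-call or open a ticket here.
            print(f"ALERT: {line}", file=sys.stderr)
        # A non-zero exit lets cron or an orchestrator treat this as a failed check.
        return 1 if failures else 0

    if __name__ == "__main__":
        raise SystemExit(main())

Scheduled from cron or an orchestrator, a check like this turns routine log monitoring into a repeatable, scriptable task.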

Additional Information:

Job Posted:
March 21, 2026

Employment Type:
Full-time

Similar Jobs for Data Ops Engineer

Data Infrastructure Engineer

A venture-backed startup at the intersection of AI and national security is buil...
Location: United States, New York City Metropolitan Area
Salary: Not provided
Orbis Consultants
Expiration Date: Until further notice
Requirements
  • Strong engineering experience in Python, Go, or C
  • Experience building and scaling production data systems
  • Hands-on expertise with model deployment and ML Ops practices
  • Knowledge of database design, performance tuning, and operations
  • Someone who thrives in early-stage, fast-paced environments and enjoys tackling complex challenges
Job Responsibility
Job Responsibility
  • Build and maintain the data pipelines and infrastructure that power ML applications
  • Deploy and manage models at scale, from training through production
  • Design APIs and services that integrate smoothly into mission-critical workflows (see the sketch after this list)
  • Ensure data is handled and secured properly across large, distributed environments
  • Collaborate closely with a small, fast-moving team to solve hard technical problems in real-world settings
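
As a rough illustration of the API-design responsibility above (the sketch referenced in that list), below is a minimal prediction service in Python using FastAPI. The endpoint name, payload shape, and scoring stub are invented for the example; the posting does not specify a stack.

    # Minimal prediction-service sketch. Requires: pip install fastapi uvicorn
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ScoreRequest(BaseModel):
        features: list[float]  # hypothetical payload shape

    class ScoreResponse(BaseModel):
        score: float

    @app.post("/score", response_model=ScoreResponse)
    def score(req: ScoreRequest) -> ScoreResponse:
        # Stand-in for a real model call; swap in the deployed model's predict().
        value = sum(req.features) / max(len(req.features), 1)
        return ScoreResponse(score=value)

    # Run with: uvicorn service:app --port 8000

Typed request/response models reject invalid payloads at the boundary, which is one way an API stays dependable inside mission-critical workflows.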
What we offer
  • Significant equity
  • Strong health & wellness benefits
Employment Type: Full-time

Lead Data Engineer

Lead Data Engineer to serve as both a technical leader and people coach for our ...
Location: India, Gurugram
Salary: Not provided
Circle K
Expiration Date: Until further notice
Requirements
  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • 8-10 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
  • Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools
  • Strong experience in ETL/ELT development, QA, and operations/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
Job Responsibility
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks (see the sketch after this list)
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type: Full-time
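
As a hedged sketch of the parameterization and pipeline-recovery patterns mentioned in this card (the sketch referenced in that list), here is a small Python example of a retry wrapper around a parameterized pipeline step. The config fields, table names, and step body are invented for illustration; the real work would run in ADF or Databricks.

    import time
    from dataclasses import dataclass

    @dataclass
    class StepConfig:
        # Hypothetical parameters a platform team might standardize across pipelines.
        source_table: str
        target_table: str
        max_retries: int = 3
        backoff_seconds: float = 5.0

    def copy_table(source: str, target: str) -> None:
        # Placeholder for the real transformation; prints instead of moving data.
        print(f"copying {source} -> {target}")

    def run_with_recovery(cfg: StepConfig) -> None:
        """Run a step, retrying with linear backoff so transient failures self-heal."""
        for attempt in range(1, cfg.max_retries + 1):
            try:
                copy_table(cfg.source_table, cfg.target_table)
                return
            except Exception:
                if attempt == cfg.max_retries:
                    raise  # surface to the orchestrator as an escalatable failure
                time.sleep(cfg.backoff_seconds * attempt)

    run_with_recovery(StepConfig(source_table="raw.tlog", target_table="curated.tlog"))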

Lead Data Engineer

Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader i...
Location: India, Gurugram
Salary: Not provided
Circle K
Expiration Date: Until further notice
Requirements
  • Bachelor’s or master’s degree in Computer Science, Engineering, or a related field
  • 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Certifications in Azure, Databricks, or Snowflake are a plus
  • Strong knowledge of data engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management)
  • Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control, Master Data Management (MDM), and data quality tools
Job Responsibility
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain-specific initiatives (e.g., reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentor and coach the team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type: Full-time

Data Ops Capability Deployment - Analyst

Data Ops Capability Deployment - Analyst is a seasoned professional role. Applie...
Location: India, Pune
Salary: Not provided
Citi
Expiration Date: Until further notice
Requirements
  • 10+ years of active development experience in Financial Services or Finance IT
  • Experience with Data Quality/Data Tracing/Data Lineage/Metadata Management Tools
  • Hands-on experience with ETL using PySpark on distributed platforms, along with data ingestion, Spark optimization, resource utilization, capacity planning, and batch orchestration
  • In-depth understanding of Hive, HDFS, Airflow, and job scheduling
  • Strong programming skills in Python with experience in data manipulation and analysis libraries (Pandas, NumPy)
  • Ability to write complex SQL and stored procedures
  • Experience with DevOps, Jenkins/Lightspeed, Git, and CoPilot
  • Strong knowledge of one or more BI visualization tools such as Tableau or Power BI
  • Proven experience implementing data lake/data warehouse solutions for enterprise use cases
  • Exposure to analytical tools and AI/ML is desired
Job Responsibility
  • Hands-on data engineering background with a thorough understanding of distributed data platforms and cloud services
  • Sound understanding of data architecture and data integration with enterprise applications
  • Research and evaluate new data technologies, data mesh architecture and self-service data platforms
  • Work closely with Enterprise Architecture Team on the definition and refinement of overall data strategy
  • Address performance bottlenecks, design batch orchestrations, and deliver reporting capabilities
  • Perform complex data analytics (data cleansing, transformation, joins, aggregation, etc.) on large, complex datasets (see the PySpark sketch after this list)
  • Build analytics dashboards & data science capabilities for Enterprise Data platforms
  • Communicate complicated findings and propose solutions to a variety of stakeholders
  • Understand business and functional requirements provided by business analysts and convert them into technical design documents
  • Work closely with cross-functional teams e.g. Business Analysis, Product Assurance, Platforms and Infrastructure, Business Office, Control and Production Support
Employment Type: Full-time
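
To ground the analytics bullet above (the PySpark sketch referenced in that list), here is a minimal cleanse/join/aggregate example in PySpark. The column names and rows are invented for illustration, not taken from the posting.

    # Requires: pip install pyspark
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("cleanse-join-aggregate").getOrCreate()

    # Invented sample data standing in for real ingested datasets.
    trades = spark.createDataFrame(
        [("T1", "ACC1", 100.0), ("T2", "ACC2", None), ("T3", "ACC1", 250.0)],
        ["trade_id", "account_id", "amount"],
    )
    accounts = spark.createDataFrame(
        [("ACC1", "NY"), ("ACC2", "LDN")],
        ["account_id", "region"],
    )

    # Cleanse: drop rows missing the measure we aggregate on.
    clean = trades.dropna(subset=["amount"])

    # Join and aggregate: total traded amount per region.
    summary = (
        clean.join(accounts, on="account_id", how="inner")
             .groupBy("region")
             .agg(F.sum("amount").alias("total_amount"))
    )
    summary.show()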

Data Engineer

We’re looking for a hands-on Data Engineer with 2–5 years of experience to build...
Location: Sri Lanka
Salary: Not provided
IQZ Systems
Expiration Date: Until further notice
Requirements
  • 2+ years of experience
  • Solid Python (pandas, PySpark, or similar data frameworks); modular, testable code
  • Strong SQL across analytical databases/warehouses (e.g., Snowflake/BigQuery/Redshift/Azure Synapse)
  • Experience building production-grade pipelines and transformations
  • Exposure to at least one cloud (AWS/Azure/GCP/Databricks) for data storage and compute
  • Hands-on with Spark (PySpark) or equivalent distributed processing
  • Airflow or Prefect (DAGs, schedules, sensors, retries, SLAs; see the sketch after this list)
  • Git workflows and basic CI for data jobs
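
Since this card's requirements call out DAGs, schedules, retries, and SLAs explicitly (the sketch referenced above), here is a minimal Airflow 2.x DAG wiring those pieces together. The DAG id and task callables are invented for the example.

    # Requires: pip install apache-airflow (2.4+ for the `schedule` argument)
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull source data")    # stand-in for a real extract step

    def load():
        print("write to warehouse")  # stand-in for a real load step

    with DAG(
        dag_id="example_etl",                    # invented DAG id
        start_date=datetime(2026, 1, 1),
        schedule="@daily",                       # one run per day
        catchup=False,
        default_args={
            "retries": 2,                        # retry transient failures
            "retry_delay": timedelta(minutes=5),
            "sla": timedelta(hours=1),           # flag runs that breach the SLA
        },
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task
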
Job Responsibility
  • Build Pipelines: Develop, test, and deploy scalable ETL/ELT pipelines for batch and streaming use cases
  • Model Data: Design clean, query-optimized data models (star schema, SCD, slowly changing logic as needed)
  • SQL Excellence: Author performant SQL for transformations, materializations, and reports
  • Orchestrate Workflows: Implement DAGs/workflows with Airflow/Prefect; maintain SLAs and retries
  • Data Quality: Add validation checks, schema enforcement, and alerting (e.g., Great Expectations; see the sketch after this list)
  • Performance & Cost: Tune Spark/warehouse queries, optimize storage formats/partitions, and control costs
  • Collaboration: Work with Analytics, Data Science, and Product to translate requirements into data models
  • Ops & Reliability: Monitor pipelines, debug failures, and improve observability and documentation
  • Security & Compliance: Handle data responsibly (PII), follow RBAC and least privilege, and manage secrets
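
The Data Quality bullet above references a sketch; here is a deliberately library-agnostic version of the idea in plain pandas rather than Great Expectations. The checks and column names are invented for illustration.

    # Requires: pip install pandas
    import pandas as pd

    # Invented sample batch standing in for a pipeline's output.
    batch = pd.DataFrame({
        "order_id": [1, 2, 3],
        "amount": [10.0, -5.0, 42.5],
    })

    def validate(df: pd.DataFrame) -> list[str]:
        """Return human-readable data-quality violations for a batch."""
        problems = []
        if df["order_id"].isna().any():
            problems.append("order_id contains nulls")
        if df["order_id"].duplicated().any():
            problems.append("order_id contains duplicates")
        if (df["amount"] < 0).any():
            problems.append("amount contains negative values")
        return problems

    violations = validate(batch)
    if violations:
        # In a real pipeline this would fail the task and trigger alerting.
        print("data quality check failed:", "; ".join(violations))
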
What we offer
  • A dynamic and collaborative work environment
  • Opportunities for professional growth and development
  • Competitive compensation and benefits
  • The chance to shape impactful products that solve real-world problems
  • Exposure to cutting-edge technologies and tools, with opportunities to innovate and explore new business solutions
Employment Type: Full-time

Mechanical Engineering Co-op

As a Mechanical Engineering Co-op for Boston Engineering, you will have the oppo...
Location: United States, Waltham
Salary: Not provided
Boston Engineering
Expiration Date: Until further notice
Requirements
  • Prior engineering (or similar) internship or co-op experience
  • Working understanding of mechanical engineering concepts: statics, dynamics, stress, materials, machine design, etc.
  • Working understanding of mechanical design concepts: FBDs, stress analysis, component tolerances, engineering drawings, etc.
  • Experience using CAD software, SolidWorks and/or Creo preferred
  • Basic hands-on experience: hand tools, machine assembly, debugging mechanical systems, etc.
  • Ability to work independently and as a part of a team
  • Good communication, technical writing, and documentation skills
  • Time management and organization of multiple tasks
Job Responsibility
  • Assisting with design tasks (CAD development, analysis, component selection)
  • Participating in brainstorming discussions and concept development
  • Hands-on prototype development, rework, and assembly
  • Design and assembly of test equipment
  • Test implementation, data analysis, and technical documentation
  • Presenting to interdisciplinary internal and client teams
What we offer
  • Mentorship program guided by a mentor interested in your success
  • Training courses and seminars on engineering concepts and skills
  • Exposure to a wide range of industries, disciplines, companies, and more

Data Infrastructure Engineer

Data Infrastructure Engineer – New York or DC (hybrid) – Competitive Salary + Eq...
Location: United States, New York or DC
Salary: Not provided
Orbis Consultants
Expiration Date: Until further notice
Requirements
  • Startup Energy: You thrive in fast-paced environments, manage ambiguity well, and focus on what moves the needle
  • Designing and deploying intuitive, user-friendly APIs
  • Demonstrated ability to train and deploy models at scale
  • Successfully launching machine learning services, particularly those leveraging LLMs, embeddings, and inference, into production environments
  • Handling and securing large-scale production data
  • Demonstrated proficiency in Python, Go, or C
  • A proactive approach to tackling complex challenges in a fast-paced, early-stage environment
  • A passion for innovation and a collaborative spirit
Job Responsibility
  • Joining as part of the founding Engineering team, you will be a key part of developing secure data sharing middleware
  • Their software will integrate seamlessly into the workflows of specialized professionals, ensuring secure and efficient data access throughout the asset recruitment process
  • The role requires a mix of software development and ML Ops practices, making for an exciting, fast-paced engineering role
  • You will be able to demonstrate experience building, shipping, and supporting mission-critical services that make up the data platform
  • This role requires the ability to provide solutions across the full data stack, from data management and software development through the model deployment lifecycle
What we offer
  • Competitive Salary + Equity
Employment Type: Full-time

Data Engineer

We’re looking for a hands-on Data Engineer with 2–5 years of experience to build...
Location: India
Salary: Not provided
IQZ Systems
Expiration Date: Until further notice
Requirements
  • 2+ years of experience
  • Solid Python (pandas, PySpark, or similar data frameworks); modular, testable code
  • Strong SQL across analytical databases/warehouses (e.g., Snowflake/BigQuery/Redshift/Azure Synapse)
  • Experience building production-grade pipelines and transformations
  • Exposure to at least one cloud (AWS/Azure/GCP/Databricks) for data storage and compute
  • Hands-on with Spark (PySpark) or equivalent distributed processing
  • Airflow or Prefect (DAGs, schedules, sensors, retries, SLAs)
  • Git workflows and basic CI for data jobs
Job Responsibility
  • Build Pipelines: Develop, test, and deploy scalable ETL/ELT pipelines for batch and streaming use cases
  • Model Data: Design clean, query-optimized data models (star schema, SCD, slowly changing logic as needed)
  • SQL Excellence: Author performant SQL for transformations, materializations, and reports
  • Orchestrate Workflows: Implement DAGs/workflows with Airflow/Prefect; maintain SLAs and retries
  • Data Quality: Add validation checks, schema enforcement, and alerting (e.g., Great Expectations)
  • Performance & Cost: Tune Spark/warehouse queries, optimize storage formats/partitions, and control costs
  • Collaboration: Work with Analytics, Data Science, and Product to translate requirements into data models
  • Ops & Reliability: Monitor pipelines, debug failures, and improve observability and documentation
  • Security & Compliance: Handle data responsibly (PII), follow RBAC and least privilege, and manage secrets
Employment Type: Full-time