CrawlJobs

DevOps Engineer - Data Platforms


Alter Domus

Location:
India, Hyderabad


Contract Type:
Not provided

Salary:

Not provided

Job Description:

We are seeking an experienced and motivated DevOps Engineer to join our Data Platforms team. This role focuses on building and maintaining scalable infrastructure for data engineering workflows using AWS-native services and Infrastructure as Code (IaC) practices. The ideal candidate will have strong expertise in Terraform, AWS services, and automation for data platforms, along with solid data engineering skills.

Job Responsibility:

  • Design, implement, and manage Infrastructure as Code (IaC) using Terraform for AWS-native data applications (e.g., S3, Glue, EMR, Lambda)
  • Collaborate with data engineers and platform teams to automate deployments and optimize data pipelines
  • Develop and maintain CI/CD pipelines for infrastructure and data workflows
  • Ensure security, scalability, and reliability of data platform infrastructure
  • Troubleshoot and resolve issues related to infrastructure and network communication
  • Work closely with stakeholders to understand requirements and deliver robust solutions
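To make the Terraform/IaC responsibility above concrete: Terraform also accepts JSON-syntax configuration files (`*.tf.json`), so the shape of such a definition can be sketched from Python. This is a minimal illustration only — the bucket name, Glue job name, and variable are invented, and a real setup would typically use HCL modules rather than generated JSON:

```python
import json

def render_tf_json(bucket_name: str, glue_job_name: str, script_path: str) -> str:
    """Render a minimal Terraform JSON-syntax config (*.tf.json) declaring
    an S3 bucket and a Glue job whose script lives in that bucket.
    All resource names here are hypothetical examples."""
    config = {
        "resource": {
            "aws_s3_bucket": {
                "data_lake": {"bucket": bucket_name}
            },
            "aws_glue_job": {
                "nightly_etl": {
                    "name": glue_job_name,
                    "role_arn": "${var.glue_role_arn}",
                    "command": {
                        "name": "glueetl",
                        "script_location": f"s3://{bucket_name}/{script_path}",
                    },
                }
            },
        },
        # IAM role ARN left as an input variable, as a real module would.
        "variable": {"glue_role_arn": {"type": "string"}},
    }
    return json.dumps(config, indent=2)

# Write the rendered config to a file that `terraform plan` would pick up.
rendered = render_tf_json("acme-data-lake", "nightly-etl", "jobs/etl.py")
```

Generating JSON like this is only a sketch of the resource graph; in practice the same structure would be hand-written in HCL and reviewed through a CI/CD pipeline.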

Requirements:

  • 3+ years of DevOps experience in cloud-based environments
  • Strong experience with Terraform for AWS infrastructure automation
  • Hands-on expertise with AWS services commonly used in data engineering (S3, Glue, EMR, Redshift, Lambda)
  • Proficiency in SQL and Python for data engineering tasks and automation
  • Experience with Kubernetes, Shell scripting, and Docker
  • Comfortable working with both Linux and Windows environments
  • Excellent problem-solving and collaboration skills
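As a small illustration of the SQL-and-Python pairing listed in the requirements (using the standard library's `sqlite3` as a stand-in for a real warehouse; the table and column names are invented):

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

# A typical transformation step: aggregate in SQL, post-process in Python.
rows = conn.execute(
    "SELECT user_id, SUM(amount) FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
totals = {user: total for user, total in rows}
print(totals)  # {'u1': 15.0, 'u2': 7.5}
```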

Nice to have:

  • Familiarity with CI/CD tools (e.g., GitHub Actions, Jenkins)
  • Knowledge of containerization (Docker, Kubernetes)
  • Experience with Backstage or developer experience platforms
  • DevOps certifications (e.g., AWS Certified DevOps Engineer, Terraform)

What we offer:
  • Support for professional accreditations
  • Flexible arrangements, generous holidays, plus an additional day off for your birthday
  • Continuous mentoring along your career progression
  • Active sports, events and social committees across our offices
  • 24/7 support available from our Employee Assistance Program
  • The opportunity to invest in our growth and success through our Employee Share Plan
  • Plus additional local benefits depending on your location

Additional Information:

Job Posted:
January 11, 2026

Work Type:
Hybrid work

Similar Jobs for DevOps Engineer - Data Platforms

Cloud Technical Architect / Data DevOps Engineer

The role involves designing, implementing, and optimizing scalable Big Data and ...
Location:
United Kingdom, Bristol
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • An organised and methodical approach
  • Excellent time keeping and task prioritisation skills
  • An ability to provide clear and concise updates
  • An ability to convey technical concepts to all levels of audience
  • Data engineering skills – ETL/ELT
  • Technical implementation skills – application of industry best practices & design patterns
  • Technical advisory skills – experience in researching technological products / services with the intent to provide advice on system improvements
  • Experience of working in hybrid environments with both classical and DevOps approaches
  • Excellent written & spoken English skills
  • Excellent knowledge of Linux operating system administration and implementation
Job Responsibility:
  • Detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling, self-healing systems
  • Participating in the full lifecycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
  • Providing technical thought-leadership and advisory on technologies and processes at the core of the data domain, as well as data domain adjacent technologies
  • Engaging and collaborating with both internal and external teams and be a confident participant as well as a leader
  • Assisting with solution improvement activities driven either by the project or service
  • Support the design and development of new capabilities, preparing solution options, investigating technology, designing and running proof of concepts, providing assessments, advice and solution options, providing high level and low level design documentation
  • Cloud Engineering capability to leverage Public Cloud platform using automated build processes deployed using Infrastructure as Code
  • Provide technical challenge and assurance throughout development and delivery of work
  • Develop re-useable common solutions and patterns to reduce development lead times, improve commonality and lowering Total Cost of Ownership
  • Work independently and/or within a team using a DevOps way of working
What we offer:
  • Extensive social benefits
  • Flexible working hours
  • Competitive salary
  • Shared values
  • Equal opportunities
  • Work-life balance
  • Evolving career opportunities
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Fulltime

Data (DevOps) Engineer

Ivy Partners is a Swiss consulting firm dedicated to helping businesses navigate...
Location:
Switzerland, Genève
Salary:
Not provided
IVY Partners
Expiration Date
Until further notice
Requirements:
  • Substantial experience with Apache Airflow in complex orchestration and production settings
  • Advanced skills in AWS, Databricks, and Python for data pipelines, MLOps tooling, and automation
  • Proven experience deploying high-volume and sensitive data pipelines
  • Confirmed (mid-level) to senior profile
  • Highly autonomous, capable of working in a critical and structuring environment
  • Not just a team player but someone who challenges the status quo, proposes solutions, and elevates the team
  • Communicate clearly and possess a strong sense of business urgency
Job Responsibility:
  • Design and maintain high-performance data pipelines
  • Migrate large volumes of historical and operational data to AWS
  • Optimize data flows used by machine learning models for feature creation, time series, and trade signals
  • Ensure the quality, availability, and traceability of critical datasets
  • Collaborate directly with data scientists to integrate, monitor, and industrialize models: price prediction models, optimization algorithms, and automated trading systems
  • Support model execution and stability in production environments utilizing Airflow and Databricks
  • Build, optimize, and monitor Airflow DAGs
  • Automate Databricks jobs and integrate CI/CD pipelines (GitLab/Jenkins)
  • Monitor the performance of pipelines and models, and address incidents
  • Deploy robust, secure, and scalable AWS data architectures
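The Airflow DAG work described above ultimately rests on topological ordering of tasks. A minimal standard-library illustration of that idea (the task names are invented; real Airflow layers scheduling, retries, and operators on top of this):

```python
from graphlib import TopologicalSorter

# Declare pipeline dependencies the way an Airflow DAG would:
# each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "train_features": {"transform"},
    "load": {"transform"},
    "publish": {"train_features", "load"},
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(dag).static_order())
```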
What we offer:
  • Supportive environment where everyone is valued, with training and career advancement opportunities
  • Building a relationship based on transparency, professionalism, and commitment
  • Encouraging innovation
  • Taking responsibility

Data Platform Engineer

Data Platform Engineers at Adyen build the foundational layer of tooling and pro...
Location:
Netherlands, Amsterdam
Salary:
Not provided
Adyen
Expiration Date
Until further notice
Requirements:
  • Fluency in Python
  • Experience developing and maintaining distributed data and compute systems like Spark, Trino, Druid, etc
  • Experience developing and maintaining DevOps pipelines and development ecosystems
  • Experience developing and maintaining real-time and batch data pipelines (via Kafka, Spark streaming)
  • Experience with Kubernetes ecosystem (k8s, docker), and/or Hadoop ecosystems (Hive, Yarn, HDFS, Kerberos)
  • Team player with strong communication skills
  • Ability to work closely with diverse stakeholders
Job Responsibility:
  • Develop and maintain scalable and high performance big data platforms
  • Work with distributed systems in all shapes and flavors (databases, filesystems, compute, etc.)
  • Identify opportunities to improve continuous release and deployment environments
  • Build or deploy tools to enhance data discoverability through the collection and presentation of metadata
  • Introduce and extend tools to enhance the quality of our data, platform-wide
  • Explore and introduce technologies and practices to reduce the time to insight for analysts and data scientists
  • Develop streaming processing applications and frameworks
  • Build the foundational layer of tooling and processes for on-premise Big Data Platforms
  • Collaborate with data engineers and ML scientists and engineers to build and roll-out tools
  • Develop and operate multiple big data platforms

Fixed Income Data Python Platform Engineer

Fixed Income Data Python Platform Engineer position at Citi, building and mainta...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date
Until further notice
Requirements:
  • 5+ years of hands-on experience building enterprise-scale, highly componentized applications using Python, incl. web frameworks like Django or Flask and data science tools - Pandas, Polars, Streamlit, Airflow
  • Experience with Docker and Kubernetes/Openshift
  • Experience with DevOps technologies - Ansible, Chef
  • Experience working in a Continuous Integration and Continuous Delivery environment and familiar with Jenkins, TeamCity, Code Quality Tools - SonarQube, etc.
  • Proficient in industry standard best practices such as Design Patterns, Coding Standards, Coding modularity, Prototypes etc.
  • Unit testing frameworks - PyTest
  • Understanding of the SDLC lifecycle for Agile & Waterfall methodologies
  • Excellent written and oral communication skills
  • Bachelor's degree/University degree or equivalent experience
Job Responsibility:
  • Analyzes system requirements, including identifying program interactions and appropriate interfaces between impacted components and sub systems
  • Participate in Sprint Planning, Tasking and Estimation of the assigned work for Python platform
  • Participate in component and service design for Python analytical services
  • Work on bug resolution and application improvements, such as performance and maintainability
  • May occasionally work a non-standard shift including nights and/or weekends and/or have on-call responsibilities
  • Stay abreast with new trends in open source tooling and champion tools that could help improve efficiency of the Fixed Income quant and data science community
  • Work closely with quants and data scientists to help them use platform capabilities and develop efficient analytical tools
  • Continuously look to automate manual touchpoints in the technology delivery pipeline
  • Fulltime

Senior Software Engineer – DevOps Platform

We’re looking for a Senior Software Engineer to join our DevOps team, where you ...
Location:
United States, Palo Alto; New York City
Salary:
172000.00 - 228000.00 USD / Year
Wealthfront
Expiration Date
Until further notice
Requirements:
  • Extensive experience with running and troubleshooting modern Linux systems and services in production
  • 6+ years of experience developing reliable production-grade software in Java, Go, or other similar languages
  • Proficiency with at least one automation technology such as Terraform, Chef, or Puppet
  • Successfully designed and deployed mission-critical complex distributed systems
  • Excellent critical thinking and communications skills with a desire to both learn from and educate your peers
  • A BS or MS in Computer Science or an Engineering field, or equivalent professional experience
Job Responsibility:
  • Maintain our core infrastructure by writing software to automate application deployment, configure our infrastructure, and manage critical services such as our databases
  • Ensure that mission critical services operate reliably by triaging and fixing operational issues as an on-call engineer, participating in post-mortems, and implementing improvements to prevent future issues
  • Design, implement, and deploy internal tools and services to accelerate productivity of the wider Engineering team and enable direct ownership of operations
  • Help manage our server hardware in our physical data centers which may occasionally include travel to our Bay Area or New Jersey data centers for onsite projects
  • Be involved in key decisions regarding the evolution of our infrastructure
  • Mentor junior members of the team
What we offer:
  • medical
  • vision
  • dental
  • 401K plan
  • generous time off
  • parental leave
  • wellness reimbursements
  • professional development
  • employee investing discount
  • Fulltime

Data Engineer

The Data Engineer is accountable for developing high quality data products to su...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • First Class Degree in Engineering/Technology/MCA
  • 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies
  • Experience of relational databases and using SQL for data querying, transformation and manipulation
  • Experience of modelling data for analytical consumers
  • Ability to automate and streamline the build, test and deployment of data pipelines
  • Experience in cloud native technologies and patterns
  • A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
  • Excellent communication and problem-solving skills
  • ETL: Hands on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
  • Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Job Responsibility:
  • Developing and supporting scalable, extensible, and highly available data solutions
  • Deliver on critical business priorities while ensuring alignment with the wider architectural vision
  • Identify and help address potential risks in the data supply chain
  • Follow and contribute to technical standards
  • Design and develop analytical data models
  • Fulltime

Data Engineer

The Data Engineer is accountable for developing high quality data products to su...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • First Class Degree in Engineering/Technology/MCA
  • 3 to 4 years’ experience implementing data-intensive solutions using agile methodologies
  • Experience of relational databases and using SQL for data querying, transformation and manipulation
  • Experience of modelling data for analytical consumers
  • Ability to automate and streamline the build, test and deployment of data pipelines
  • Experience in cloud native technologies and patterns
  • A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
  • Excellent communication and problem-solving skills
  • ETL: Hands on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica
  • Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing
Job Responsibility:
  • Developing and supporting scalable, extensible, and highly available data solutions
  • Deliver on critical business priorities while ensuring alignment with the wider architectural vision
  • Identify and help address potential risks in the data supply chain
  • Follow and contribute to technical standards
  • Design and develop analytical data models
  • Fulltime

DevOps Engineer

BioCatch is the leader in Behavioral Biometrics, a technology that leverages mac...
Location:
Israel, TLV
Salary:
Not provided
BioCatch
Expiration Date
Until further notice
Requirements:
  • 5+ Years of Experience: Demonstrated experience as a DevOps professional, with a strong focus on big data environments, or Data Engineer with strong DevOps skills
  • Data Components Management: Experience managing and designing data infrastructure, such as Snowflake, PostgreSQL, Kafka, Aerospike, and Object Store
  • DevOps Expertise: Proven experience creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, Networking, Load Balancing, Nginx, etc.
  • Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP), emphasizing big data processing (like PySpark). Experience with scripting languages like Bash and Shell for automation tasks
  • Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS
Job Responsibility:
  • Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation
  • Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance
  • Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes
  • Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems
  • DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes
  • Fulltime