Kafka DevOps Engineer

Citi

Location:
Mississauga, Canada


Contract Type:
Not provided

Salary:

94300.00 - 141500.00 USD / Year

Job Description:

Join the Kafka as-a-Service (KaaS) App & Infra Operations team, operating within a DevSecOps model and collaborating closely with KaaS Engineering. This role is focused on providing technical support and production management functions to ensure the robust operation, stability, and scaling of our Kafka product offering across the bank.

Job Responsibility:

  • Contribute to and participate in the global Kafka as-a-Service (KaaS) App & Infra Operations function, including a follow-the-sun support model
  • Collaborate closely with platform engineering teams to align on shared goals and serve as a primary liaison between platform users/tenants and the engineering team for queries
  • Execute daily operational tasks including start-of-day checks, continuous monitoring, and regional handovers
  • Provide L1/L2 technical support, incident/problem resolution, and manage releases, ensuring stability, quality, and functionality against service level expectations
  • Implement and carry out internal business continuity (CoB) and resiliency procedures, including performing post-release and infrastructure update health checks
  • Manage critical operational processes such as Change & Release Management and Incident & Problem Management
  • Develop and maintain comprehensive technical support documentation
  • Actively contribute to, and where applicable, lead initiatives focused on improving the stability, efficiency, and effectiveness of the KaaS offering
  • Participate in and, where applicable, provide technical oversight of issue escalation and the resolution of major system outages, ensuring clear communication to all interested parties
  • Perform other duties and functions as assigned

Requirements:

  • 5-8 years of relevant experience
  • Proven experience in an Application & Infrastructure Operations role
  • Solid experience working with the Kafka ecosystem (Kafka Brokers, Connect, Zookeeper) in a production environment
  • Proficiency in automation tools, specifically Ansible, for managing, installing, and upgrading Kafka services across multiple clusters
  • Demonstrated experience in administering Unix systems and the applications deployed on them
  • Proficiency in using monitoring systems such as Grafana and Splunk for application health checks
  • Track record of proposing and driving operational efficiency and process improvements within support functions
  • Skilled in communicating effectively with relevant stakeholders at all levels
  • Experience with issue tracking/reporting tools and Problem Management tools
  • Bachelor’s degree/University degree or equivalent experience

Nice to have:

  • Exposure to the Confluent stack (Confluent ksqlDB, Rest Proxy, Schema Registry, Control Center) is a plus
  • Experience with other distributed messaging/computing platforms and a willingness to learn Kafka will also be considered

Additional Information:

Job Posted:
December 28, 2025

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Kafka DevOps Engineer

DevOps Engineer – Kafka Service

We are looking for a highly skilled DevOps Engineer to take ownership of the Kaf...
Location: Leudelange, Luxembourg
Salary: Not provided
Company: Sopra Steria
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience in DevOps, Site Reliability Engineering (SRE), or Kafka administration
  • Strong hands-on experience with Apache Kafka (setup, tuning, and troubleshooting)
  • Proficiency in scripting (Python, Bash) and automation tools (Terraform, Ansible)
  • Experience with cloud environments (AWS, Azure, or GCP) and Kubernetes-based Kafka deployments
  • Familiarity with Kafka Connect, KSQL, Schema Registry, Zookeeper
  • Knowledge of logging and monitoring tools (Dynatrace, ELK, Splunk)
  • Understanding of networking, security, and access control for Kafka clusters
  • Experience with CI/CD tools (Jenkins, GitLab, ArgoCD)
  • Ability to analyze logs, debug issues, and propose proactive improvements
  • Excellent problem-solving and communication skills
Job Responsibility:
  • Kafka Administration & Operations: Deploy, configure, monitor, and maintain Kafka clusters in a high-availability production environment
  • Performance Optimization: Tune Kafka configurations, partitions, replication, and producers/consumers to ensure efficient message streaming
  • Infrastructure as Code (IaC): Automate Kafka infrastructure deployment and management using Terraform, Ansible, or similar tools
  • Monitoring & Incident Management: Implement robust monitoring solutions (e.g., Dynatrace) and troubleshoot performance bottlenecks, latency issues, and failures
  • Security & Compliance: Ensure secure data transmission, access control, and compliance with security best practices (SSL/TLS, RBAC, Kerberos)
  • CI/CD & Automation: Integrate Kafka with CI/CD pipelines and automate deployment processes to improve efficiency and reliability
  • Capacity Planning & Scalability: Analyze workloads and plan for horizontal scaling, resource optimization, and failover strategies
What we offer:
  • Work among high-level professionals at the forefront of corporate software solutions and innovation at Europe’s Leading Digital Service Provider
  • Full-time

Senior Kafka Platform Engineer

This role is responsible for the management and operational excellence of the Ka...
Location: Poland
Salary: Not provided
Company: HSBC
Expiration Date: January 30, 2026
Requirements:
  • Must be able to communicate on technical levels with Engineers and stakeholders
  • Strong problem solving and analytical skills
  • Thorough understanding of the Kafka Architecture
  • Familiar with cluster maintenance processes and implementing changes and recommended fixes to Kafka clusters and topics to protect production
  • Experience operating in an infrastructure as code and automation first principles environment
  • Key technologies: Messaging – Apache Kafka, Confluent Kafka
  • DevOps toolsets – GitHub, JIRA, Confluence, Jenkins
  • Automation – Ansible, Puppet, or similar
  • Monitoring – Observability tools such as DataDog, New Relic, Prometheus, Grafana
Job Responsibility:
  • Provide expertise in Kafka brokers, zookeepers, Kafka connect, schema registry, KSQL, Rest proxy and Kafka Control Center
  • Provide expertise and hands-on experience working on Kafka Connect using Schema Registry in a very high-volume environment
  • Provide administration and operations of the Kafka platform: provisioning, access lists, Kerberos, and SSL configurations
  • Provide expertise and hands on experience working on Kafka connectors such as MQ connectors, Elastic Search connectors, JDBC connectors, File stream connector, JMS source connectors, Tasks, Workers, converters, Transforms
  • Provide expertise and hands on experience on custom connectors using the Kafka core concepts and API
  • Create topics, setup redundancy clusters, deploy monitoring tools, and configure appropriate alerts and create stubs for producers, consumers, and consumer groups for helping onboard applications from different languages/platforms
  • As a Kafka SRE you will conduct root cause analysis of production incidents, document for reference and put into place proactive measures to enhance system reliability
  • Automate routine tasks using scripts or automation tools and perform data related benchmarking, performance analysis and tuning
What we offer:
  • Competitive salary
  • Annual performance-based bonus
  • Additional bonuses for recognition awards
  • Multisport card
  • Private medical care
  • Life insurance
  • One-time reimbursement of home office set-up (up to 800 PLN)
  • Corporate parties & events
  • CSR initiatives
  • Nursery discounts
  • Full-time

DevOps Engineer

BioCatch is the leader in Behavioral Biometrics, a technology that leverages mac...
Location: TLV, Israel
Salary: Not provided
Company: BioCatch
Expiration Date: Until further notice
Requirements:
  • 5+ Years of Experience: Demonstrated experience as a DevOps professional, with a strong focus on big data environments, or Data Engineer with strong DevOps skills
  • Data Components Management: Experience managing and designing data infrastructure, such as Snowflake, PostgreSQL, Kafka, Aerospike, and Object Store
  • DevOps Expertise: Proven experience creating, establishing, and managing big data tools, including automation tasks. Extensive knowledge of DevOps concepts and tools, including Docker, Kubernetes, Terraform, ArgoCD, Linux OS, Networking, Load Balancing, Nginx, etc.
  • Programming Skills: Proficiency in programming languages such as Python and Object-Oriented Programming (OOP), emphasizing big data processing (like PySpark). Experience with scripting languages like Bash and Shell for automation tasks
  • Cloud Platforms: Hands-on experience with major cloud providers such as Azure, Google Cloud, or AWS
Job Responsibility:
  • Data Architecture Direction: Provide strategic direction for our data architecture, selecting the appropriate components for various tasks. Collaborate on requirements and make final decisions on system design and implementation
  • Project Management: Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance
  • Cost Optimization: Monitor and optimize cloud costs associated with data infrastructure and processes
  • Efficiency and Reliability: Design and build monitoring tools to ensure the efficiency, reliability, and performance of data processes and systems
  • DevOps Integration: Implement and manage DevOps practices to streamline development and operations, focusing on infrastructure automation, continuous integration/continuous deployment (CI/CD) pipelines, containerization, orchestration, and infrastructure as code. Ensure scalable, reliable, and efficient deployment processes
  • Full-time

DevOps Engineer

We are looking for a skilled and proactive Middle DevOps Engineer to join our te...
Location: Riga, Latvia
Salary: Not provided
Company: OMNIC
Expiration Date: Until further notice
Requirements:
  • 2–3+ years in a DevOps Engineer role
  • Strong hands-on experience with Docker
  • At least 1–2 years of production-level Kubernetes experience (including Helm)
  • AWS Required (EC2, Lambda, ECS, EKS, VPC, Subnets, Security Groups, NACLs, Route 53, ALB/NLB, S3, EBS, EFS, RDS (PostgreSQL), ElastiCache (Redis), MSK (Kafka), CloudWatch, CloudTrail, IAM, Organizations, SQS, SNS)
  • GCP a Plus
  • Proven experience with GitLab CI or similar tools
  • Experience with AWS CodePipeline/CodeBuild is a plus
  • Familiarity with Prometheus, Grafana, ELK Stack, Loki, Promtail
  • Strong proficiency in Terraform (CloudFormation is a plus)
  • Solid understanding of networking protocols (TCP/IP, HTTP/S, DNS) and security
Job Responsibility:
  • Monitor system performance and ensure high availability
  • Deploy and support new services in production
  • Automate development and infrastructure processes (CI/CD, deployment, rollback)
  • Troubleshoot and resolve infrastructure and application incidents
  • Maintain and improve Kubernetes clusters and AWS resources
  • Handle infrastructure tasks like setting up VPNs with partner environments and provisioning new project environments
  • Collaborate with Dev, QA, and PM teams
  • Maintain and update infrastructure documentation
What we offer:
  • Competitive salary and monthly bonuses based on performance
  • Collaborative team environment focused on innovation
  • Professional growth opportunities through cross-functional training and mentorship programs
  • Participation in meaningful projects that promote sustainable business transformation
  • Flexible working conditions with a hybrid/onsite model
  • Full-time

Senior DevOps Engineer

We are seeking a Senior DevOps Engineer on behalf of a global company specializi...
Location: Israel
Salary: Not provided
Company: Gitmax
Expiration Date: Until further notice
Requirements:
  • 2-3+ years of experience as a DevOps Engineer or SRE in a high-load/big data production environment
  • Proven expertise in deploying Kubernetes clusters from scratch
  • Strong experience with load balancers (e.g., HAProxy, Nginx, Ingress)
  • Proficiency in Python is essential
  • Java knowledge is a significant advantage
  • Hands-on experience with Terraform and Helm
  • Strong background in Linux administration
  • Experience working with CI/CD tools such as Jenkins or similar
  • English – B2+; proficiency in Russian is an advantage
Job Responsibility:
  • Collaborate as part of a small team of DevOps engineers to enhance system performance and development processes
  • Tackle bottlenecks to boost bidder processing capacity, enhance developer experience, and optimize costs
  • Drive a large-scale migration project from AWS to GCP
  • Manage and refine CI/CD pipelines to ensure smooth and efficient deployment processes
  • Oversee system monitoring, scaling, and the maintenance of Kubernetes clusters (on-premise/bare-metal)
  • Utilize technologies like Kubernetes, Kafka, Docker, Jenkins, HAProxy, Aerospike, and Clickhouse
  • Ensure the company’s tech environment remains secure and protected from potential threats
What we offer:
  • Opportunity to work for a stable, innovative global company with cutting-edge technology
  • Fully remote position with flexibility
  • Work on meaningful projects, including large-scale system migration and high-performance optimization
  • Collaborate with a team of experts in a fast-moving, innovative field
  • Full-time

Sr Staff/Principal DevOps Engineer

Balbix is looking for a DevOps Sr Staff/Principal Engineer to join our growing t...
Location: Delhi, India
Salary: Not provided
Company: Balbix
Expiration Date: Until further notice
Requirements:
  • Bachelor’s degree in Computer Science or a related field
  • 10+ years of experience in DevOps for Sr Staff or 12-15 years for Principal
  • 4+ years of experience setting up and managing infrastructure in AWS for a product development organization
  • Ability to independently architect, design, document, and implement complex platforms and complex DevOps systems
  • Solid understanding of AWS infrastructure and services such as load balancers (ALB/ELB), IAM, KMS, Networking, EC2, CloudWatch, CloudTrail, CloudFormation, Lambda, etc.
  • 4+ years of experience building infrastructure using Terraform
  • 3+ years of solid experience with Kubernetes and Helm
  • Expert-level programming experience with Python for scripting and automation
  • Excellent knowledge of working on configuration management systems such as Ansible
  • Hands-on experience with CI/CD code management and deployment technologies like GitLab, Jenkins, or similar
Job Responsibility:
  • Lead the development of critical DevOps projects, set technical direction, and influence the organization's technical strategy
  • Solve complex problems, mentor senior engineers, and collaborate with cross-functional teams to deliver high-impact DevOps solutions
  • Design and develop IaC components for Balbix solutions and internal engineering tools running in AWS
  • Build and deploy a state-of-the-art security SaaS platform using the latest CI/CD techniques, ensuring it is fully automated, repeatable, and secure
  • Secure infrastructure using best practices (e.g., TLS, bastion hosts, certificate management, authentication and authorization, network segmentation)
  • Design and develop a scalable, cost-efficient deployment infrastructure on Kubernetes
  • Design and implement consistent observability systems for Balbix solutions
  • Participate in on-call rotation
  • Full-time

Senior DevOps Engineer - ElasticSearch Admin

You will be part of a high-performing team, leading and executing to enable grow...
Location: Berlin, Germany
Salary: Not provided
Company: AUTO1 Group
Expiration Date: Until further notice
Requirements:
  • Hands-on experience administering Elasticsearch clusters (5+ data nodes)
  • Knowledge of planning and executing data retention and life cycle management, Index and Datastream mappings, as well as ML and transform jobs
  • Hands-on experience with sizing, monitoring, and management of Kafka, Logstash, Beats, Kibana, and Elastic Agent
  • Experience with queuing systems and data streams in production (SQS, ActiveMQ, Kinesis, Kafka or similar)
  • Familiarity with programming languages such as: PHP and/or Python and/or Java
  • 4+ years of experience in administering/developing/DevOps in a Linux/Unix environment
  • AWS Expert
  • Experience in creating CI/CD pipelines preferably using Jenkins
  • Experience with docker orchestration engines (ECS, Kubernetes, swarm, UCP, etc)
  • Significant experience with Docker, Terraform, or CloudFormation
Job Responsibility:
  • Maintenance, support, and ongoing performance enhancements on multiple Elastic instances
  • Performing system upgrades, troubleshooting, and resolving infrastructure and system issues, as well as log ingestion and communication issues
  • Design and develop scalable, robust, and high-performance data pipelines and data storage solutions
  • Develop and maintain observability frameworks using tools like Kibana, Grafana, or similar
  • Work with cross-functional teams to define observability and search requirements
  • Scale, script and maintain our development and production platform foundation with AWS and GCP
  • Stay updated on the newest tools and (cloud) services
  • Perform database backups, migrations, and upgrades as needed
  • Discuss and evangelize for new technologies and best practices amongst and outside of your team
What we offer:
  • Relocation support to Germany which includes visa assistance, apartment search and help with costs
  • Educational budget for your personal growth
  • Above-average corporate pension plan
  • Work from home up to 5 days a week
  • Truly international and diverse working environment with more than 90 different nationalities
  • Full-time

Senior DevOps Cloud Engineer

A Senior DevOps Cloud Engineer in the HPE Networking Business designs, develops ...
Location: Roseville, United States
Salary: 133500.00 - 307000.00 USD / Year
Company: Hewlett Packard Enterprise
Expiration Date: Until further notice
Requirements:
  • Bachelor’s degree in computer science, engineering, information systems, or a closely related quantitative discipline
  • Typically 10+ years’ experience
  • Proven track record of designing, implementing, and supporting multi-tier architectures in an enterprise-scale organization
  • Experience leading teams of engineers, providing technical direction and oversight
  • Strong programming skills in Python
  • Experience with Tcl, C/C++, and JavaScript a plus
  • Good understanding of distributed systems, event-driven programming paradigms, and designing for scale and performance
  • Experience with cloud-native applications, developer tools, managed services, and next-generation databases
  • Knowledge of DevOps practices such as CI/CD, infrastructure as code, containerization, and orchestration using Kubernetes, Redis, Kafka
  • Good written and verbal communication skills; agile in a changing environment
Job Responsibility:
  • Analyzes new or enhancement feature requests and determines the required coding, testing, and integration activities
  • Designs and develops moderate to complex software modules per feature specifications, adhering to quality and security policies
  • Identifies, debugs, and creates solutions for issues with code and integration into the application architecture
  • Develops and executes comprehensive test plans for features, adhering to performance, scale, usability, and security requirements
  • Deploys cloud-based systems and application code using continuous integration/deployment (CI/CD) pipelines to automate the management, scaling, and deployment of cloud applications
  • Contributes towards innovation and the integration of new technologies into projects
  • Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns
What we offer:
  • Comprehensive suite of benefits that supports physical, financial, and emotional wellbeing
  • Programs catered to helping you reach career goals
  • Unconditional inclusion in the workplace
  • Full-time