
Kafka Engineer


Robert Half

Location:
United States, Santa Clara


Contract Type:
Not provided

Salary:
Not provided

Job Description:

In this role, you will be an integral part of our client's team, administering Confluent Kafka clusters, supporting various Kafka clients and microservices, and designing data flow solutions.

Job Responsibility:

  • Administering and maintaining Confluent Kafka clusters, including multi-DC brokers, connectors, Control Center (C3), ksqlDB, REST Proxy, and Schema Registry
  • Configuring and managing Kafka topics, RBAC, connectors, KSQL, and Schema Registry while adhering to security, availability, scalability, and DR standards
  • Supporting Java, Node.js, and Python based Kafka clients and microservices
  • Performing basic administration tasks for Apache NiFi (OSS)
  • Understanding user data flow requirements and designing and developing Kafka-based solutions using Confluent Kafka, connectors, KSQL, and NiFi
  • Providing low-code data flow alternatives
  • Applying experience with data, cloud (AWS), and queue connectors in both design and configuration tasks

Requirements:

  • Minimum of 10 years of experience as a Software Engineer
  • Strong problem-solving abilities and analytical skills
  • Solid understanding of software engineering principles and methodologies
  • Exceptional communication skills, both written and verbal
  • Strong ability to learn new technologies quickly and apply them in problem-solving
  • Bachelor's degree in Computer Science, Information Technology, or a related field
  • Prior experience in managing a team or leading a project will be considered an advantage
  • Proactive approach, with the ability to handle multiple projects simultaneously and meet deadlines
  • Familiarity with other programming languages or technologies is a plus


What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in company 401(k) plan

Additional Information:

Job Posted:
March 25, 2025

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Kafka Engineer

DevOps Engineer – Kafka Service

We are looking for a highly skilled DevOps Engineer to take ownership of the Kaf...
Location:
Luxembourg, Leudelange
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience in DevOps, Site Reliability Engineering (SRE), or Kafka administration
  • Strong hands-on experience with Apache Kafka (setup, tuning, and troubleshooting)
  • Proficiency in scripting (Python, Bash) and automation tools (Terraform, Ansible)
  • Experience with cloud environments (AWS, Azure, or GCP) and Kubernetes-based Kafka deployments
  • Familiarity with Kafka Connect, KSQL, Schema Registry, Zookeeper
  • Knowledge of logging and monitoring tools (Dynatrace, ELK, Splunk)
  • Understanding of networking, security, and access control for Kafka clusters
  • Experience with CI/CD tools (Jenkins, GitLab, ArgoCD)
  • Ability to analyze logs, debug issues, and propose proactive improvements
  • Excellent problem-solving and communication skills
Job Responsibility:
  • Kafka Administration & Operations: Deploy, configure, monitor, and maintain Kafka clusters in a high-availability production environment
  • Performance Optimization: Tune Kafka configurations, partitions, replication, and producers/consumers to ensure efficient message streaming
  • Infrastructure as Code (IaC): Automate Kafka infrastructure deployment and management using Terraform, Ansible, or similar tools
  • Monitoring & Incident Management: Implement robust monitoring solutions (e.g., Dynatrace) and troubleshoot performance bottlenecks, latency issues, and failures
  • Security & Compliance: Ensure secure data transmission, access control, and compliance with security best practices (SSL/TLS, RBAC, Kerberos)
  • CI/CD & Automation: Integrate Kafka with CI/CD pipelines and automate deployment processes to improve efficiency and reliability
  • Capacity Planning & Scalability: Analyze workloads and plan for horizontal scaling, resource optimization, and failover strategies
What we offer:
  • Work among high-level professionals at the forefront of corporate software solutions and innovation at Europe’s Leading Digital Service Provider
  • Full-time

Senior Kafka Platform Engineer

This role is a Senior Platform Engineer working on the Kafka as a Service projec...
Location:
United Kingdom, London
Salary:
Not provided
Citi
Expiration Date:
Until further notice

Requirements:
  • Experience working in Financial Services or a large complex and/or global environment
  • Experience with the following technologies: Kafka ecosystem (Confluent distribution preferred), Kubernetes and OpenShift, Java, Python, Ansible
  • Consistently demonstrates clear and concise written and verbal communication
  • Comprehensive knowledge of design metrics, analytics tools, benchmarking activities and related reporting to identify best practices
  • Demonstrated analytic/diagnostic skills
  • Ability to work in a matrix environment and partner with virtual teams
  • Ability to work independently, multi-task, and take ownership of various parts of a project or initiative
  • Ability to work under pressure and manage to tight deadlines or unexpected changes in expectations or requirements
  • Proven track record of operational process change and improvement
Job Responsibility:
  • Serve as a technology subject matter expert for internal and external stakeholders and provide direction for all firm mandated controls and compliance initiatives
  • Ensure that all integration of functions meet business goals
  • Define necessary system enhancements to deploy new products and process enhancements
  • Recommend product customization for system integration
  • Identify problem causality, business impact, and root causes
  • Exhibit knowledge of how our own specialty area contributes to the business and apply knowledge of competitors, products and services
  • Advise and mentor junior team members
  • Impact the engineering function by influencing decisions through advice, counsel or facilitating services
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
What we offer:
  • Equal opportunity employer
  • Global benefits
  • Accessibility accommodations
  • Full-time

Data Engineer

This role primarily involves designing, creating, and managing large datasets us...
Location:
United States, Miami
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice

Requirements:
  • Proficiency in Apache Kafka, Apache Pig, and Apache Spark
  • Comprehensive understanding of Cloud Technologies
  • Ability to create and interpret Data Visualization
  • Experience with Algorithm Implementation
  • Strong background in Analytics
  • Familiarity with Apache Hadoop
  • Expertise in API Development
  • Proficient in AWS Technologies
  • Experience with Google Data Studio
Job Responsibility:
  • Develop and implement algorithms to enhance data processing and analytics
  • Utilize tools like Apache Kafka, Apache Pig, and Apache Spark for data management and processing
  • Leverage cloud technologies for efficient data storage and retrieval
  • Collaborate with the team to develop APIs for data usage and sharing
  • Apply AWS Technologies for managing and processing large datasets
  • Implement data visualization strategies to represent data in a comprehensible way
  • Use Google Data Studio for effective data reporting and representation
  • Work with Apache Hadoop for distributed processing of large data sets across clusters
  • Ensure the implementation of efficient algorithms for data processing and analytics
  • Continuously monitor, refine and report on the performance of data management systems
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in company 401(k) plan
  • Full-time

Data Engineer

We are seeking a Data Engineer to join our team based in Bethesda, Maryland. As ...
Location:
United States, Bethesda
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice

Requirements:
  • Proficiency in Apache Kafka, Apache Pig, and Apache Spark
  • Extensive knowledge of cloud technologies
  • Demonstrated ability in data visualization
  • Experience with algorithm implementation
  • Strong analytics skills
  • Expertise in Apache Hadoop
  • Proven experience in API development
  • Familiarity with AWS technologies
Job Responsibility:
  • Design robust data pipelines within Azure Data Lake
  • Implement effective data warehousing strategies
  • Collaborate with Power BI developers
  • Conduct data validation and audits
  • Troubleshoot pipeline processes
  • Work cross-functionally with different teams
  • Utilize Apache Kafka, Apache Pig, Apache Spark, and other cloud technologies
  • Develop APIs and use AWS technologies
  • Leverage Apache Hadoop for effective data management and analytics
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in company 401(k) plan
  • Full-time

Senior Data Engineer

We are looking for a highly skilled Senior Data Engineer to join our team on a l...
Location:
United States, Dallas
Salary:
Not provided
Robert Half
Expiration Date:
Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related discipline
  • At least 7 years of experience in data engineering
  • Strong background in designing and managing data pipelines
  • Proficiency in tools such as Apache Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and Amazon S3
  • Expertise in programming languages like Python, Scala, or Java for data processing and automation
  • Strong knowledge of both relational and NoSQL databases
  • Experience with Kubernetes-based data engineering and hybrid cloud environments
  • Familiarity with data modeling principles, governance frameworks, and quality assurance processes
  • Excellent problem-solving, analytical, and communication skills
Job Responsibility:
  • Design and implement robust data pipelines and architectures to support data-driven decision-making
  • Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks
  • Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink
  • Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3
  • Establish and enforce data governance, quality, and security protocols
  • Manage complex workflows and processes across hybrid and multi-cloud environments
  • Work with diverse data formats, including Parquet and Avro
  • Troubleshoot and fine-tune distributed data systems
  • Mentor and guide engineers at the beginning of their careers
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Free online training
  • Full-time

Data Engineer

Our client is a rapidly growing technology company revolutionizing the automotiv...
Location:
Japan, Tokyo
Salary:
7,000,000 - 13,000,000 JPY / Year
Randstad
Expiration Date:
May 14, 2026

Requirements:
  • 3+ years of experience in data engineering or a similar role
  • Proven experience with data pipelines and infrastructure on AWS (S3, Kinesis Firehose)
  • Hands-on experience with Kafka
  • Proficiency in SQL and Python (or similar)
  • Experience with data governance and quality control
  • Experience creating reports and visualizations using data visualization tools
  • Understanding of data modeling and database design (relational and NoSQL)
  • Excellent collaboration and communication skills
  • Experience with agile methodologies
  • Ability to work effectively in English
Job Responsibility:
  • Lead the development of cutting-edge data pipelines and build the future of electric vehicles in a dynamic and collaborative environment
  • Design, build, and manage the data infrastructure that powers our company's data-driven decisions
  • Leverage cloud technologies (primarily AWS) to ensure reliable data flow
  • Work closely with data analysts and product teams
  • Contribute to insightful data exploration and reporting
  • Design and implement scalable data pipelines using Kafka, AWS Kinesis Firehose, and Kubernetes
  • Manage and optimize data storage solutions on AWS S3
  • Develop ETL/ELT processes for data transformation
  • Monitor and optimize data infrastructure performance and reliability
  • Implement data quality checks and governance best practices
What we offer:
  • Break room
  • Locker room
  • Health insurance
  • Employees' pension insurance
  • Employment insurance
  • Full-time

Principal Data Engineer

PointClickCare is searching for a Principal Data Engineer who will contribute to...
Location:
United States
Salary:
183,200 - 203,500 USD / Year
PointClickCare
Expiration Date:
Until further notice

Requirements:
  • Principal Data Engineer with at least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
  • Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming
  • Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
  • Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
  • Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
  • Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi
  • Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and across the organization
Job Responsibility:
  • Lead and guide the design and implementation of scalable streaming data pipelines
  • Engineer and optimize real-time data solutions using frameworks like Apache Kafka, Flink, Spark Streaming
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
  • Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
  • Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads
  • Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment
  • Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response
  • Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work
What we offer:
  • Benefits starting from Day 1!
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition … and more!
  • Full-time

Senior Software Engineer

The Wikimedia Foundation is looking for a Senior Software Engineer to join our t...
Location:
United States of America
Salary:
141,352 - 175,725 USD / Year
Wikimedia Foundation
Expiration Date:
Until further notice

Requirements:
  • Being comfortable working in a semi-ambiguous environment, similar to that of a startup
  • Experience in supporting complex web applications running on Amazon Web Services or other comparable cloud platforms
  • Experience working with Kafka or similar distributed event processing systems
  • Experience working with Node.js and Go applications
  • Comfortable with configuration management and orchestration tools (ECS, Kubernetes), and modern observability infrastructure (monitoring, metrics and logging)
  • Aptitude for automation and streamlining of tasks
  • Comfortable with shell and scripting languages used in an SRE/Operations engineering context (e.g., Python, Go, Bash, Ruby)
  • Good understanding of Linux/Unix fundamentals and debugging skills
  • Strong English language skills and ability to work independently, as an effective part of a globally distributed team
  • B.S. or M.S. in Computer Science or equivalent in related work experience
Job Responsibility:
  • Bringing your creativity to improve our current infrastructure
  • Being a key part of planning our future technical roadmap
  • Maintaining and improving the reliability of highly used commercial data feeds
  • Supporting new code/feature deployments
  • Troubleshooting, debugging and following-up on emerging issues in our application stack and its surroundings
  • Assisting in the architectural design of new services and making them operate at scale
  • Incident response, diagnosis and follow-up on system outages or alerts across Wikimedia Enterprise’s production infrastructure
  • Sharing our values and work in accordance with them
  • Full-time