Coralogix is a modern, full-stack observability platform transforming how businesses process and understand their data. Our unique architecture powers in-stream analytics without reliance on expensive indexing or hot storage. We specialize in comprehensive monitoring of logs, metrics, traces, and security events, with features such as APM, RUM, SIEM, Kubernetes monitoring, and more, all enhancing operational efficiency and reducing observability spend by up to 70%.

We are seeking engineers with hands-on experience developing and managing distributed systems and microservices in a production environment. The ideal candidate has a solid background in infrastructure and operations. The team builds high-throughput ingestion and processing pipelines, efficient storage on object stores, and systems for data governance, usage reporting, and routing data to multiple physical locations. You will work on cloud-native production systems written in Rust and Scala, operating at scale with Kafka, Postgres, Redis, and object storage, running on Kubernetes in a multi-cloud environment.
Job Responsibilities:
Develop and operate distributed systems in production
Build Kafka-based ingestion and processing pipelines
Design systems for data governance, retention, deletion, and usage reporting
Work with Postgres and Redis, applying solid database design and operational knowledge
Implement efficient persistence using column-oriented data formats and object storage
Requirements:
Located in Israel
5+ years of software development experience
Production experience with large-scale Apache Kafka or comparable distributed data streaming platforms
Strong understanding of distributed systems, databases, and production operations
Experience with Scala or Rust
B.Sc. in Computer Science or an equivalent field
Nice to have:
Experience with Kafka Streams or similar frameworks
Experience with column-oriented data formats and large-scale analytical storage systems