We’re looking for a Backend Engineer to help us scale the systems behind yeet’s real-time observability platform. Our backend is composed primarily of Python microservices, but this isn’t just another “wrap an API around Postgres” role. You’ll be working on streaming systems, high-throughput data ingestion, and async task pipelines that need to move fast, replicate elastically, and stay reliable under load.
Job Responsibilities:
Help scale the systems behind yeet’s real-time observability platform
Work on streaming systems, high-throughput data ingestion, and async task pipelines that need to move fast, replicate elastically, and stay reliable under load
Requirements:
Solid experience building backend services in Python, from design to production
Comfortable with streaming data pipelines, pub/sub, and real-time messaging systems
Understand how to design for throughput, reliability, and fault tolerance in distributed systems
Can simulate high-load conditions and deliver code that’s testable, measurable, and production-ready
Enjoy solving problems that don’t fit neatly into a CRUD API and like thinking about performance trade-offs
Pragmatic: know when to optimize and when to ship
Nice to have:
Experience with Kafka, RabbitMQ, Redis, Postgres, or DuckDB in production environments
Familiarity with async frameworks like asyncio, and with Celery for task queues and background jobs
Hands-on work with real-time protocols (WebSockets, gRPC streaming, event-driven architectures)
Prior exposure to observability or monitoring platforms, especially in high-scale environments
A track record of building or scaling high-throughput ingestion pipelines for data storage and processing
Experience with Terraform and comfort contributing to infrastructure design