BA Markets wants to professionalise and streamline its data streaming activities. To that end, a new agile team has been created to build a technical platform that gives BA Markets users a single hub for all required streaming services. The team will develop the service suite largely in-house but will also rely on managed vendor services such as Databricks. The streaming technology used is Kafka. Many BA Markets users are technically advanced, so the central data streaming team must constantly stay up to date to deliver state-of-the-art services. This means ongoing development opportunities for the candidate, who must be comfortable working in a fast-changing, international environment.

The team consists of four members in Hamburg and one in Stockholm, and is set to add three colleagues in Poland. This is a hybrid role supporting the Data Engineers and Software Developers with automation and simplification for the users. The vision for this role is to “Simplify the user experience”.
Job Responsibilities:
Stream architecture, development, and deployment
Automate workflows and orchestrate data pipelines
Implement CI/CD routines
Implement and monitor “system health” with observability tools and data quality checks
Support the development of client libraries so other applications can integrate streams into their own applications and services
Perform Python development
Develop “glue code” that 95% of use cases can apply, as sketched below
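For illustration only, here is a minimal sketch of the kind of reusable Python “glue code” this role might produce: a generic consume-transform-produce loop using the confluent-kafka client. The broker address, topic names, and the transform step are all hypothetical placeholders, not part of the actual platform.

```python
from confluent_kafka import Consumer, Producer

# Hypothetical configuration; real values would come from platform config.
BROKERS = "localhost:9092"

consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "example-glue-app",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": BROKERS})
consumer.subscribe(["input-topic"])  # hypothetical source topic

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1 s for the next record
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Placeholder transform step: uppercase the payload.
        transformed = msg.value().decode("utf-8").upper().encode("utf-8")
        producer.produce("output-topic", key=msg.key(), value=transformed)
        producer.poll(0)  # serve delivery callbacks
except KeyboardInterrupt:
    pass
finally:
    producer.flush()
    consumer.close()
```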
Requirements:
Interest in understanding user needs, with several years of hands-on experience as a software developer with an interest in data engineering responsibilities, or vice versa
A proactive, communicative team player
Fluent in English
Deep understanding of Kafka architecture (brokers, topics, partitions, replication), and experience with Kafka Streams, Kafka Connect, and schema registry (e.g., Confluent); a brief schema-registry sketch follows this list
Proficiency in designing and managing Kafka clusters (including monitoring and scaling)
Hands-on experience building and maintaining real-time ETL pipelines
Familiarity with stream processing frameworks such as Apache Flink or Apache Spark Streaming
Strong skills in Python and Java, at least basic Scala, and solid SQL experience
Has built several CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions)
Has used Infrastructure as Code (IaC) tools such as Terraform or Ansible
Hands-on experience with containerization (Docker) and orchestration (Kubernetes)
Knows monitoring tools such as Grafana, the ELK stack, and Datadog
Ideally also knows Kafka-specific monitoring tools (e.g., Burrow, Confluent Control Center)
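As a hedged illustration of the schema-registry requirement above, the sketch below looks up the latest schema registered for a topic’s value subject using the confluent-kafka Schema Registry client. The registry URL and subject name are hypothetical.

```python
from confluent_kafka.schema_registry import SchemaRegistryClient

# Hypothetical registry endpoint; in practice this comes from configuration.
client = SchemaRegistryClient({"url": "http://localhost:8081"})

# Subjects conventionally follow "<topic>-value" for record values.
registered = client.get_latest_version("orders-value")
print(registered.schema_id)           # numeric id embedded in serialized records
print(registered.schema.schema_type)  # e.g., "AVRO", "JSON", or "PROTOBUF"
```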
Nice to have:
Experience with Databricks components
Experience with schema management best practices
What we offer:
Good remuneration
Challenging and international work environment
Possibility to work with some of the best in the field
Working in interdisciplinary teams
Support from committed colleagues
Attractive employment conditions
Opportunities for personal and professional development