Job Description:
ABOUT THE JOB

- Design, implement, and maintain scalable backend services in Java and Python to support the graph platform and integrate with systems such as Jira, Codebeamer, and Splunk
- Develop and optimize high-performance data pipelines leveraging Apache Kafka, ensuring reliable, real-time data streaming across the tool ecosystem
- Apply expertise in SPARQL, GraphQL, and RML to build advanced data models, mappings, and APIs, transforming diverse data sources into a unified, tool-agnostic ontology
- Collaborate closely with international development teams and stakeholders to gather requirements, design innovative features, and deliver solutions aligned with business objectives
- Participate in peer code reviews, maintain high standards of code quality, and take end-to-end responsibility for the maintenance and support of developed solutions

ABOUT YOU

- Bachelor's degree in Computer Science, Software Engineering, or a related field
- Minimum 3 years of professional experience designing and developing complex backend applications, with a strong background in Java and working knowledge of Python
- Hands-on experience with Apache Kafka or similar technologies, and a solid understanding of event-driven architectures
- Experience or a strong theoretical background in Knowledge Graphs, SPARQL, GraphQL, and RML
- Good command of English
- Understanding of the APIs and architectures of tools such as Jira, Codebeamer, or IBM ETM is a plus
- Experience with generative AI tools or an interest in Large Language Models (LLMs) is a plus
- Strong problem-solving skills and a sense of ownership, with the ability to independently address complex technical challenges and deliver robust solutions
- Excellent communication skills; you thrive in collaborative, agile, globally distributed teams