As a Cloud Enablement Engineer at Confluent, you will work at the intersection of engineering and education, empowering our users and prospects with the knowledge and skills to use Kafka and Flink on Confluent Cloud effectively. You will guide our valued users as they get started with Confluent Cloud, both through one-on-one conversations and by creating high-quality technical content (code samples, articles, and webinars) that demonstrates how to set up and use Confluent Cloud for real-time data use cases such as real-time data warehousing, migration from open-source Kafka, custom connector setup, and integration with AI and ML tools.
Job Responsibilities:
Develop and maintain high-quality, well-documented guides for the setup, performance-tuning, and scaling needs that customers commonly have
Develop and maintain high-quality, well-documented code examples that illustrate the use of Confluent Cloud with AI and ML tools and techniques such as Hugging Face, OpenAI, Mistral, and RAG
Create engaging and informative technical articles, tutorials, and how-to guides that cater to various levels of expertise
Host webinars and live coding sessions to walk through code examples and demonstrate Confluent Cloud's capabilities in real time
Collaborate with the product and engineering teams to stay updated on Confluent Cloud features and best practices
Provide technical support and guidance to users and prospects, helping them overcome challenges and successfully implement Confluent Cloud solutions
Participate in community forums and social media platforms to share knowledge and engage with the Confluent community
Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field
Strong proficiency in Java, Python, or C/C++ programming, with experience in independently developing, testing, and publishing high-quality code
Solid understanding of distributed systems and streaming data architectures
Experience with cloud infrastructure (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes)
Familiarity with CI/CD pipelines and version control systems (e.g., Git)
Knowledge of data processing frameworks (e.g., Spark, Flink) and databases (SQL and NoSQL)
Ability to troubleshoot and optimize system performance
Excellent communication skills, with the ability to explain complex technical concepts in a clear and engaging manner
Self-motivated and proactive, with a passion for teaching and enabling others to succeed
Nice to have:
2+ years of experience setting up and working with Apache Kafka
A basic understanding of AI and ML concepts, and experience integrating Kafka with AI tools and techniques such as Hugging Face, OpenAI, Mistral, and RAG
Experience in creating technical content such as articles, tutorials, and webinars