This role has been designed as 'Onsite' with an expectation that you will primarily work from an HPE office.

Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.
Job Responsibilities:
Design and develop applications in streaming or batch mode over Kafka and Spark (see the sketch after this list)
Evaluate and implement new technologies and tools to improve efficiency and reduce cost
Analyze and validate telemetry data, learn error patterns, and produce views that show network problem conditions and patterns
Work with a team of data scientists, domain experts, architects and other engineers to increase the accuracy of AI outcomes in our device management product
Build CI/CD pipelines
Work with SMEs and data scientists to increase the accuracy of actionable insights
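For context, the following is a minimal, hypothetical sketch of the kind of streaming application the first responsibility describes: a PySpark Structured Streaming job that reads device telemetry from a Kafka topic and parses it for downstream analysis. The broker address, topic name, and schema fields are illustrative assumptions, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

# Hypothetical telemetry payload schema; field names are assumptions.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
])

# Subscribe to a Kafka topic and parse each JSON message into columns.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
       .option("subscribe", "device-telemetry")              # hypothetical topic
       .load())
parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("t"))
          .select("t.*"))

# Write parsed records to the console sink for inspection; a real job
# would write to a durable sink such as a data lake table.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()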
Requirements:
Master's Degree in Computer Science, Information Systems, or equivalent
At least 4 years of work experience in relevant technologies
A Bachelor's degree may be considered if the candidate demonstrates exceptional abilities
2+ years of programming experience in Python
1+ years of programming experience in Java
Expertise in big data technologies such as Apache Spark or Kafka, with at least 1 year of relevant experience
Experience with containerization and orchestration tools such as Kubernetes and Airflow, with at least 1 year of relevant experience (see the sketch after this list)
Experience developing applications in cloud computing environments such as AWS, with at least 2 years of relevant experience
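As with the streaming example above, here is a minimal, hypothetical sketch of the orchestration experience the requirement refers to: a small Airflow DAG (recent Airflow 2.x) that schedules a daily telemetry-validation task. The DAG id, task, and schedule are illustrative assumptions.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def validate_telemetry():
    # Placeholder body; a real task would load and validate a telemetry batch.
    print("validating telemetry batch")

# A hypothetical daily pipeline; dag_id and schedule are assumptions.
with DAG(
    dag_id="telemetry_validation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="validate_telemetry",
        python_callable=validate_telemetry,
    )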
Nice to have:
Experience developing Generative AI and agentic AI-based applications
Experience with managing and analyzing large data sets
Good understanding of Wi-Fi wireless networking, switching, and routing concepts