AI Data Engineer

Hewlett Packard Enterprise (https://www.hpe.com/)

Location:
Not provided

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

This role has been designed as ‘Onsite’ with an expectation that you will primarily work from an HPE office. Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know that varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Responsibilities:

  • Application design and development in streaming or batch mode over Kafka and Spark
  • Evaluate and implement new technologies and tools to improve efficiency and reduce cost
  • Analyze and validate telemetry data, learn error patterns and produce views that show network problem conditions and patterns
  • Work with a team of data scientists, domain experts, architects and other engineers to increase the accuracy of AI outcomes in our device management product
  • Build CI/CD pipelines
  • Work with SMEs and data scientists to increase accuracy of actionable insights

Requirements:

  • Master's Degree in Computer Science, Information Systems, or equivalent
  • At least 4 years of work experience in relevant technologies
  • A Bachelor's degree may be considered if the candidate demonstrates exceptional abilities
  • 2+ years of programming experience in Python
  • 1+ years of programming experience in Java
  • Expertise in big data technologies such as Apache Spark or Kafka with at least 1 year of relevant experience
  • Experience with containerization and orchestration tools such as Kubernetes and Airflow with at least 1 year of relevant experience
  • Experience developing applications in Cloud computing environments such as AWS with at least 2 years of relevant experience

Nice to have:

  • Experience with developing Generative-AI and Agentic AI based applications
  • Experience with managing and analyzing large data sets
  • Good understanding of WiFi Wireless Networking, Switching and Routing concepts

What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion

Additional Information:

Job Posted:
July 22, 2025

Employment Type:
Full-time

Work Type:
On-site work