HPE is seeking an experienced Temporal Engineer with 7+ years of expertise in designing, deploying, and managing workflow orchestration platforms and strong hands-on knowledge of Temporal. The role involves enabling reliable, scalable, and fault-tolerant execution of distributed applications while integrating Temporal with microservices, CI/CD pipelines, and cloud-native ecosystems.
Job Responsibilities:
Platform Deployment & Management:
Install, configure, and manage Temporal clusters (on-premises and in the cloud)
Ensure high availability, scalability, and fault tolerance of workflow orchestration
Manage upgrades, patches, and the lifecycle of Temporal services
Workflow Development & Integration:
Collaborate with development teams to design and implement Temporal workflows
Integrate Temporal with microservices, data pipelines, and DevOps platforms
Support multiple languages (Go, Java, Python) for workflow execution
Monitoring & Troubleshooting:
Implement monitoring and observability using Prometheus, Grafana, ELK, and OpenTelemetry
Troubleshoot workflow failures, latency issues, and worker scalability challenges
Build automation scripts and runbooks for operational efficiency
Security & Governance:
Configure authentication, RBAC, and TLS encryption for Temporal services
Ensure compliance with enterprise security frameworks
Collaborate with cybersecurity teams on vulnerability management and Zero Trust alignment
Collaboration & Support:
Work closely with DevOps, cloud, and application teams to ensure seamless adoption
Provide guidance on workflow best practices, retries, error handling, and compensation logic (see the sketch after this list)
Offer L3 support for production workflows and disaster recovery (DR) readiness
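For context on the guidance item above about retries, error handling, and compensation logic, a minimal sketch using Temporal's Go SDK (Go being one of the languages listed in this posting) might look like the following. The order-processing scenario, activity names (ChargePayment, ReserveInventory, RefundPayment), and retry settings are illustrative assumptions, not details of the role.

```go
package main

import (
	"context"
	"time"

	"go.temporal.io/sdk/temporal"
	"go.temporal.io/sdk/workflow"
)

// Hypothetical activities standing in for real business logic.
func ChargePayment(ctx context.Context, orderID string) error    { return nil }
func ReserveInventory(ctx context.Context, orderID string) error { return nil }
func RefundPayment(ctx context.Context, orderID string) error    { return nil }

// OrderWorkflow charges a payment, then reserves inventory; if the reservation
// step fails after its retries are exhausted, it runs a compensating refund.
func OrderWorkflow(ctx workflow.Context, orderID string) error {
	// Per-activity retry policy: exponential backoff, capped at five attempts.
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: 30 * time.Second,
		RetryPolicy: &temporal.RetryPolicy{
			InitialInterval:    time.Second,
			BackoffCoefficient: 2.0,
			MaximumAttempts:    5,
		},
	})

	if err := workflow.ExecuteActivity(ctx, ChargePayment, orderID).Get(ctx, nil); err != nil {
		return err
	}
	if err := workflow.ExecuteActivity(ctx, ReserveInventory, orderID).Get(ctx, nil); err != nil {
		// Compensation: undo the earlier side effect before surfacing the error.
		_ = workflow.ExecuteActivity(ctx, RefundPayment, orderID).Get(ctx, nil)
		return err
	}
	return nil
}
```

The retry policy applies to each activity invocation, and the refund call acts as a saga-style compensation step when a later step ultimately fails.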
Requirements:
7+ years of experience in workflow orchestration, distributed systems, or backend engineering
Hands-on expertise with Temporal (or Cadence), including cluster setup and workflow authoring
Strong understanding of microservices architectures and event-driven systems
Proficiency in Go, Java, or Python for writing workflows and activities (see the worker sketch after this list)
Experience with Kubernetes, Docker, and cloud platforms (AWS/GCP/Azure)
Familiarity with CI/CD pipelines and GitOps practices
Bachelor's or Master's degree in Computer Science, IT, or related field
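As a companion to the workflow sketch above, and to the requirement on writing workflows and activities, here is a minimal worker entry point using the Go SDK. It is assumed to live in the same package as the earlier sketch; the task queue name and frontend address are placeholders, not details of HPE's environment.

```go
package main

import (
	"log"

	"go.temporal.io/sdk/client"
	"go.temporal.io/sdk/worker"
)

func main() {
	// Connect to a Temporal frontend; the address below is a placeholder.
	c, err := client.Dial(client.Options{HostPort: "temporal-frontend.example.internal:7233"})
	if err != nil {
		log.Fatalln("unable to create Temporal client:", err)
	}
	defer c.Close()

	// Register the workflow and activities from the earlier sketch on a task queue.
	w := worker.New(c, "orders-task-queue", worker.Options{})
	w.RegisterWorkflow(OrderWorkflow)
	w.RegisterActivity(ChargePayment)
	w.RegisterActivity(ReserveInventory)
	w.RegisterActivity(RefundPayment)

	// Poll the task queue until interrupted.
	if err := w.Run(worker.InterruptCh()); err != nil {
		log.Fatalln("worker exited:", err)
	}
}
```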
Nice to have:
Experience with data pipeline orchestration tools (Airflow, Argo Workflows, Prefect)
Exposure to Kafka, RabbitMQ, or other event streaming/message queue systems
Knowledge of performance tuning and horizontal scaling for Temporal clusters
Kubernetes certifications (CKA/CKAD) or cloud certifications (AWS, Azure, GCP)
Temporal community or enterprise training
What we offer:
Health & Wellbeing: a comprehensive suite of benefits supporting physical, financial, and emotional wellbeing
Personal & Professional Development: programs to help you reach your career goals
Unconditional Inclusion: an inclusive work environment that celebrates individual uniqueness