At Unleash live, we don’t just follow trends—we set them. We are a world-leading AI video analytics platform transforming high-velocity visual data into real-time, actionable insights for the world’s most critical industries. From autonomous drone inspections to real-time traffic optimization, we operate at the frontier of what’s possible. We are looking for an engineer energized by turning experimental technology into enterprise-grade reality—whether in the cloud or on a remote sensor in the field.

You will join our high-octane MLOps Team, the architectural heart of our AI ecosystem and the bridge between digital intelligence and physical reality. We manage the entire model lifecycle—from training and optimization to large-scale deployment. We build ecosystems that allow models to thrive in the wild, ensuring our intelligence is always accurate, always on, and ahead of the curve.

This role is for a versatile builder at the intersection of Machine Learning Operations, Data Architecture, and Edge Integration. You will support the deployment strategies powering our global platform to ensure seamless performance. We seek an engineer with a genuine passion for Computer Vision—one who understands the unique infrastructure challenges of visual data: latency, frame-rate consistency, and processing high-volume streams under compute constraints.
Job Responsibilities:
Drive improvements to the core developer platform
Design and build agentic solutions for the platform, such as an AI App Agent Builder
Support robust inference pipelines, model serving architectures, and automated training/monitoring loops at scale
Command AWS resources (e.g., SageMaker, Bedrock) to scale AI workflows globally with maximum reliability
Provide occasional support for edge-hybrid deployment pipelines as needed, with a primary focus on core data processing systems
Requirements:
Software Design Patterns: Production-grade code for system orchestration in Node.js (Backend) and Angular (Frontend)
Cloud Proficiency: Experience with AWS (SageMaker, Bedrock, Greengrass, etc.) to manage high-volume AI workflows at scale
Linux & Containers: Experience with containerization (Docker); container orchestration skills are a nice-to-have
Database Fluency: A working understanding of DynamoDB is sufficient
Data Focus: Experience with Databricks (Unity Catalog, Delta Lake), expert-level Data Modeling, and distributed computing (Apache Spark)