You will tackle end-to-end problems that make our AI models work better on robots. You might add new functionality to our video processing data pipeline, then update our ML data loader, then train some models to validate your change, then test those changes in the real world on a robot. This requires stringing together many distributed Python services to accomplish a given data-processing or application task. It also requires marshaling large quantities of cloud infrastructure to execute this business logic efficiently at scale.
Job Responsibilities:
Designing and implementing any new idea that can help make our entire system more robust, scalable, or faster
Overhauling existing systems and services to handle the next 10x of scale
Writing the business logic that gets our robots the data they need, or that gives our customers the right access to our robots
Requirements:
Extensive experience building complex distributed applications or data pipelines at scale
Experience processing petabytes of data (bonus if it’s video data)
Expertise in Python and basic distributed infrastructure skills
Solid foundation in modern ML techniques and experience with large-scale ML training and production deployments
Experience with distributed cloud infrastructure and a solid understanding of cloud networking, permissions, and container orchestration (Kubernetes)