We are seeking a Data Engineer with a systems mindset to own and simplify access to the massive amount of data generated by our fleet of autonomous drones. You will play a key role in improving how engineers, researchers, and product stakeholders interact with logs, video, sensor data, and derived analytics, making it easier to extract insights and build better autonomy features.
Job Responsibilities:
Design systems to unify scattered data sources (logs, telemetry, analytics tables, media, etc.) into easily discoverable and queryable formats
Enable efficient curation of machine learning datasets by tagging, indexing, and filtering for relevant scenarios (e.g., environmental conditions, sensor behavior, scene attributes)
Build tools to automatically surface anomalies, regressions, or key signatures in logs and telemetry data (e.g., CPU usage spikes, sensor noise, degraded conditions)
Develop mechanisms to rapidly compare releases and surface regressions in performance metrics, resource usage, and data quality
Architect and maintain scalable data pipelines and services to index, enrich, and query multimodal autonomy data (e.g., time series, media, tabular analytics)
Collaborate with autonomy and ML teams to understand data usage patterns and build tools that streamline their workflows
Develop efficient methods for search, tagging, and filtering over structured and unstructured data
Help design systems to label and retrieve rare or complex scenarios, both automatically at ingestion and via manual search
Build dashboards and visualizations to support release monitoring and anomaly detection across a variety of system health signals
Requirements:
3+ years of experience in data engineering, backend engineering, or infrastructure roles
Exposure to robotics, autonomy, or real-world sensor data pipelines
Strong proficiency in Python (or similar language) and SQL
Experience designing scalable data pipelines with tools such as Apache Spark, Airflow, dbt, or equivalent
Familiarity with log processing, time-series analysis, or working with large volumes of semi-structured data
Ability to work cross-functionally with ML engineers, autonomy engineers, and product stakeholders
Systems thinking: you enjoy untangling complexity and designing elegant abstractions that empower others
Nice to have:
Experience building end-to-end log or telemetry analysis tools that leverage LLMs or natural language interfaces to enable intuitive querying, anomaly detection, or insight extraction
Knowledge of computer vision, scene understanding, or machine learning workflows for data curation
Experience working with robotics, sensor data, or computer vision pipelines
What we offer:
Equity in the form of stock options
Comprehensive benefits packages
Relocation assistance may also be provided for eligible roles