Data is one of the fundamental pieces of Krea. Huge amounts of data power our AI training pipelines, our analytics and observability, and many of the core systems that make Krea tick.
Job Responsibilities:
Build distributed systems that process petabytes of files of all kinds (images, video, and even 3D data)
Work closely with our research team to build ML pipelines and deploy models to make sense of raw data
Play with massive amounts of compute on huge Kubernetes GPU clusters
Learn machine learning engineering
Requirements:
Python
PyArrow
DuckDB
SQL
Massive relational databases
PyTorch
Pandas
NumPy
Kubernetes
Experience designing and implementing large-scale ETL systems
Fundamental knowledge of containerization, operating systems, file-systems, and networking