Within a dynamic, high-calibre team, you will contribute to both R&D and client projects focused on extracting value from data, as well as to the design and implementation of operational software built on advanced Optimisation and Machine Learning.
Job Responsibilities:
Design and develop innovative and high-performance software solutions addressing industrial challenges, primarily using the Python language and a microservices architecture
Gather user and business needs to design the data collection and storage solutions best suited to each use case
Develop technical solutions for data collection, cleaning, and processing, then industrialise and automate them
Contribute to building technical architectures based on data and Big Data environments
Carry out development work aimed at industrialising and orchestrating computations (statistical and optimisation models) and participate in software testing and qualification
Requirements:
Degree from a top engineering school or a high-level university program
At least 3 years of experience in designing and developing data-driven solutions with high business impact, particularly in industrial or large-scale environments
Excellent command of Python for both application development and data processing, with strong expertise in libraries such as Pandas, Polars, NumPy, and the broader Python Data ecosystem
Experience implementing data processing pipelines using tools like Apache Airflow, Databricks, Dask, or flow orchestrators integrated into production environments
Experience contributing to large-scale projects combining data analysis, workflow orchestration, back-end development (REST APIs and/or messaging), and industrialisation, within a DevOps/DevSecOps-oriented framework
Proficient in using Docker for processing encapsulation and deployment
Experience with Kubernetes for orchestrating workloads in cloud-native architectures
Motivated by practical applications of data in socially valuable sectors such as energy, mobility, or health, and comfortable in environments where autonomy, rigour, curiosity, and teamwork are valued
Fluency in English and French is required
Nice to have:
Comfortable managing relational databases (PostgreSQL, MySQL) and non-relational databases (such as MongoDB, Elasticsearch), and able to integrate them into robust, secure processing chains
Hands-on experience managing large or semi-structured files (CSV, JSON, Parquet) via object storage such as AWS S3, Azure Blob Storage, Google Cloud Storage, or alternatives, in connection with data platforms and distributed processing
What we offer:
Up to 2 days of remote work per week
Flexible working hours
Centrally located offices in every city where we operate