Become a key player in our data engineering team, grow professionally, and help us build the foundation of our data-driven success! As a Data Engineer, you will join our growing team, working alongside experienced data professionals to design and implement scalable data systems. You will contribute to the development of data pipelines, ensure data quality, and collaborate on projects that expand our data capabilities.
Job Responsibilities:
Assist in designing, building, and maintaining efficient data pipelines
Work on data modeling tasks to support the creation and maintenance of data warehouses
Integrate data from multiple sources, ensuring data consistency and reliability
Collaborate in implementing and managing data orchestration processes and tools
Help establish monitoring systems to maintain high standards of data quality and availability
Work closely with the Data Architect, Senior Data Engineers, and other members across the organization on various data infrastructure projects
Participate in the optimization of data processes, seeking opportunities to enhance system performance
Requirements:
A university degree, ideally in Computer Science or related science, technology or engineering field
2+ years of relevant work experience in data engineering roles
Experience in data acquisition, data lakes, warehousing, modeling, and orchestration
Proficiency in SQL (including window functions and CTEs)
Proficiency in RDBMS (e.g., MySQL, PostgreSQL)
Strong programming skills in Python (with libraries such as Polars; optionally the Arrow / PyArrow API)
First exposure to OLAP query engines (e.g., ClickHouse, DuckDB, Apache Spark)
Familiarity with Apache Airflow (or similar tools like Dagster or Prefect)
Strong teamwork and communication skills
Ability to work independently and manage your time effectively
Comfortable working in a diverse, international environment
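As a loose illustration of the SQL proficiency asked for above (window functions and CTEs), here is a minimal self-contained sketch using Python's standard-library sqlite3 module; the table, data, and query are hypothetical and not part of the role itself:

```python
import sqlite3

# Hypothetical data: rank customers by total order value using a CTE
# plus a window function (SQLite supports both since version 3.25).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount REAL);
INSERT INTO orders VALUES
  ('alice', 120.0), ('alice', 80.0), ('bob', 200.0), ('bob', 50.0);
""")

query = """
WITH totals AS (                                   -- CTE: per-customer totals
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
SELECT customer,
       total,
       RANK() OVER (ORDER BY total DESC) AS rnk    -- window function
FROM totals;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('bob', 250.0, 1), ('alice', 200.0, 2)]
```

The same pattern (a CTE feeding a windowed SELECT) carries over directly to MySQL and PostgreSQL, the RDBMSs named in the requirements.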
Nice to have:
Knowledge of common columnar file formats used in data applications
Knowledge of data partitioning and incremental scalability
Knowledge of data quality and data governance
Experience in entity disambiguation
Familiarity with orchestration and containerization technologies (e.g., Docker, Kubernetes)
Knowledge of Linux (Ubuntu/Debian)
Experience with Git and Atlassian tools (Jira, Confluence)
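The data-partitioning point above can be sketched in a few lines; Hive-style date partitions are one common convention, and the bucket path and field names here are purely hypothetical:

```python
from datetime import date

# Hypothetical sketch: Hive-style partition paths (dt=YYYY-MM-DD) let
# OLAP engines prune whole partitions instead of scanning the dataset.
def partition_path(base: str, event_date: date) -> str:
    """Build the storage prefix for one daily partition."""
    return f"{base}/dt={event_date.isoformat()}"

print(partition_path("s3://bucket/events", date(2024, 5, 1)))
# s3://bucket/events/dt=2024-05-01
```

Incremental loads then only write (or rewrite) the partitions whose dates changed, rather than the full table.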