We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.
Job Responsibilities:
Design, develop, and optimize DBT models to support scalable data transformations
Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect
Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks
Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures
Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications
Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution (a minimal orchestration sketch follows this list)
Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows
Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance
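A minimal sketch of the kind of automated DBT execution described above, assuming Airflow 2.4+ with the dbt CLI available on the worker; the project path, schedule, and DAG name are illustrative placeholders, not part of the role's actual stack:

```python
# Minimal Airflow DAG that runs and then tests a DBT project once per day.
# DBT_PROJECT_DIR and the dag_id are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_PROJECT_DIR = "/opt/dbt/analytics"  # assumed project location

with DAG(
    dag_id="dbt_daily_run",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Build all models in the project.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_PROJECT_DIR} --profiles-dir {DBT_PROJECT_DIR}",
    )
    # Run data tests only after the models build successfully.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_PROJECT_DIR} --profiles-dir {DBT_PROJECT_DIR}",
    )
    dbt_run >> dbt_test
```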
Requirements:
5+ years of hands-on experience with DBT, including model design, testing, and performance tuning
5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning
5+ years of programming experience, especially in building custom DBT macros, scripts, and APIs, and in working with AWS services using boto3 (a minimal example is sketched after this list)
3+ years of experience with orchestration tools such as Apache Airflow or Prefect for scheduling DBT jobs
Hands-on experience with modern cloud data platforms such as Snowflake, Redshift, Databricks, or BigQuery
Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch)
Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform)
Ability to communicate timelines effectively and deliver the MVPs committed for each sprint
Strong analytical and problem-solving skills, with the ability to work across cross-functional teams
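A minimal boto3 sketch of the kind of AWS integration called for above, assuming files landing in S3 are forwarded to SQS for downstream ingestion; the bucket, prefix, and queue URL are placeholders, and a production pipeline would more likely rely on S3 event notifications than on polling:

```python
# Sketch: enqueue newly landed S3 objects for ingestion via SQS.
# Bucket, prefix, and queue URL below are hypothetical placeholders.
import json

import boto3

S3_BUCKET = "example-ingestion-bucket"
S3_PREFIX = "mes/raw/"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-queue"

s3 = boto3.client("s3")
sqs = boto3.client("sqs")


def enqueue_objects() -> int:
    """List objects under the landing prefix and push one SQS message per file."""
    sent = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=S3_BUCKET, Prefix=S3_PREFIX):
        for obj in page.get("Contents", []):
            sqs.send_message(
                QueueUrl=QUEUE_URL,
                MessageBody=json.dumps({"bucket": S3_BUCKET, "key": obj["Key"]}),
            )
            sent += 1
    return sent


if __name__ == "__main__":
    print(f"Enqueued {enqueue_objects()} objects")
```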
Nice to have:
Experience in hardware manufacturing data processing
Contributions to open-source data engineering tools
Knowledge of Tableau or other BI tools for data visualization
Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization