We are looking for a skilled Data Test Engineer who can design, build, and validate end-to-end data pipelines. In this role, you will work closely with data engineers and business teams to ensure that data is accurate, complete, and reliable. You will be responsible for testing data workflows, writing complex SQL queries, automating data quality checks, and integrating validations into CI/CD pipelines. If you have a strong background in data engineering and a keen eye for quality, we’d love to have you on our team.
Job Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines to process large volumes of structured and unstructured data using Azure Data Factory, PySpark, and SQL-based tools
Collaborate with data architects and analysts to understand transformation requirements and implement business rules correctly
Develop, execute, and performance-tune complex SQL queries to validate and transform data across pipeline workflows
Perform rigorous data validation including source-to-target mapping (S2T), data profiling, reconciliation, and transformation rule testing (see the reconciliation sketch after this list)
Conduct unit, integration, regression, and performance testing for data pipelines and storage layers
Automate data quality checks using Python and frameworks like Great Expectations, DBT, or custom-built tools
Monitor data pipeline health and implement observability through logging, alerting, and dashboards
Integrate testing into CI/CD workflows using tools like Azure DevOps, Jenkins, or GitHub Actions
Troubleshoot and resolve data quality issues, schema changes, and pipeline failures
Ensure compliance with data privacy, security, and governance policies
Maintain thorough documentation for data flows, test logic, and validation processes
Requirements:
4+ years of experience in Data Engineering and Data/ETL Testing
Strong expertise in writing and optimizing SQL queries (joins, subqueries, window functions, performance tuning); see the window-function sketch after this list
Proficiency in Python or PySpark for data transformation and automation
Hands-on experience with ETL tools such as Azure Data Factory, Talend, SSIS, or Informatica
Familiarity with cloud platforms, preferably Azure; AWS or GCP is a plus
Experience working with data lakes, data warehouses (Snowflake, BigQuery, Redshift), and modern data platforms
Knowledge of version control systems (Git), issue tracking tools (JIRA), and Agile methodologies
Exposure to data testing frameworks like Great Expectations, DBT tests, or custom validation tools
Experience integrating data testing into CI/CD pipelines (see the pytest sketch after this list)
Nice to have:
Familiarity with Airflow, Databricks, BI tools (Power BI, Tableau), and metadata management practices
What we offer:
Competitive salary aligned with industry standards
Hands-on experience with enterprise-scale data platforms and cloud-native tools
Opportunities to work on data-centric initiatives across AI, analytics, and enterprise transformation
Access to internal learning accelerators, mentorship, and career growth programs
Flexible work culture, wellness initiatives, and comprehensive health benefits