Design, build, and optimize end-to-end ETL/ELT data pipelines using Teradata SQL, BTEQ, TPT, and FastExport
Translate business requirements into robust technical specifications, data mappings, and transformation logic
Implement incremental loads, SCD strategies, and reconciliation checks to ensure data completeness and accuracy (a minimal illustrative sketch follows this list)
Develop reusable modules, parameterized scripts, and standards for code consistency
Model data structures (staging, ODS, dimensional models) aligned to Teradata best practices
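The posting itself contains no code; purely as an illustration of the incremental load and SCD handling mentioned above, the following is a minimal sketch of a two-step SCD Type 2 refresh run through the open-source teradatasql Python driver. All table, column, and connection names (stg_customer, dim_customer, cust_id, and so on) are hypothetical placeholders, not details of the role.

```python
# Minimal sketch only: a two-step SCD Type 2 refresh on Teradata.
# All object names and connection details below are hypothetical.
import teradatasql

# Step 1: close out current dimension rows whose attributes changed in staging.
EXPIRE_CHANGED = """
UPDATE dim_customer
SET end_dt = CURRENT_DATE, is_current = 'N'
WHERE is_current = 'Y'
  AND cust_id IN (
      SELECT s.cust_id
      FROM stg_customer s
      JOIN dim_customer d
        ON d.cust_id = s.cust_id AND d.is_current = 'Y'
      WHERE s.cust_name <> d.cust_name OR s.segment <> d.segment
  )
"""

# Step 2: insert a new open row for every staged key that has no current row
# (brand-new keys, plus the changed keys just expired in step 1).
INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (cust_id, cust_name, segment, start_dt, end_dt, is_current)
SELECT s.cust_id, s.cust_name, s.segment, CURRENT_DATE, DATE '9999-12-31', 'Y'
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.cust_id = s.cust_id AND d.is_current = 'Y'
WHERE d.cust_id IS NULL
"""

def run_scd2_load(host: str, user: str, password: str) -> None:
    """Expire changed current rows, then insert new and changed versions."""
    with teradatasql.connect(host=host, user=user, password=password) as con:
        with con.cursor() as cur:
            cur.execute(EXPIRE_CHANGED)
            cur.execute(INSERT_NEW_VERSIONS)

if __name__ == "__main__":
    run_scd2_load("td-host.example.com", "etl_user", "********")
```

The order matters: expiring changed current rows first lets the insert step pick up both brand-new keys and new versions of the keys that were just closed.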
Requirements:
10+ years' experience in end-to-end ETL and analytics application development on Teradata-based data warehouse and analytical platforms
Extensive experience developing Teradata SQL-based ETL and analytic workflows using native utilities (BTEQ, TPT, FastExport)
Very good knowledge of Unix/Linux shell scripting and job scheduling tools such as Autosys (a minimal wrapper sketch follows this list)
Knowledge of and experience with CI/CD-based development and deployment using tools such as JIRA and Bitbucket
Excellent written communication and diagramming skills
Strong analytical and problem-solving abilities
Speaking/presentation skills in a professional setting
Excellent interpersonal skills; a team player able to collaborate closely with global teams and business partners
Positive attitude and flexibility
Willingness to learn new skills and adapt to changes
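Again purely as an illustration, and assuming only the standard behaviour of the bteq command-line utility (it reads its commands from stdin and returns a non-zero exit code when the script hits errors), a scheduler-invoked wrapper of the kind implied by the shell-scripting and Autosys points above could look like the sketch below. The script and log file names are hypothetical.

```python
# Minimal sketch only: wrap a BTEQ script so a scheduler (e.g. Autosys) can
# detect failures from the process exit code. File names are hypothetical.
import subprocess
import sys
from pathlib import Path

def run_bteq(script_path: str, log_path: str) -> int:
    """Pipe a BTEQ script into the bteq utility and capture its output."""
    script = Path(script_path).read_text()
    with open(log_path, "w") as log:
        # bteq reads commands from stdin; its exit code reflects the highest
        # error severity encountered while running the script.
        result = subprocess.run(
            ["bteq"], input=script, text=True,
            stdout=log, stderr=subprocess.STDOUT,
        )
    return result.returncode

if __name__ == "__main__":
    rc = run_bteq("load_customer.btq", "load_customer.log")
    sys.exit(rc)  # a non-zero exit lets the scheduler flag the run as failed
```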
Nice to have:
Experience with Big Data technologies and toolsets such as Hadoop, Hive, Sqoop, Impala, Kafka, and Python/Spark/PySpark workloads