The Senior Data Engineer at NTT DATA will be responsible for designing and maintaining cloud-based data platforms, ensuring high data quality and compliance. The role requires strong expertise in AWS, GCP, or Azure, along with proficiency in Python and SQL. A bachelor’s or master’s degree in a related field is required, along with 5+ years of experience in data engineering. The candidate will lead the development of scalable data solutions and mentor junior engineers.
Job Responsibilities:
Design, build, and maintain scalable and secure data pipelines on cloud platforms (AWS/GCP/Azure)
Architect and implement data lake, data warehouse, and data mart solutions for enterprise workloads
Develop and optimize ETL/ELT pipelines using modern tools and cloud-native services
Work closely with Data Analysts, Data Scientists, and Business teams to support analytical and ML use cases
Ensure high data quality, governance, security, and compliance across all data systems
Implement best practices in data modeling, data partitioning, orchestration, and workflows
Lead performance tuning, cost optimization, and reliability improvements for cloud data pipelines
Build CI/CD pipelines and infrastructure as code (IaC) for data platform automation
Evaluate and introduce new technologies and frameworks to enhance the data ecosystem
Mentor junior engineers and lead technical discussions within the data team
Requirements:
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field
5+ years of experience in Data Engineering or related fields
Strong expertise with one or more major cloud platforms:
AWS: Redshift, Glue, S3, EMR, Kinesis, Lambda
GCP: BigQuery, Dataflow, Pub/Sub, Dataproc, GCS
Azure: Synapse, ADLS, Data Factory, Databricks
Proficient in Python, SQL, and at least one distributed processing framework (e.g., Spark, Beam, Databricks)
Strong experience with ETL/ELT pipelines and data workflow orchestration tools (Airflow, Cloud Composer, Dagster, Prefect)
Experience with IaC (Terraform, CloudFormation, CDK)
Excellent understanding of data modeling (OLTP, OLAP, star schema, snowflake schema)
Familiarity with DevOps and CI/CD for data engineering
Strong problem‑solving skills and ability to work in a fast‑paced environment