The Pod1 - DBRX Engineer role at NTT DATA involves building and maintaining large-scale data lakes, designing ETL/ELT pipelines, and optimizing performance for big data workloads.
Job Responsibilities:
Build and maintain large-scale data lakes leveraging Databricks and related technologies
Design and implement ETL/ELT pipelines for structured and unstructured data (see the illustrative sketch after this list)
Develop processes for data transformation, metadata management, and workload orchestration
Migrate data from traditional Data Warehouse (DWH) systems to data lakes or Snowflake
Ensure smooth integration with Azure services (ADLS, ADF, ADLA, AAS) and other cloud platforms
Optimize query performance and ensure scalability for big data workloads
Implement stream processing and message queuing for real-time data ingestion
Work with cross-functional teams (data scientists, analysts, architects) in a dynamic environment
Provide technical guidance and documentation for stakeholders
Participate in Agile ceremonies and adhere to best practices for iterative development
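For illustration only, below is a minimal PySpark sketch of the kind of batch ETL/ELT pipeline work this role involves on Databricks; the storage path, table names, and columns are hypothetical assumptions, not part of NTT DATA's actual stack.

    # Hypothetical sketch: a small batch ETL job on Databricks with PySpark.
    # The ADLS path, table names, and columns below are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

    # Extract: read raw JSON events from a data lake location (e.g. ADLS Gen2).
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

    # Transform: deduplicate, type the columns, and derive a partition key.
    orders = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Load: write a curated Delta table, partitioned to help query performance.
    (orders.write.format("delta")
           .mode("overwrite")
           .partitionBy("order_date")
           .saveAsTable("curated.orders"))

In practice a pipeline like this is typically scheduled through a workflow tool (e.g. Databricks Workflows or ADF) rather than run ad hoc.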
Requirements:
3+ years in Data Engineering or Software Engineering roles
Strong proficiency in SQL, Python, and PySpark; familiarity with Scala or Java
Experience with Databricks, Snowflake, and workflow management tools
Hands-on experience with Azure cloud services and big data stores
Understanding of ELT vs ETL patterns and when to apply each (illustrated after this list)
Knowledge of data warehousing, analytic models, and schema design
Undergraduate degree in Computer Science, Statistics, or related field (Graduate degree preferred)
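As a rough illustration of the ELT vs ETL distinction noted above: ETL transforms data in the pipeline before loading only the curated result, while ELT lands the raw data first and transforms it later inside the engine. A minimal PySpark sketch under assumed table and column names (bronze.events, curated.purchases, and the storage path are hypothetical):

    # Hypothetical sketch contrasting ETL and ELT with PySpark on Databricks.
    # All paths, schemas, and table names are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl_vs_elt_sketch").getOrCreate()
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

    # ETL: transform in the pipeline, then load only the curated result.
    curated = raw.filter(F.col("event_type") == "purchase").select("user_id", "amount")
    curated.write.format("delta").mode("append").saveAsTable("curated.purchases")

    # ELT: load the raw data as-is, then transform later inside the engine.
    raw.write.format("delta").mode("append").saveAsTable("bronze.events")
    spark.sql("""
        CREATE OR REPLACE TABLE curated.purchases_elt AS
        SELECT user_id, amount
        FROM bronze.events
        WHERE event_type = 'purchase'
    """)

ELT tends to fit engines like Databricks or Snowflake, where compute scales with the data; ETL still makes sense when the target cannot absorb raw volumes or the transformation must happen before landing.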
Nice to have:
Familiarity with CI/CD pipelines and DevOps practices