
Intermediate Data Engineer


NTT DATA


Location:
India, Remote



Contract Type:
Not provided


Salary:
Not provided

Job Description:

The Intermediate Data Engineer role involves designing and implementing data solutions using programming languages such as Python, Java, and Scala. Candidates should have at least 4 years of experience in data engineering and be proficient in technologies such as Snowflake and AWS. Strong communication skills and the ability to lead projects are essential. An undergraduate or graduate degree is preferred.

Job Responsibility:

  • Design and implement tailored data solutions to meet customer needs and use cases
  • Provide thought leadership by recommending the most appropriate technologies and solutions
  • Demonstrate proficiency in coding skills to efficiently move solutions into production
  • Collaborate seamlessly across diverse technical stacks
  • Develop and deliver detailed presentations to effectively communicate complex technical concepts
  • Generate comprehensive solution documentation
  • Adhere to Agile practices throughout the solution development process
  • Design, build, and deploy databases and data stores to support organizational requirements
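For context, a minimal sketch of the "design, build, and deploy databases and data stores" work described in the last item above, using Python with the snowflake-connector-python package (one of the languages and platforms the listing names). The account, credentials, and table below are illustrative placeholders, not part of the posting.

```python
# Minimal sketch: create and populate a Snowflake table from Python.
# All connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder Snowflake account identifier
    user="my_user",            # placeholder credentials
    password="my_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Deploy a simple data store for customer events.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS customer_events (
            event_id    STRING,
            user_id     STRING,
            event_type  STRING,
            occurred_at TIMESTAMP_NTZ
        )
    """)
    # Load a row using a parameterized insert (the connector's default
    # pyformat paramstyle accepts %s placeholders).
    cur.execute(
        "INSERT INTO customer_events VALUES (%s, %s, %s, %s)",
        ("evt-001", "user-42", "signup", "2026-02-13 10:00:00"),
    )
finally:
    conn.close()
```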

Requirements:

  • 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects
  • 2+ years of experience leading a team on data-related projects, developing end-to-end technical solutions
  • Proficiency in programming languages such as Python, Java, and Scala
  • Experience with technologies such as Cloudera, Databricks, Snowflake, and AWS
  • Strong communication skills
  • Undergraduate or Graduate degree preferred

Nice to have:

  • Production experience with core data platforms such as Snowflake, Databricks, or Azure
  • Hands-on knowledge of cloud and distributed data storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems
  • Strong understanding of data integration technologies, including Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Database Migration Service, Azure Data Factory, and Google Cloud Dataproc
  • Professional written and verbal communication skills for conveying complex technical concepts

Additional Information:

Job Posted:
February 13, 2026

Employment Type:
Full-time
Work Type:
Remote work

Similar Jobs for Intermediate Data Engineer

Senior Data Engineer

We’re growing our team at ELEKS in partnership with a company, the UK’s largest ...
Salary:
Not provided
ELEKS
Expiration Date
Until further notice
Requirements
  • 4+ years of experience in Data Engineering, SQL, and ETL (data validation, data mapping, exception handling)
  • 2+ years of hands-on experience with Databricks
  • Experience with Python
  • Experience with AWS (e.g. S3, Redshift, Athena, Glue, Lambda, etc.)
  • At least an Upper-Intermediate level of English
Job Responsibility
  • Building Databases and Pipelines: Developing databases, data lakes, and data ingestion pipelines to deliver datasets for various projects (a minimal AWS sketch follows this list)
  • End-to-End Solutions: Designing, developing, and deploying comprehensive solutions for data and data science models, ensuring usability for both data scientists and non-technical users. This includes following best engineering and data science practices
  • Scalable Solutions: Developing and maintaining scalable data and machine learning solutions throughout the data lifecycle, supporting the code and infrastructure for databases, data pipelines, metadata, and code management
  • Stakeholder Engagement: Collaborating with stakeholders across various departments, including data platforms, architecture, development, and operational teams, as well as addressing data security, privacy, and third-party coordination
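As an illustration of the data-lake pipeline work this card describes (the requirements mention S3, Athena, and Glue), here is a hypothetical example of querying a data-lake table with Amazon Athena via boto3. The bucket, database, and table names are placeholders.

```python
# Minimal sketch: querying a data-lake table with Amazon Athena via boto3.
# Bucket, database, and table names are illustrative placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT user_id, COUNT(*) AS events FROM raw_events GROUP BY user_id",
    QueryExecutionContext={"Database": "datalake_db"},  # placeholder Glue database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
print("Started Athena query:", response["QueryExecutionId"])
```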
What we offer
  • Close cooperation with a customer
  • Challenging tasks
  • Competence development
  • Ability to influence project technologies
  • Team of professionals
  • Dynamic environment with low level of bureaucracy

Senior Data Engineer

This project is designed for consulting companies that provide analytics and pre...
Salary:
Not provided
Lightpoint Global
Expiration Date
Until further notice
Requirements
  • Successfully implemented and released data integration services or APIs using modern Python frameworks within the past 4 years
  • Successfully designed data models and schemas for analytics or data warehousing solutions
  • Strong analysis and problem-solving skills
  • Strong knowledge of the Python programming language and data engineering
  • Deep understanding of good programming practices, design patterns, and software architecture principles
  • Ability to work as part of a team by contributing to product backlog reviews and to solution design and implementation
  • Discipline in delivering software on time without compromising product quality
  • Formal training in software engineering, computer science, computer engineering, or data engineering
  • Working knowledge of Apache Airflow or a similar workflow orchestration technology
  • Working knowledge of dbt (data build tool) for analytics transformation workflows
Job Responsibility
  • Work in an agile team to design, develop, and implement data integration services that connect diverse data sources, including event tracking platforms (GA4, Segment), databases, APIs, and third-party systems
  • Build and maintain robust data pipelines using Apache Airflow, dbt, and Spark to orchestrate complex workflows and transform raw data into analytics-ready datasets in Snowflake (a minimal sketch follows this list)
  • Develop Python-based integration services and APIs that enable seamless data flow between various data technologies and downstream applications
  • Collaborate actively with data analysts, analytics engineers, and platform teams to understand requirements, troubleshoot data issues, and optimize pipeline performance
  • Participate in code reviews, sprint planning, and retrospectives to ensure high-quality, production-ready code by the end of each sprint
  • Contribute to the continuous improvement of data platform infrastructure, development practices, and deployment processes in accordance with CI/CD best practices
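A minimal sketch of the Airflow-plus-dbt pattern named in the responsibilities above. The DAG id, schedule, and project path are illustrative placeholders, and a real pipeline would replace the echo step with actual ingestion logic.

```python
# Minimal sketch: an Airflow DAG that runs a dbt transformation after an
# ingestion step. Paths and schedule are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_analytics_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",   # "schedule" is the Airflow 2.4+ argument name
    catchup=False,
) as dag:
    # Placeholder ingestion step; a real pipeline might pull from
    # GA4/Segment exports or third-party APIs into Snowflake staging.
    ingest = BashOperator(
        task_id="ingest_raw_events",
        bash_command="echo 'load raw events into Snowflake staging'",
    )

    # Transform staged data into analytics-ready models with dbt.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    ingest >> transform
```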

Senior Data Engineer

Provectus helps companies adopt ML/AI to transform the ways they operate, compet...
Salary:
Not provided
Provectus
Expiration Date
Until further notice
Requirements
  • 5+ years of experience in data engineering
  • Experience in AWS
  • Experience handling real-time and batch data flow and data warehousing with tools and technologies like Airflow, Dagster, Kafka, Apache Druid, Spark, dbt, etc.
  • Proficiency in programming languages relevant to data engineering, such as Python and SQL
  • Proficiency with Infrastructure as Code (IaC) technologies like Terraform or AWS CloudFormation
  • Experience in building scalable APIs
  • Familiarity with Data Governance aspects like Quality, Discovery, Lineage, Security, Business Glossary, Modeling, Master Data, and Cost Optimization
  • Upper-Intermediate or higher English skills
  • Ability to take ownership, solve problems proactively, and collaborate effectively in dynamic settings
Job Responsibility
  • Collaborate closely with clients to deeply understand their existing IT environments, applications, business requirements, and digital transformation goals
  • Collect and manage large volumes of varied data sets
  • Work directly with ML Engineers to create robust and resilient data pipelines that feed Data Products
  • Define data models that integrate disparate data across the organization
  • Design, implement, and maintain ETL/ELT data pipelines
  • Perform data transformations using tools such as Spark, Trino, and AWS Athena to handle large volumes of data efficiently
  • Develop, continuously test, and deploy Data API Products with Python and frameworks like Flask or FastAPI
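As a minimal illustration of the "Data API Products with Python and frameworks like Flask or FastAPI" responsibility directly above, here is a hypothetical FastAPI endpoint; the in-memory metric store is a placeholder where a real service would query a warehouse or lake.

```python
# Minimal sketch: a small Data API endpoint with FastAPI.
# The dataset and metric names are illustrative placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example Data API")

class Metric(BaseModel):
    name: str
    value: float

# Placeholder in-memory store; a real service would query a warehouse.
METRICS = {"daily_active_users": 1234.0, "churn_rate": 0.05}

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    if name not in METRICS:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=METRICS[name])
```

Such a service would typically be served with an ASGI server, e.g. `uvicorn app_module:app` (module name assumed).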
What we offer
  • Long-term B2B collaboration
  • Paid vacations and sick leaves
  • Public holidays
  • Compensation for medical insurance or sports coverage
  • External and Internal educational opportunities and AWS certifications
  • A collaborative local team and international project exposure


Data Engineer

We are looking for an experienced Data Engineer with deep expertise in Databrick...
Salary:
Not provided
Coherent Solutions
Expiration Date
Until further notice
Requirements
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
  • Strong proficiency in Python and/or Scala for data processing
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
  • Experience with data governance, security, and compliance in cloud environments
  • Excellent problem-solving, communication, and leadership skills
  • English: Upper Intermediate level or higher
Job Responsibility
  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow); a minimal sketch follows this list
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
  • Develop and maintain documentation, data models, and technical standards
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments
  • Stay current with emerging trends and technologies in data engineering and Databricks ecosystem
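A minimal sketch of the Databricks-style work described above: writing a Delta Lake table with PySpark. The data and output path are placeholders; on a Databricks runtime Delta support is preconfigured, while locally you would add the delta-spark package and its Spark session extensions.

```python
# Minimal sketch: writing a Delta Lake table with PySpark.
# Data and paths are placeholders; assumes a Delta-enabled runtime
# (e.g. Databricks), where the "delta" format is available by default.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-example").getOrCreate()

events = spark.createDataFrame(
    [("evt-001", "signup"), ("evt-002", "purchase")],
    ["event_id", "event_type"],
)

# Persist the dataset in Delta format; later steps could upsert,
# time-travel, or OPTIMIZE this table.
events.write.format("delta").mode("overwrite").save("/tmp/delta/events")
```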
What we offer
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced colleague to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Senior Data Engineer

Provectus, a leading AI consultancy and solutions provider specializing in Data ...
Salary:
Not provided
Provectus
Expiration Date
Until further notice
Requirements
  • Experience handling real-time and batch data flow and data warehousing with tools and technologies like Airflow, Dagster, Kafka, Apache Druid, Spark, dbt, etc.
  • Experience in AWS
  • Proficiency in programming languages relevant to data engineering, such as Python and SQL
  • Proficiency with Infrastructure as Code (IaC) technologies like Terraform or AWS CloudFormation
  • Experience in building scalable APIs
  • Familiarity with Data Governance aspects like Quality, Discovery, Lineage, Security, Business Glossary, Modeling, Master Data, and Cost Optimization
  • Upper-Intermediate or higher English skills
  • Ability to take ownership, solve problems proactively, and collaborate effectively in dynamic settings
Job Responsibility
  • Collaborate closely with clients to deeply understand their existing IT environments, applications, business requirements, and digital transformation goals
  • Collect and manage large volumes of varied data sets
  • Work directly with ML Engineers to create robust and resilient data pipelines that feed Data Products
  • Define data models that integrate disparate data across the organization
  • Design, implement, and maintain ETL/ELT data pipelines
  • Perform data transformations using tools such as Spark, Trino, and AWS Athena to handle large volumes of data efficiently
  • Develop, continuously test, and deploy Data API Products with Python and frameworks like Flask or FastAPI
What we offer
  • Participate in internal training programs (Leadership, Public Speaking, etc.) with full support for AWS and other professional certifications
  • Work with the latest AI tools, premium subscriptions, and the freedom to use them in your daily work
  • Long-term B2B collaboration
  • 100% remote — with flexible hours
  • Collaboration with an international, cross-functional team
  • Comprehensive private medical insurance or budget for your medical needs
  • Paid sick leave, vacation, and public holidays
  • Equipment and all the tech you need for comfortable, productive work
  • Special gifts for weddings, childbirth, and other personal milestones


Senior Data Engineer

As a Senior Data Engineer, you will be pivotal in designing, building, and optim...
Location
United States
Salary:
102,000.00 - 125,000.00 USD / Year
Wpromote
Expiration Date
Until further notice
Requirements
  • Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent practical experience
  • 4+ years of experience in data engineering or a related field
  • Intermediate to advanced programming skills in Python
  • Proficiency in SQL and experience with relational databases
  • Strong knowledge of database and data warehousing design and management
  • Strong experience with dbt (data build tool) and test-driven development practices
  • Proficiency with at least one cloud database (e.g., BigQuery, Snowflake, Redshift)
  • Excellent problem-solving skills, project management habits, and attention to detail
  • Advanced-level Excel and Google Sheets experience
  • Familiarity with data orchestration tools (e.g., Airflow, Dagster, AWS Glue, Azure Data Factory)
Job Responsibility
  • Developing data pipelines leveraging a variety of technologies, including dbt and BigQuery (a minimal sketch follows this list)
  • Gathering requirements from non-technical stakeholders and building effective solutions
  • Identifying areas of innovation that align with existing company and team objectives
  • Managing multiple pipelines across Wpromote’s client portfolio
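As a minimal illustration of the dbt-and-BigQuery pipeline work named above, here is a hypothetical query using the official google-cloud-bigquery client; the project and table names are placeholders.

```python
# Minimal sketch: reading query results from BigQuery with the official
# google-cloud-bigquery client. Project and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project

query = """
    SELECT client_id, SUM(spend) AS total_spend
    FROM `my-gcp-project.analytics.ad_spend`
    GROUP BY client_id
"""

# client.query() submits the job; .result() waits for and iterates rows.
for row in client.query(query).result():
    print(row["client_id"], row["total_spend"])
```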
What we offer
  • Half-day Fridays year round
  • Unlimited PTO
  • Extended Holiday break (Winter)
  • Flexible schedules
  • Work from anywhere options*
  • 100% paid parental leave
  • 401(k) matching
  • Medical, Dental, Vision, Life, Pet Insurance
  • Sponsored life insurance
  • Short Term Disability insurance and additional voluntary insurance