
DBT Engineer


NTT DATA

Location:
India, Remote

Contract Type:
Not provided


Salary:

Not provided

Job Description:

The DBT Engineer role requires 3–6 years of experience in Data Engineering, focusing on dbt and Snowflake. Candidates should possess strong SQL skills and be familiar with ETL processes. Responsibilities include translating Informatica mappings into dbt models, ensuring data quality, and collaborating with cross-functional teams. A bachelor's degree in Computer Science or a related field is required.

Job Responsibility:

  • Translate Informatica mappings, transformations, and business rules into dbt models (SQL) on Snowflake
  • Design and implement staging, core, and mart layers using standard dbt patterns and folder structures
  • Develop and maintain dbt tests (schema tests, data tests, custom tests) to ensure data quality and integrity
  • Implement snapshots, seeds, macros, and reusable components where appropriate
  • Collaborate with Snowflake developers to ensure physical data structures support dbt models efficiently
  • Work with functional teams to ensure functional equivalence between legacy Informatica outputs and new dbt outputs
  • Participate in performance tuning of dbt models and Snowflake queries
  • Integrate dbt with CI/CD pipelines (e.g., Azure DevOps, GitHub Actions) for automated runs and validations
  • Contribute to documentation of dbt models, data lineage, and business rules
  • Participate in defect analysis, bug fixes, and enhancements during migration and stabilization phases
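
To make the day-to-day work above concrete, here is a minimal sketch in standard dbt (SQL + YAML). All source, model, and column names are hypothetical illustrations, not part of the actual project: a staging model of the kind that would replace an Informatica mapping, with schema tests attached to enforce data quality.

```sql
-- models/staging/stg_orders.sql
-- Hypothetical staging model: light cleanup of a raw source table on Snowflake.
with source as (
    select * from {{ source('erp', 'raw_orders') }}
),

renamed as (
    select
        order_id,
        customer_id,
        cast(order_date as date)   as order_date,
        lower(trim(order_status))  as order_status,
        amount_usd
    from source
)

select * from renamed
```

```yaml
# models/staging/stg_orders.yml
# Hypothetical schema tests of the kind the responsibilities describe.
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
      - name: order_status
        tests:
          - accepted_values:
              values: ['placed', 'shipped', 'returned']
```

Running `dbt build --select stg_orders` executes the model and its tests together, so quality failures surface immediately during development or CI.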

Requirements:

  • 3–6 years of experience in Data Engineering / ETL / DW
  • 1–3+ years working with dbt (Core or Cloud)
  • Strong SQL skills, especially on Snowflake or another modern cloud DW
  • Experience with dbt concepts: models, tests, sources, seeds, snapshots, macros, exposures
  • Prior experience with Informatica (developer-level understanding of mappings/workflows) is highly desirable
  • Understanding of CI/CD practices and integrating dbt into automated pipelines
  • Knowledge of data modeling (dimensional models, SCDs, fact/dimension design)
  • Experience working in offshore delivery with onshore coordination
  • Good communication skills and ability to read/understand existing ETL logic and requirements documentation
  • Bachelor's degree in Computer Science or a related field
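
The CI/CD integration called for above is typically a thin wrapper around the dbt CLI. A minimal GitHub Actions sketch follows; the workflow name, target, and secret names are assumptions for illustration, not the project's actual configuration.

```yaml
# .github/workflows/dbt-ci.yml
# Hypothetical CI job: install dbt and build all models and tests
# against a CI target on every pull request.
name: dbt CI
on: [pull_request]

jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake
      - run: dbt deps
      - run: dbt build --target ci
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```

`dbt build` runs models, tests, snapshots, and seeds in dependency order, which is why it is the usual single entry point for automated validation.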

Additional Information:

Job Posted:
February 01, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for DBT Engineer

Senior Analytics Engineer

Our vision is to build the next generation platform to enable easy and fast crea...
Location:
United States, San Francisco
Salary:
170000.00 - 208000.00 USD / Year
Descript
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in Analytics Engineering, Data Engineering, or Data Analytics
  • Proficiency in SQL, Python, and dbt
  • Strong communication skills and the ability to translate business needs into tractable work items
  • Extensive experience with prosumer and/or enterprise SaaS products
  • Curiosity, savviness to navigate in a dynamic environment, and a growth mindset
Job Responsibility:
  • Design, develop, and optimize robust data models to facilitate seamless reporting, analysis, and experimentation for marketing, product, and finance teams
  • Spearhead the creation of advanced marketing pipelines, ensuring data flows efficiently and accurately to support customer segmentation, campaign optimization, and ROI analysis
  • Collaborate closely with cross-functional teams to refine metrics and build intuitive dashboards that empower stakeholders with actionable insights
  • Standardize, monitor, and document data assets, ensuring data integrity and consistency throughout our analytics infrastructure
What we offer:
  • generous healthcare package
  • 401k matching program
  • catered lunches
  • flexible vacation time
  • Fulltime

Analytics Engineer

We are currently looking for an Analytics Engineer to join a fast-growing, innov...
Location:
United Kingdom
Salary:
65000.00 - 75000.00 GBP / Year
Data Idols
Expiration Date:
Until further notice
Requirements:
  • 2+ years’ experience as a Data Analyst, Data Engineer, or Analytics Engineer
  • dbt
  • Advanced SQL skills and experience with data visualisation tools (Tableau preferred)
  • Knowledge of data modelling, warehousing, and analytics best practices
  • Strong communication skills with the ability to explain technical findings clearly
Job Responsibility:
  • Design, build, and maintain data models and pipelines
  • Create engaging dashboards and visualisations to present findings to non-technical audiences
  • Collaborate with stakeholders to translate business needs into data-driven outcomes
  • Use analytics to uncover trends, opportunities, and risks that shape company strategy
  • Champion data best practices and innovation within the wider team
What we offer:
  • Remote working
  • L&D budget
  • Bonus
  • £2,500 personal development budget for certifications, training, and learning
  • Health insurance (where applicable)
  • Fulltime

Senior BI Engineer

LumApps is now more than just an Employee Experience Platform — it is an AI-powe...
Location:
France, Tassin-la-Demi-Lune; Sophia-Antipolis; Paris
Salary:
Not provided
LumApps
Expiration Date:
Until further notice
Requirements:
  • Engineering proficiency: Strong command of SQL and experience with dbt
  • Visualization savvy: Experience with Looker (or similar enterprise BI tools) and an understanding of data modeling
  • Language skills: You are fluent in both French and English
  • Process oriented: You are comfortable using Jira/Git and writing clear documentation
Job Responsibility:
  • Build the Foundation: Design and develop robust data transforms (ETL/ELT) within the group Data Lake using dbt and SQL
  • Scale through M&A: Analyze the reporting structures of companies acquired by LumApps and design efficient strategies to integrate their data into our ecosystem
  • Connect the Dots: Create interfaces between the Data Lake and new data sources (via Meltano or custom scripts)
  • Collaborate & Clarify: Translate functional needs from BI Analysts and Business Owners into rigorous technical specifications
  • Guarantee Quality: Implement automated data validation systems and manage access rights, ensuring our stakeholders always trust the numbers
What we offer:
  • Hybrid work model – 2 days at the office, 3 days remote
  • RTT days – ~10 extra days off per year
  • Meal vouchers (SWILE) + free snacks & coffee
  • Yoga classes
  • Supportive parental leave and family moments
  • Health insurance (ALAN) – 60% covered + full life & disability cover
  • Afterworks, team celebrations & seasonal parties
  • Equipment of your choice
  • French & English lessons, professional development & access to Leeto CSE
  • Fulltime

Data Engineer

Join Rackner to modernize secure data infrastructure for a critical defense heal...
Location:
United States
Salary:
Not provided
Rackner
Expiration Date:
Until further notice
Requirements:
  • 4+ years of experience in Data Engineering or Analytics Engineering
  • Proficient in SQL and Python, with experience building modular, version-controlled transformations
  • Hands-on with Apache Airflow, Apache Spark, dbt, and data lake frameworks (Iceberg, Trino, Athena)
  • Strong understanding of ETL/ELT design, data modeling, and governance in distributed systems
  • Familiar with AWS (Glue, S3, Lambda, Athena, EMR) and cloud-native data architectures
  • Excellent collaborator with experience in cross-functional environments (DevSecOps, Data Science, Security)
  • Active DoD Secret Clearance (or higher)
  • U.S. Based
Job Responsibility:
  • Build and maintain end-to-end Ingest → Transform → Expose pipelines using Airflow, Spark, dbt, and Iceberg
  • Ingest and normalize structured and unstructured data (HL7, FHIR, PDF, JSON) for analytics and AI/ML use cases
  • Map datasets to FHIR and OMOP standards to enable interoperability and decision support
  • Implement schema versioning and governance to ensure traceability and audit-ready lineage
  • Collaborate with DevSecOps and Data Science teams to deliver AI-ready datasets for predictive analytics and readiness forecasting
  • Optimize data performance across distributed environments while ensuring compliance with DoD Responsible AI and NIST AI Risk Management frameworks
What we offer:
  • Weekly Pay and Full Remote Flexibility
  • Professional Growth through paid certifications and training
  • Comprehensive Benefits including 401(k) with 100% match up to 6%, PTO, medical/dental/vision, life & disability insurance
  • Home office equipment plan and supportive, inclusive team culture with mission impact

Senior Data Engineer

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
United States, San Francisco
Salary:
180000.00 - 270000.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 4+ years of dedicated data engineering experience, solving complex data pipelines issues at scale
  • Experience building data models and data pipelines on top of large datasets (in the order of 500TB to petabytes)
  • Value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow
  • Experience working with different performant warehouses and data lakes (Redshift, Snowflake, Databricks)
  • Experience building and maintaining batch and realtime pipelines using technologies like Spark, Kafka
  • Appreciate the importance of schema design, and can evolve an analytics schema on top of unstructured data
  • Excited to try out new technologies and like to produce proof-of-concepts that balance technical advancement and user experience and adoption
  • Like to get deep in the weeds to manage, deploy, and improve low level data infrastructure
  • Empathetic working with stakeholders
Job Responsibility:
  • Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design and data usage principles
  • Have data quality and performance top of mind while designing datasets
  • Leading key data engineering projects that drive collaboration across the company
  • Advocating for adopting industry tools and practices at the right time
  • Owning core SQL and python data pipelines that power our data lake and data warehouse
  • Well-documented data with defined dataset quality, uptime, and usefulness
What we offer:
  • medical
  • dental
  • vision
  • 401(k)
  • equity
  • commission
  • Fulltime

Data Engineer

We build simple yet innovative consumer products and developer APIs that shape h...
Location:
United States, San Francisco
Salary:
163200.00 - 223200.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 2+ years of dedicated data engineering experience, solving complex data pipeline issues at scale
  • Experience building data models and data pipelines on top of large datasets (in the order of 500TB to petabytes)
  • Value SQL as a flexible and extensible tool and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow
Job Responsibility:
  • Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design and data usage principles
  • Have data quality and performance top of mind while designing datasets
  • Advocating for adopting industry tools and practices at the right time
  • Owning core SQL and Python data pipelines that power our data lake and data warehouse
  • Well-documented data with defined dataset quality, uptime, and usefulness
What we offer:
  • medical, dental, vision, and 401(k)
  • Fulltime

Staff Data Engineer

We are seeking a Staff Data Engineer to architect and lead our entire data infra...
Location:
United States, New York; San Francisco
Salary:
170000.00 - 210000.00 USD / Year
Taskrabbit
Expiration Date:
Until further notice
Requirements:
  • 7-10 years of experience in Data Engineering
  • Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
  • Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
  • Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
  • Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
  • Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
  • Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
  • Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Job Responsibility:
  • Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
  • Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
  • Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
  • Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
  • Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
  • Champion data quality, documentation, and observability to ensure high trust in data across the organization
  • Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
What we offer:
  • Employer-paid health insurance
  • 401k match with immediate vesting
  • Generous and flexible time off with 2 company-wide closure weeks
  • Taskrabbit product stipends
  • Wellness + productivity + education stipends
  • IKEA discounts
  • Reproductive health support
  • Fulltime

Senior Analytics Engineer

Senior Analytics Engineer role at Fever, building a federated data organization ...
Location:
Spain, Madrid
Salary:
Not provided
Fever
Expiration Date:
Until further notice
Requirements:
  • Bachelor's, Master's, or PhD in Computer Engineering, Data Engineering, Data Science, or related field
  • Strong experience with SQL and data modeling (star/snowflake schemas, data vault, or similar)
  • Hands-on experience with DBT (or similar transformation frameworks)
  • Hands-on experience with Python and orchestration frameworks (Airflow, Dagster, Prefect)
  • Familiarity with modern cloud data warehouses (Snowflake, BigQuery, Redshift, etc.)
  • Experience with BI tools (Metabase, Superset, Tableau, etc.)
  • Understanding of data quality, governance, and observability practices
  • Collaborative mindset comfortable working with both engineers and business stakeholders
  • Strong communication skills adaptable to multidisciplinary, international, fast-paced environment
Job Responsibility:
  • Design, build, and maintain data models (DBT, SQL) that transform raw data into clean, trusted datasets
  • Collaborate with data engineering team to define and certify business-critical metrics
  • Work with business squads (B2B, Marketing, CRM, Product) to understand their needs and turn them into reusable data assets
  • Ensure data quality and consistency through testing frameworks, observability, and governance
  • Support self-service analytics by enabling stakeholders to explore data confidently
  • Contribute to Airflow pipelines in Python for automation and orchestration
  • Contribute to data mesh vision creating domain-owned datasets
What we offer:
  • Attractive compensation package with base salary and performance bonus
  • Stock options
  • 40% discount on all Fever events and experiences
  • Home office friendly
  • Health insurance
  • Flexible remuneration with 100% tax exemption through Cobee
  • English Lessons
  • Gympass Membership
  • Possibility to receive salary in advance by Payflow
  • Fulltime