
Data Modeling Engineer


Aramark

Location:
United States, Philadelphia, PA or Rockville, MD

Contract Type:
Not provided

Salary:

85000.00 - 100000.00 USD / Year

Job Description:

The Data Modeling Engineer builds dimensional models and Business Vault structures in Snowflake based on approved architectural designs and specifications. This SQL-first role applies approved business logic and KPI definitions to produce performant, documented, and governed data products that power BI, analytics, and AI use cases. The role partners closely with data engineering teams on upstream Data Vault inputs and with BI teams using Power BI to ensure accurate, reliable, and consumption-ready modeled data. Architectural design decisions are provided; this role focuses on implementation and build execution within Snowflake.

Job Responsibility:

  • Build architect-approved star schemas and subject-area dimensional models
  • Implement bridge tables to resolve many-to-many relationships, hierarchies, and allocation logic prior to or within dimensional modeling
  • Apply established dimensional modeling patterns including surrogate keys, SCD Type 1/2, conformed dimensions, and approved bridge-table techniques
  • Implement approved KPI definitions, filters, and business rules into governed semantic views
  • Deliver business-friendly semantic layers optimized for Power BI consumption, standardizing metrics and hiding technical complexity
  • Apply Snowflake governance controls including data masking, sensitivity tagging, and (where applicable) row access policies
  • Ensure all modeled objects are documented using Snowflake comments and metadata standards
  • Validate modeled outputs for accuracy, completeness, and reconciliation to source-of-truth totals
  • Tune SQL for performance and cost efficiency, applying Snowflake best practices
  • Support SDLC and SOC 1 evidence requirements through clear documentation and controlled change processes
  • Support scheduled refreshes and dependencies using Snowflake-native orchestration or approved tooling
  • Monitor modeled layers and resolve data, logic, or performance issues affecting downstream consumers
  • Partner with upstream teams to triage and resolve source data issues when needed
  • Partner with BI developers to ensure semantic layers answer business questions without additional tool-side modeling
  • Collaborate with Data Engineers to align on grain, keys, freshness windows, and data contracts
  • Participate in sprint planning, estimation, and delivery ceremonies
  • Communicate status and risks proactively
  • Use AI-assisted development tools to accelerate SQL development, refactoring, optimization, and documentation
  • Maintain full accountability for correctness, governance, and semantic integrity of all AI-assisted outputs
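To illustrate the dimensional-modeling patterns listed above (surrogate keys, SCD Type 2), here is a minimal Snowflake SQL sketch of an SCD Type 2 load. All table, column, and sequence names are hypothetical examples, not objects from the actual role:

```sql
-- Hypothetical SCD Type 2 pattern; dim_customer, stg_customer, and
-- dim_customer_seq are invented names for illustration only.

-- Step 1: expire the current dimension row when a tracked attribute changes.
MERGE INTO dim_customer AS d
USING stg_customer AS s
  ON d.customer_nk = s.customer_nk
 AND d.is_current = TRUE
WHEN MATCHED AND (d.customer_name <> s.customer_name
               OR d.customer_segment <> s.customer_segment) THEN
  UPDATE SET d.is_current = FALSE,
             d.effective_to = CURRENT_TIMESTAMP();

-- Step 2: insert a new current version for new or just-expired business keys.
INSERT INTO dim_customer (
    customer_sk, customer_nk, customer_name, customer_segment,
    effective_from, effective_to, is_current
)
SELECT dim_customer_seq.NEXTVAL,   -- surrogate key from a sequence
       s.customer_nk,
       s.customer_name,
       s.customer_segment,
       CURRENT_TIMESTAMP(),
       NULL,
       TRUE
FROM stg_customer AS s
LEFT JOIN dim_customer AS d
       ON d.customer_nk = s.customer_nk
      AND d.is_current = TRUE
WHERE d.customer_nk IS NULL;       -- no current row: brand-new or expired in step 1
```

Because step 1 expires changed rows before step 2 runs, the anti-join in step 2 inserts a fresh current version for both new and changed keys while leaving unchanged keys untouched.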

Requirements:

  • Strong hands-on experience with Snowflake
  • Familiarity and comfort with the Data Vault 2.0 methodology
  • Advanced SQL expertise, including complex joins, window functions, and performance tuning
  • Strong understanding of dimensional modeling concepts including SCDs, surrogate keys, and conformed dimensions
  • Experience encoding business logic and KPI definitions into curated semantic views
  • Experience supporting semantic models and datasets consumed by Power BI and other BI tools
  • Familiarity with Snowflake governance features including data masking and sensitivity tagging
  • Experience supporting production data models through SDLC and incident resolution
  • Strong communication and documentation skills
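As a sketch of the Snowflake governance features named above (data masking, sensitivity tagging, and object comments), the pattern might look like the following. Policy, tag, table, and column names are hypothetical, invented for illustration:

```sql
-- Hypothetical governance sketch; pii_email_mask, sensitivity, and
-- dim_customer.email are invented names.

-- Mask email addresses for every role except an analytics-admin role.
CREATE MASKING POLICY IF NOT EXISTS pii_email_mask AS (val STRING)
  RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('ANALYTICS_ADMIN') THEN val
    ELSE '***MASKED***'
  END;

ALTER TABLE dim_customer
  MODIFY COLUMN email SET MASKING POLICY pii_email_mask;

-- Sensitivity tagging for discoverability and governance reporting.
CREATE TAG IF NOT EXISTS sensitivity;
ALTER TABLE dim_customer
  MODIFY COLUMN email SET TAG sensitivity = 'PII';

-- Documentation via object comments, per the metadata standards above.
COMMENT ON COLUMN dim_customer.email IS
  'Customer email address; masked for non-admin roles';
```

Attaching the policy at the column level means every view and semantic layer built on the table inherits the masking behavior without tool-side logic.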

Nice to have:

  • Experience optimizing Snowflake performance and cost
  • Experience optimizing datasets for Power BI performance and usability
  • Exposure to BI and AI consumption patterns requiring clear semantics and business-friendly metadata
  • Familiarity with Git-based workflows and Azure DevOps
  • Experience contributing to a centralized semantic or data modeling function

What we offer:
  • medical
  • dental
  • vision
  • work/life resources
  • retirement savings plans like 401(k)
  • paid days off, parental leave, and disability coverage

Additional Information:

Job Posted:
February 20, 2026

Employment Type:
Full-time

Similar Jobs for Data Modeling Engineer

Snr. Data Engineer and Modeler

Design and implement data pipelines. Develop and maintain data models and archit...
Location:
Japan, Tokyo
Salary:
7000000.00 - 13000000.00 JPY / Year
Randstad
Expiration Date:
March 23, 2026
Requirements:
  • 5+ years of experience in data engineering/modelling roles
  • Hands-on experience with BI tools (e.g., Tableau, Power BI) and data storage products such as Databricks, Synapse, or Snowflake
  • Strong in SQL and Python (5+ years hands-on experience)
Job Responsibility:
  • Design and implement data pipelines
  • Develop and maintain data models and architecture
  • Ensure data quality
What we offer:
  • Health insurance
  • Employees' pension insurance
  • Employment insurance
  • Saturdays off
  • Sundays off
  • National holidays off
  • Bonus
Employment Type:
Full-time

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
Employment Type:
Full-time

Senior AWS Data Engineer / Data Platform Engineer

We are seeking a highly experienced Senior AWS Data Engineer to design, build, a...
Location:
United Arab Emirates, Dubai
Salary:
Not provided
NorthBay
Expiration Date:
Until further notice
Requirements:
  • 8+ years of experience in data engineering and data platform development
  • Strong hands-on experience with: AWS Glue
  • Amazon EMR (Spark)
  • AWS Lambda
  • Apache Airflow (MWAA)
  • Amazon EC2
  • Amazon CloudWatch
  • Amazon Redshift
  • Amazon DynamoDB
  • AWS DataZone
Job Responsibility:
  • Design, develop, and optimize scalable data pipelines using AWS native services
  • Lead the implementation of batch and near-real-time data processing solutions
  • Architect and manage data ingestion, transformation, and storage layers
  • Build and maintain ETL/ELT workflows using AWS Glue and Apache Spark on EMR
  • Orchestrate complex data workflows using Apache Airflow (MWAA)
  • Develop and manage serverless data processing using AWS Lambda
  • Design and optimize data warehouses using Amazon Redshift
  • Implement and manage NoSQL data models using Amazon DynamoDB
  • Utilize AWS DataZone for data governance, cataloging, and access management
  • Monitor, log, and troubleshoot data pipelines using Amazon CloudWatch
Employment Type:
Full-time

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join their product DE team. T...
Location:
United States, Seattle; San Francisco; Mountain View
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Partner across engineering teams to tackle company-wide initiatives
  • Mentor junior members of the team
  • Partner with leadership, engineers, program managers and data scientists to understand data needs
  • Apply expertise and build scalable data solution
  • Develop and launch efficient & reliable data pipelines to move and transform data (both large and small amounts)
  • Intelligently design data models for storage and retrieval
  • Deploy data quality checks to ensure high quality of data
  • Ownership of the end-to-end data engineering component of the solution
  • Support on-call shift to support the team
  • Design and develop new systems in partnership with software engineers to enable quick and easy consumption of data
Job Responsibility:
  • Build top-notch data solutions and data architecture to inform our most critical strategic and real-time decisions
  • Help translate business needs into data requirements and identify efficiency opportunities
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type:
Full-time


Data Engineer

As a Data Engineer at Rearc, you'll contribute to the technical excellence of ou...
Location:
India, Bengaluru
Salary:
Not provided
Rearc
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience in data engineering, data architecture, or related fields
  • Solid track record of contributing to complex data engineering projects
  • Hands-on experience with ETL processes, data warehousing, and data modelling tools
  • Good understanding of data integration tools and best practices
  • Familiarity with cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery)
  • Strong analytical skills
  • Proficiency in implementing and optimizing data pipelines using modern tools and frameworks
  • Strong communication and interpersonal skills
Job Responsibility:
  • Collaborate with Colleagues to understand customers' data requirements and challenges
  • Apply DataOps Principles to create scalable and efficient data pipelines and architectures
  • Support Data Engineering Projects
  • Promote Knowledge Sharing through technical blogs and articles

Data Modeller

Join us as a Data Modeller at Barclays, where you'll spearhead the evolution of ...
Location:
India, Pune
Salary:
Not provided
Barclays
Expiration Date:
Until further notice
Requirements:
  • Partner with business stakeholders to understand their data needs and their desired functionality for the data product. Translate these requirements into clear data modelling specifications
  • Designs and implements high-quality data models that optimize data access, storage, and analysis as well as ensuring alignment to BCDM
  • Creates comprehensive and well-maintained documentation of the data models, including entity relationship diagrams, data dictionaries, and usage guidelines
  • Collaborates with data engineers to test and validate the data models
  • Obtains sign-off from the DPL, DPA and the Technical Product Lead on the logical and physical data models
  • Continuously monitor and optimise the performance of the models and data solutions to ensure efficient data retrieval & processing
  • Collaborates with data engineers to translate into physical data models and throughout the development lifecycle
  • Manages the business as usual ‘BAU’ data model and solution covering enhancements and changes
  • Helps DSA in defining the legacy estate migration and decommissioning roadmap for the assigned BUK data domain
  • You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills
Job Responsibility:
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification, documenting data sources, methodologies, and quality findings with recommendations for improvement
  • Designing and building data pipelines to automate data movement and processing
  • Apply advanced analytical techniques to large datasets to uncover trends and correlations, develop validated logical data models, and translate insights into actionable business recommendations that drive operational and process improvements, leveraging machine learning/AI
  • Through data-driven analysis, translate analytical findings into actionable business recommendations, identifying opportunities for operational and process improvements
  • Design and create interactive dashboards and visual reports using applicable tools and automate reporting processes for regular and ad-hoc stakeholder needs
  • To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/ business divisions
  • Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes
  • If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others
  • OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/ or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes
  • Consult on complex issues
What we offer:
  • Hybrid working
  • We’re committed to providing a supportive and inclusive culture and environment for you to work in
  • We celebrate the unique perspectives and experiences each individual brings, believing our differences make us stronger and drive success
Employment Type:
Full-time

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location:
India, Hyderabad
Salary:
Not provided
NStarX
Expiration Date:
Until further notice
Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles
Employment Type:
Full-time