
Sr Data Modeler

O'Reilly Auto Parts

Location:
United States

Contract Type:
Not provided

Salary:

108,086.00 - 180,144.00 USD / Year

Job Description:

The Sr Data Modeler is a key technical contributor responsible for designing, developing, and optimizing conceptual, logical, and physical data models across structured and semi-structured platforms including relational, NoSQL, and real-time systems. This role ensures data models are scalable, governed, and aligned with performance and business requirements. As a senior practitioner, the role partners closely with engineers, stakeholders, and product teams to translate domain-specific data needs into robust models for reporting, analytics, and AI use cases. The Senior Data Modeler also promotes modeling best practices, contributes to data governance efforts, and supports the implementation of hybrid table and streaming-aware data architectures.

Job Responsibility:

  • Design domain-level conceptual, logical, and physical data models across OLTP and OLAP systems
  • Apply best practices in relational modeling using tools such as Erwin, dbt, and UML
  • Implement multi-model data environments that span relational, NoSQL, graph, and event-based systems
  • Develop dimensional models, normalized schemas, and denormalized views
  • Collaborate with platform and engineering teams to ensure models support schema evolution and efficient query performance
  • Translate business requirements and analytics use cases into well-structured data models
  • Recommend modeling techniques and platform selection
  • Work closely with engineers and product owners to ensure model designs support KPI alignment
  • Lead and implement modeling requirements for feature stores and analytic datasets
  • Maintain detailed documentation including entity definitions, data dictionaries, model lineage
  • Contribute to the enforcement of modeling standards
  • Support governance efforts through consistent metadata management
  • Execute schema governance processes
  • Develop performant physical data models for Snowflake, BigQuery, PostgreSQL
  • Collaborate with data engineers to implement optimal indexing, clustering, partitioning
  • Contribute to troubleshooting performance issues
  • Support continuous improvement of data models
  • Work with engineering teams to embed models into ingestion pipelines
  • Validate that dbt models, ETL/ELT logic, and CI/CD deployment scripts accurately reflect designs
  • Support integration of models with real-time systems
  • Participate in quality assurance cycles
  • Contribute to the development of reusable semantic models
  • Help unify metric definitions and business logic across systems
  • Contribute to graph and document modeling efforts
  • Embed structural validation, referential integrity checks, and schema verification
  • Collaborate with engineers and platform teams to ensure data health monitoring is modeled
  • Support automated testing and CI/CD integration of models
  • Participate in resolving modeling-related issues
  • Serve as a mentor and resource to junior data modelers and engineers
  • Contribute to modeling playbooks, reusable templates, and internal knowledge repositories
  • Participate in technical reviews and modeling community of practice discussions
  • Stay up to date with modern modeling techniques
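
To give a concrete flavor of the dimensional modeling work these responsibilities describe, here is a minimal star-schema sketch in Python with SQLite; every table, column, and value is invented for illustration and does not come from the posting:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension: descriptive attributes, one row per product (names are illustrative)
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        sku         TEXT,
        category    TEXT
    );
    -- Dimension: calendar attributes
    CREATE TABLE dim_date (
        date_key       INTEGER PRIMARY KEY,
        iso_date       TEXT,
        fiscal_quarter TEXT
    );
    -- Fact: narrow, additive measures keyed to the dimensions
    CREATE TABLE fact_sales (
        product_key INTEGER REFERENCES dim_product(product_key),
        date_key    INTEGER REFERENCES dim_date(date_key),
        units_sold  INTEGER,
        revenue     REAL
    );
""")
conn.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                 [(1, "SKU-100", "Brakes"), (2, "SKU-200", "Filters")])
conn.execute("INSERT INTO dim_date VALUES (1, '2026-03-10', 'FY26-Q1')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                 [(1, 1, 3, 89.97), (2, 1, 5, 64.95), (1, 1, 1, 29.99)])

# A typical dimensional query: slice an additive measure by a dimension attribute
rows = conn.execute("""
    SELECT p.category,
           SUM(f.units_sold)          AS units,
           ROUND(SUM(f.revenue), 2)   AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
print(rows)
```

A star schema keeps additive measures in a narrow fact table and descriptive attributes in dimension tables, so a typical analytic query is a join plus a GROUP BY, as above.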

Requirements:

  • Advanced experience designing logical and physical data models for OLTP, OLAP, and streaming systems
  • Strong experience in relational data modeling, including dimensional modeling (star/snowflake), data vault, and normalized structures using modeling tools such as Erwin or UML
  • Advanced competence in developing and managing data models across data platforms, such as Snowflake, BigQuery, PostgreSQL, and cloud SQL services
  • Experience with NoSQL and semi-structured data models (e.g., MongoDB, Cassandra)
  • Basic to intermediate experience with graph databases and modeling concepts (e.g., Neo4j)
  • Strong experience modeling for analytics and machine learning, including schema design for curated datasets, feature stores, and metric layers
  • Proficient in translating data contracts and business definitions into reusable semantic models
  • Experience incorporating streaming-aware modeling considerations
  • Advanced ability to work with product owners and business stakeholders
  • Strong understanding of enterprise business processes
  • Experience working in agile data product environments
  • Ability to anticipate business implications of schema changes
  • Experience leading data modeling efforts on cross-functional teams
  • Ability to mentor junior data modelers and analysts
  • Strong contributor to modeling playbooks
  • Experience aligning data models to enterprise taxonomies
  • Strong understanding of data modeling’s role in data governance
  • Advanced experience integrating semantic models and metrics stores
  • Ability to influence modeling direction
  • Education: Bachelor's Degree or Equivalent Level in Computer Science or related field
  • Experience: Experienced practitioner able to deal with the majority of situations and to advise others (3 to 6 years)
  • Managerial Experience: Basic experience coordinating the work of others (4 to 6 months)
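
The requirement to translate data contracts and business definitions into reusable semantic models can be pictured with a toy sketch: a metric is declared once, with its business definition, and every consumer computes it the same way. The metric name and formula below are hypothetical, not from the posting:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Metric:
    """A single semantic-layer metric: name, business definition, and formula."""
    name: str
    description: str
    compute: Callable[[dict], float]

# Business definitions captured as code (illustrative only)
METRICS = {
    "revenue_per_unit": Metric(
        name="revenue_per_unit",
        description="Total revenue divided by total units sold",
        compute=lambda totals: totals["revenue"] / totals["units_sold"],
    ),
}

# Any report or dashboard reuses the same definition instead of re-deriving it
totals = {"revenue": 119.96, "units_sold": 4}
rpu = METRICS["revenue_per_unit"].compute(totals)
print(round(rpu, 2))
```

Centralizing the formula this way is what "unify metric definitions and business logic across systems" amounts to in practice.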

Nice to have:

  • Experience modeling for hybrid workloads, supporting both transactional and analytical use cases
  • Working knowledge of streaming and event-based modeling patterns, including Kafka schema registry integration
  • Familiarity with open table formats such as Apache Iceberg, Delta Lake, or Hudi
  • Exposure to lineage and metadata integration tools such as Alation, Collibra
  • Exposure in enabling LLM-ready data assets
  • Demonstrated ability to support platform migrations or modeling refactoring efforts

What we offer:
  • Competitive Wages & Paid Time Off
  • Stock Purchase Plan & 401k with Employer Contributions Starting Day One
  • Medical, Dental, & Vision Insurance with Optional Flexible Spending Account (FSA)
  • Team Member Health/Wellbeing Programs
  • Tuition Educational Assistance Programs
  • Opportunities for Career Growth

Additional Information:

Job Posted:
March 10, 2026

Employment Type:
Fulltime

Work Type:
Remote work

Similar Jobs for Sr Data Modeler

Sr Data Scientist

We are offering an exciting opportunity for a Sr Data Scientist based in Plano, ...
Location:
United States, Plano
Salary:
Not provided
Robert Half
Expiration Date
Until further notice
Requirements
  • Proficiency in Data Mining Techniques
  • Must possess a deep understanding of Budget Processes
  • Should be highly skilled in Quantitative Analysis
  • Experience in Financial Modeling is highly desirable
  • Knowledge of SQL is a must
  • Proficiency in SAS is required
  • Must be adept at using Excel
  • Experience with Access is necessary
Job Responsibility
  • Apply advanced statistical analytics to assess future risks, opportunities, and effectiveness, translating results into solutions enhancing decision-making
  • Manage analytics regarding loan performance, including delinquency and credit losses, identify key drivers, and develop forecast expectations
  • Conduct innovative research projects, encapsulating project design, data collection and analysis, summarization of findings, and presentation of results
  • Utilize your skills in SQL, SAS, Excel, and Access to facilitate data analysis and financial modeling
  • Apply your knowledge of consumer lending business to produce advanced analytical material for discussions with cross-functional teams to understand complex business objectives and influence solution strategies
  • Interact with stakeholders to understand their business questions, craft methodologies to mine/analyze datasets, and deliver insightful recommendations
  • Use data mining techniques to explore and examine data from multiple disparate sources with the goal of discovering patterns and previously hidden insights
  • Participate in budget processes, providing quantitatively-backed recommendations and insights
What we offer
  • Access to top jobs
  • Competitive compensation and benefits
  • Free online training
  • Medical insurance
  • Vision insurance
  • Dental insurance
  • Life insurance
  • Disability insurance
  • 401(k) plan
  • Fulltime

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our team.
Salary:
Not provided
Boston Data Pro
Expiration Date
Until further notice
Requirements
  • Data Engineering: 8 years (Preferred)
  • Data Programming languages: 5 years (Preferred)
  • Data Developers: 5 years (Preferred)
Job Responsibility
  • Designs and implements standardized data management procedures around data staging, data ingestion, data preparation, data provisioning, and data destruction
  • Ensures quality of technical solutions as data moves across multiple zones and environments
  • Provides insight into the changing data environment, data processing, data storage, and utilization requirements for the company, and offers suggestions for solutions
  • Ensures managed analytic assets support the company’s strategic goals by creating and verifying data acquisition requirements and strategy
  • Develops, constructs, tests, and maintains architectures
  • Aligns architecture with business requirements using programming languages and tools
  • Identifies ways to improve data reliability, efficiency, and quality
  • Conducts research for industry and business questions
  • Deploys sophisticated analytics programs, machine learning, and statistical methods to efficiently implement solutions
  • Prepares data for predictive and prescriptive modeling and finds hidden patterns using data

Sr. Data Engineer

We are looking for a Sr. Data Engineer to join our growing Quality Engineering t...
Salary:
Not provided
Data Ideology
Expiration Date
Until further notice
Requirements
  • Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience)
  • 5+ years of experience in data engineering, data warehousing, or data architecture
  • Expert-level experience with Snowflake, including data modeling, performance tuning, security, and migration from legacy platforms
  • Hands-on experience with Azure Data Factory (ADF) for building, orchestrating, and optimizing data pipelines
  • Strong experience with Informatica (PowerCenter and/or IICS) for ETL/ELT development, workflow management, and performance optimization
  • Deep knowledge of data modeling techniques (dimensional, tabular, and modern cloud-native patterns)
  • Proven ability to translate business requirements into scalable, high-performance data solutions
  • Experience designing and supporting end-to-end data pipelines across cloud and hybrid architectures
  • Strong proficiency in SQL and experience optimizing large-scale analytic workloads
  • Experience working within SDLC frameworks, CI/CD practices, and version control
Job Responsibility
  • Ability to collect and understand business requirements and translate those requirements into data models, integration strategies, and implementation plans
  • Lead modernization and migration initiatives to move clients from legacy systems into Snowflake, ensuring functionality, performance, and data integrity
  • Ability to work within the SDLC framework in multiple environments and understand the complexities and dependencies of the data warehouse
  • Optimize and troubleshoot ETL/ELT workflows, applying best practices for scheduling, orchestration, and performance tuning
  • Maintain documentation, architecture diagrams, and migration plans to support knowledge transfer and project tracking
What we offer
  • PTO Policy
  • Eligibility for Health Benefits
  • Retirement Plan
  • Work from Home
  • Fulltime

Sr Data Engineer

Resource Informatics Group, Inc. is actively seeking a skilled Senior Data Engin...
Location:
United States, Irving
Salary:
Not provided
Resource Informatics Group
Expiration Date
Until further notice
Requirements
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields
  • Strong expertise in data engineering and cloud-based solutions
  • 6+ years of experience in data engineering, architecture, and implementation of large-scale data solutions
  • Proficiency in designing and implementing data models, data structures, and algorithms
  • Advanced knowledge of SQL and NoSQL databases
  • Demonstrated expertise in optimizing data pipelines and improving data reliability, efficiency, and quality
  • Excellent problem-solving capabilities with a keen attention to detail
  • Strong communication and collaboration skills, with the ability to work effectively across diverse teams
  • Relevant certifications in cloud technologies (Azure, AWS, or GCP) advantageous
  • Master’s in Data Science or Computer Science or foreign equivalent, plus 6+ years of experience, OR Bachelor’s in Computer Science, Information Technology, or Electronics and Communication Engineering or foreign equivalent
Job Responsibility
  • Develop and execute ETL processes for data extraction, transformation, and loading into warehouses and data lakes
  • Architect data warehousing solutions using Azure Synapse Analytics for efficient querying and reporting
  • Optimize query performance, data processing speed, and resource utilization within Azure environments
  • Construct seamless data pipelines across Azure services utilizing Azure Data Factory, Databricks, and SQL Server Integration Services
  • Collaborate with stakeholders, including data scientists and analysts, to understand data requirements and deliver effective solutions
  • Manage large data volumes leveraging the Hadoop ecosystem for diverse source collection and loading
  • Design, maintain, and optimize data processing jobs using Hadoop MapReduce, Spark, and Hive, with coding in Java or Python for custom applications
  • Monitor job and cluster performance using tools like Ambari and custom monitoring scripts, scaling and maintaining Hadoop clusters and Azure data services
  • Ensure adherence to data security measures and governance standards
  • Integrate cross-cloud data with AWS and GCP services
  • Fulltime

Sr Data Engineer

(Locals or Nearby resources only). You will work with technologies that include ...
Location:
United States, Glendale
Salary:
Not provided
Enormous Enterprise
Expiration Date
Until further notice
Requirements
  • 7+ years of data engineering experience developing large data pipelines
  • Proficiency in at least one major programming language (e.g. Python, Java, Scala)
  • Hands-on production environment experience with distributed processing systems such as Spark
  • Hands-on production experience with data pipeline orchestration systems such as Airflow for creating and maintaining data pipelines
  • Experience with at least one major Massively Parallel Processing (MPP) or cloud database technology (Snowflake, Databricks, Big Query)
  • Experience in developing APIs with GraphQL
  • Advanced understanding of OLTP vs. OLAP environments
  • Candidates must work W2, no Corp-to-Corp
  • US Citizen, Green Card Holder, H4-EAD, TN-Visa
Job Responsibility
  • Contribute to maintaining, updating, and expanding existing Core Data platform data pipelines
  • Build and maintain APIs to expose data to downstream applications
  • Develop real-time streaming data pipelines
  • Collaborate with product managers, architects, and other engineers to drive the success of the Core Data platform
  • Contribute to developing and documenting both internal and external standards and best practices for pipeline configurations, naming conventions, and more
  • Ensure high operational efficiency and quality of the Core Data platform datasets to ensure our solutions meet SLAs and project reliability and accuracy to all our stakeholders (Engineering, Data Science, Operations, and Analytics teams)
What we offer
  • 3 levels of medical insurance for you and your family
  • Dental insurance for you and your family
  • 401k
  • Overtime
  • Sick leave policy: accrue 1 hour for every 30 hours worked up to 48 hours

Sr Staff Data Scientist

The mission of the Data Science team at Mist is to leverage state-of-the-art ML and ...
Location:
United States, San Jose
Salary:
148,000.00 - 340,500.00 USD / Year
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements
  • MS or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math, or a related quantitative discipline
  • Excellent understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc.
  • 3+ years of experience building data science-driven solutions, including data collection, feature selection, model training, and post-deployment validation
  • Proficient in coding (preferably in Python), processing large-scale data sets, and developing machine learning models
  • Able to communicate effectively in a variety of formats and collaborate with diverse teams
Job Responsibility
  • Research and develop statistical learning models for data analysis
  • Collaborate with product management and engineering departments to understand company needs and devise possible solutions
  • Keep up-to-date with latest technology trends
  • Communicate results and ideas to key decision makers
  • Implement new statistical or other mathematical methodologies as needed for specific models or analysis
  • Optimize joint development efforts through appropriate database use and project design
What we offer
  • Comprehensive suite of benefits supporting physical, financial and emotional wellbeing
  • Personal and professional development programs
  • Inclusive workplace policies
  • Fulltime

Sr Data Scientist

Location:
India, Hyderabad
Salary:
Not provided
Genzeon
Expiration Date
Until further notice
Requirements
  • 5+ years of hands-on experience developing and deploying large language models and machine learning systems, and working with PyTorch
  • A thorough understanding of machine learning, particularly deep learning techniques, including knowledge of neural network architectures, training methods, and optimization algorithms
  • Proficiency in Python and AI technologies, including experience with NLP libraries (e.g., Hugging Face Transformers, NLTK, spaCy) and text classification
  • Experience with frameworks: PyTorch or TensorFlow
  • Experience with cloud services (AWS, Azure) and ML deployment tooling such as Docker
  • Familiarity with model fine-tuning and optimization techniques for LLMs
  • Proven track record of innovative solutions in the field of LLMs
  • Strong communication skills, with the ability to explain complex AI concepts to non-expert audiences
Job Responsibility
  • LLM Architecture: Good understanding of the architecture underlying large language models, such as Transformer-based models and their variants. Design and implement deep learning model architectures using PyTorch
  • Language Model Training and Fine-Tuning: Experience in training large-scale language models from scratch, as well as fine-tuning pre-trained models on domain data
  • Data Preprocessing for NLP: Skilled in preprocessing textual data, including tokenization, stemming, lemmatization, and handling of different text encoding
  • Transfer Learning and Adaptation: Proficiency in applying transfer learning techniques to adapt existing LLMs to new languages, domains, or specific business needs
  • Data Annotation and Evaluation: Skills in designing and implementing data annotation strategies for training LLMs and evaluating their performance using appropriate metrics
  • Scalability and Deployment: Experience in scaling LLMs for production environments, ensuring efficiency and robustness in deployment
  • Model Training, Optimization, and Evaluation: Evaluate the performance of PyTorch models using appropriate metrics and techniques like cross-validation, holdout sets, or online evaluation. This encompasses the complete cycle of training, fine-tuning, and validating language models. You will be designing and adapting LLMs for use in virtual assistants, Information retrieval and extraction etc
  • Experimentation with Emerging Technologies and Methods: Actively exploring new technologies and methodologies in language model development, including experimental frameworks and software tools
  • LLM Alignment: Understanding of algorithms like DPO, PPO, KPO, RLHF, and using them for guardrails
  • AI Data Retrieval: Data retrieval from unstructured data, extracting key-value pairs using techniques like Donut, LayoutLM, and Table Transformers
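
As a flavor of the preprocessing steps this card lists (tokenization and handling of raw text), a minimal pure-Python word tokenizer; production pipelines would use the NLP libraries named above (e.g., NLTK or spaCy), and this regex-based sketch is only illustrative:

```python
import re

def tokenize(text: str) -> list[str]:
    # Lowercase, then split on runs of non-alphanumeric characters --
    # a crude word tokenizer; drops the empty strings split can produce
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

tokens = tokenize("Large Language Models (LLMs) power virtual assistants.")
print(tokens)
```

Real tokenizers for LLM work are subword-based (e.g., BPE); this sketch only shows the shape of the preprocessing step, not a drop-in replacement.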