
Cost Allocation Data Engineer


Inabia Solutions & Consulting

Location:
United States, Remote

Contract Type:
Employment contract

Salary:
55.00 USD / Hour

Job Description:

This is not a standard data engineering role. As a Cost Allocation Data Engineer, you will sit at the intersection of cloud economics, financial modeling, and data engineering, building the core allocation logic that enables showback and chargeback across complex cloud environments. Your work will directly influence how business leaders understand, govern, and optimize cloud spend. This is the most business-logic-intensive engineering role on the team, ideal for someone who thrives on translating financial concepts into scalable, auditable data models.

Job Responsibility:

  • Design and implement a cost allocation rule engine supporting: Tag-based attribution, Account/subscription-based mappings, Custom business-defined allocation rules
  • Build amortization models for upfront cloud commitments: 1-year and 3-year reservations, Prepaid and committed-use discounts
  • Implement shared cost distribution models, including: Proportional allocation, Even split, Fixed-coefficient weighting
  • Create attribution logic for untagged costs, leveraging: Account ownership, Usage heuristics, Business metadata
  • Develop budget vs. actual variance models to support cost governance
  • Design forecasting input models using historical trends and seasonality
  • Ensure all models align with accounting principles, including: Period alignment, Matching principle, Accrual and amortization logic
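As a rough illustration of the shared cost distribution models named above (proportional allocation, even split, fixed-coefficient weighting), here is a minimal sketch. All function and field names are hypothetical, not part of Inabia's actual system:

```python
# Hypothetical sketch of the three shared-cost distribution models:
# proportional allocation, even split, and fixed-coefficient weighting.
# Names are illustrative only.

def allocate_shared_cost(total_cost, teams, model="proportional",
                         usage=None, weights=None):
    """Split total_cost across teams; returns {team: share}."""
    if model == "even":
        # Even split: each cost center carries an equal share.
        share = total_cost / len(teams)
        return {t: share for t in teams}
    if model == "proportional":
        # Proportional allocation: shares follow measured usage.
        total_usage = sum(usage[t] for t in teams)
        return {t: total_cost * usage[t] / total_usage for t in teams}
    if model == "fixed":
        # Fixed-coefficient weighting: business-defined weights summing to 1.
        return {t: total_cost * weights[t] for t in teams}
    raise ValueError(f"unknown model: {model}")

shares = allocate_shared_cost(
    1200.0, ["platform", "ml", "web"],
    usage={"platform": 50, "ml": 30, "web": 20},
)
# proportional: platform 600.0, ml 360.0, web 240.0
```

In a production rule engine these models would typically live as data-driven configuration (so allocations stay auditable) rather than hard-coded branches.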

Requirements:

  • 5–7 years of hands-on data engineering experience
  • Proven experience building chargeback and/or showback systems
  • Strong exposure to cloud cost allocation in enterprise environments
  • Expert-level SQL (complex joins, window functions, performance tuning)
  • Strong dimensional data modeling skills (facts, dimensions, slowly changing dimensions)
  • Experience with modern data modeling tools such as: dbt, LookML, or equivalent frameworks
  • Deep understanding of FinOps cost allocation methodologies
  • Solid grasp of financial accounting concepts, including: Amortization, Accruals, Budgeting and variance analysis
  • Work Authorization: U.S. Citizen or Green Card only (no sponsorship available)
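To make the amortization and period-alignment concepts in the requirements concrete, here is a minimal straight-line amortization sketch for an upfront commitment. It assumes monthly periods; the function name and rounding policy are illustrative, not a prescribed implementation:

```python
# Minimal sketch: straight-line amortization of an upfront cloud commitment,
# aligning the expense to the months the commitment covers (period alignment /
# matching principle) rather than booking it all in the purchase month.
# Monthly periods and last-period rounding are assumptions for illustration.

from datetime import date

def amortize_upfront(amount, start, months):
    """Spread amount evenly over `months` monthly periods starting at `start`.

    Returns [(period_start, expense), ...]; the last period absorbs any
    rounding remainder so the schedule sums back to `amount` exactly.
    """
    per_period = round(amount / months, 2)
    schedule = []
    for i in range(months):
        year = start.year + (start.month - 1 + i) // 12
        month = (start.month - 1 + i) % 12 + 1
        expense = (per_period if i < months - 1
                   else round(amount - per_period * (months - 1), 2))
        schedule.append((date(year, month, 1), expense))
    return schedule

plan = amortize_upfront(1200.00, date(2026, 2, 1), 12)
# 12 periods of 100.0 each, February 2026 through January 2027
```

A budget-vs-actual variance model would then compare each period's amortized expense against the budgeted figure for the same period, keeping both on the same accrual basis.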

Nice to have:

  • Prior work supporting FinOps, Finance, or Cost Management teams is highly preferred
  • You think in business rules, not just pipelines
  • You can explain financial models clearly to both engineers and finance stakeholders
  • You build data models that are: Auditable, Explainable, Scalable
  • You’ve worked in environments where cost transparency directly impacts executive decisions

What we offer:
  • Work on high-visibility, mission-critical financial data models
  • Fully remote U.S.-based role
  • Collaborate with senior FinOps, Finance, and Engineering leaders
  • Opportunity to shape the foundation of enterprise cloud cost governance

Additional Information:

Job Posted:
February 05, 2026

Employment Type:
Fulltime

Work Type:
Remote work

Similar Jobs for Cost Allocation Data Engineer

Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location: India, Hyderabad
Salary: Not provided
Company: NStarX
Expiration Date: Until further notice

Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles

Employment Type: Fulltime

Data Engineer, Solutions Architecture

We are seeking a talented Data Engineer to design, build, and maintain our data ...
Location: United States, Scottsdale
Salary: 90000.00 - 120000.00 USD / Year
Company: Clearway Energy
Expiration Date: Until further notice

Requirements:
  • 2-4 years of hands-on data engineering experience in production environments
  • Bachelor's degree in Computer Science, Engineering, or a related field
  • Proficiency in Dagster or Airflow for pipeline scheduling, dependency management, and workflow automation
  • Advanced-level Snowflake administration, including virtual warehouses, clustering, security, and cost optimization
  • Proficiency in dbt for data modeling, testing, documentation, and version control of analytical transformations
  • Strong Python and SQL skills for data processing and automation
  • 1-2+ years of experience with continuous integration and continuous deployment practices and tools (Git, GitHub Actions, GitLab CI, or similar)
  • Advanced SQL skills, database design principles, and experience with multiple database platforms
  • Proficiency in AWS/Azure/GCP data services, storage solutions (S3, Azure Blob, GCS), and infrastructure as code
  • Experience with APIs, streaming platforms (Kafka, Kinesis), and various data connectors and formats
Job Responsibility:
  • Design, deploy, and maintain scalable data infrastructure to support enterprise analytics and reporting needs
  • Manage Snowflake instances, including performance tuning, security configuration, and capacity planning for growing data volumes
  • Optimize query performance and resource utilization to control costs and improve processing speed
  • Build and orchestrate complex ETL/ELT workflows using Dagster to ensure reliable, automated data processing for asset management and energy trading
  • Develop robust data pipelines that handle high-volume, time-sensitive energy market data and asset generation and performance metrics
  • Implement workflow automation and dependency management for critical business operations
  • Develop and maintain dbt models to transform raw data into business-ready analytical datasets and dimensional models
  • Create efficient SQL-based transformations for complex energy market calculations and asset performance metrics
  • Support advanced analytics initiatives through proper data preparation and feature engineering
  • Implement comprehensive data validation, testing, and monitoring frameworks to ensure accuracy and consistency across all energy and financial data assets
What we offer:
  • generous PTO
  • medical, dental & vision care
  • HSAs with company contributions
  • health FSAs
  • dependent daycare FSAs
  • commuter benefits
  • relocation
  • a 401(k) plan with employer match
  • a variety of life & accident insurances
  • fertility programs

Employment Type: Fulltime

Senior Analytics Engineer

Join Qargo as a Senior Analytics Engineer and turn complex logistics data into c...
Location: Belgium, Ghent
Salary: Not provided
Company: Qargo
Expiration Date: Until further notice

Requirements:
  • Extensive experience in data analytics, BI engineering, or analytics engineering in a SaaS or data-driven environment
  • Strong proficiency in SQL, data modelling, and dashboarding tools (Lightdash, Superset, Tableau, Looker, or PowerBI)
  • Experience with analytics platforms such as Mixpanel and user behavior tracking methodologies is a plus
  • Proven ability to collaborate effectively with multidisciplinary teams (product, engineering, sales, finance)
  • Strong analytical and problem-solving skills, with an eye for scalability and data quality
  • A background in Computer Science, Data Engineering, Statistics, or a related field
  • A proactive mindset with the ability to take ownership of complex problems and guide them to completion
Job Responsibility:
  • Own and evolve Qargo’s data architecture, ensuring scalable, reliable and high-quality data pipelines
  • Maintain, extend and optimise our in-app Lightdash dashboards, including operational, financial, and performance insights
  • Build and refine internal BI dashboards to support data-driven decision-making across departments
  • Develop account management dashboards that reveal feature adoption, revenue insights, and upsell opportunities
  • Create product and engineering analytics, including feature usage dashboards using Mixpanel or custom-built tracking solutions
  • Lead integration-related reporting, providing visibility on integration health, tenant usage, and performance
  • Build billing and cost-analysis dashboards, including API call cost allocation and tenant-level breakdowns
  • Mentor team members, set best practices, and raise the bar for analytics engineering within Qargo
What we offer:
  • Real impact and ownership in a growing international scale-up
  • A supportive and collaborative team culture
  • Hybrid working setup with flexibility and trust
  • Opportunities to learn, grow, and expand your technical knowledge
  • Competitive salary and benefits package

Data Architect/Databricks Consultant

We are seeking a specialized Databricks Architect with deep expertise in cost op...
Location: India, Hyderabad
Salary: Not provided
Company: Genzeon
Expiration Date: Until further notice

Requirements:
  • 8+ years of experience in big data architecture with focus on cost optimization
  • 5+ years of hands-on Databricks experience with proven cost reduction achievements
  • Demonstrated experience architecting and executing complete platform migrations from Databricks to alternative solutions with successful outcomes
  • 6+ years of advanced Apache Spark development and cluster management experience
  • Track record of achieving significant cost savings (minimum 40%+) in cloud data platforms
  • Expert knowledge of Databricks pricing models, compute types, and cost drivers
  • Experience with FinOps practices and cloud cost management tools
  • Proven ability to implement automated cost controls and budget management systems
  • Knowledge of alternative platforms and their cost structures (EMR, HDInsight, GCP Dataproc, etc.)
  • Deep expertise in migrating complex data workloads between different Spark platforms
Job Responsibility:
  • Conduct comprehensive cost analysis and auditing of existing Databricks deployments across multiple workspaces
  • Develop and implement aggressive cost reduction strategies targeting 30-50% savings through cluster optimization
  • Design and deploy automated cost monitoring solutions with real-time alerts and budget controls
  • Optimize cluster configurations, auto-scaling policies, and job scheduling to minimize compute costs
  • Implement spot instance strategies and preemptible VM usage for non-critical workloads
  • Establish cost allocation frameworks and implement chargeback mechanisms for business unit accountability
  • Create cost governance policies and developer guidelines to prevent cost overruns
  • Analyze and optimize storage costs including Delta Lake table optimization and data lifecycle management
  • Lead strategic initiatives to migrate workloads away from Databricks to cost-effective alternatives
  • Assess existing Databricks implementations and create detailed migration roadmaps to target platforms

AWS Data Engineering Manager

The AWS Data Engineering Manager plays a mission-critical role in enabling BT’s ...
Location: United Kingdom, London
Salary: Not provided
Company: Plusnet
Expiration Date: Until further notice

Requirements:
  • Leadership in data engineering and Agile delivery
  • Advanced knowledge of AWS data services (e.g. S3, Glue, EMR, Lambda, Redshift)
  • Expertise in big data technologies and distributed systems
  • Strong coding and optimisation skills (e.g. Python, Spark, SQL)
  • Data quality management and observability
  • Strategic thinking and solution architecture
  • Stakeholder and vendor management
  • Continuous improvement and innovation mindset
  • Excellent communication and mentoring abilities
  • Proven experience managing data engineering teams in cloud-native environments
Job Responsibility:
  • Team Leadership & Coaching: Lead and mentor a team of data engineers, guiding them through complex, open-ended projects and fostering a high-performance, collaborative culture
  • Technical Direction & Strategy: Shape the technical vision of the data engineering function, contributing deep expertise across big data, systems design, machine learning, and cloud infrastructure
  • Data Infrastructure & Optimisation: Oversee the development and maintenance of accurate, high-quality datasets and optimised codebases that support data products, pipelines, and scalable architectures
  • Agile Delivery & Best Practices: Ensure teams follow Agile methodologies and engineering best practices to consistently deliver high-quality, production-ready solutions
  • Data Quality & Visibility: Coordinate the use of internal and external data sources to define and monitor key indicators of data quality, pipeline health, and infrastructure performance
  • Resource Management: Allocate engineering resources effectively to address priority issues, ensuring timely responses and measurable outcomes
  • Solution Design & Delivery: Translate business objectives into scalable, end-to-end data solutions that meet customer needs and align with strategic timelines
  • Vendor & Partner Collaboration: Manage relationships with outsourced partners and suppliers, setting clear expectations around deliverables, quality, timelines, and cost
  • Innovation & Knowledge Sharing: Champion emerging trends in data engineering, continuously developing and sharing knowledge to drive innovation and technical excellence
  • Talent Development: Coach and develop team members through upskilling, performance management, and recruitment to build future-ready capabilities

Employment Type: Fulltime

Data Engineer, Product Analytics

As a Data Engineer at Meta, you will shape the future of people-facing and busin...
Location: United States, Sunnyvale
Salary: 147000.00 - 208000.00 USD / Year
Company: Meta
Expiration Date: Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • 4+ years of experience where the primary responsibility involves working with data (e.g., data analyst, data scientist, data engineer)
  • 4+ years of experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.)
Job Responsibility:
  • Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
  • Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
  • Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
  • Define and manage Service Level Agreements for all data sets in allocated areas of ownership
  • Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
  • Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
  • Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
  • Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
  • Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
  • Influence product and cross-functional teams to identify data opportunities to drive impact
What we offer:
  • bonus
  • equity
  • benefits

Data Engineer, Product Analytics

As a Data Engineer at Meta, you will shape the future of people-facing and busin...
Location: Israel, Tel Aviv
Salary: Not provided
Company: Meta
Expiration Date: Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent
  • 4+ years of experience where the primary responsibility involves working with data (e.g., data analyst, data scientist, data engineer)
  • 4+ years of experience (or a minimum of 2+ years with a Ph.D) with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.)
Job Responsibility:
  • Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
  • Create and contribute to frameworks that improve the efficacy of logging data, while working with data infrastructure to triage issues and resolve
  • Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
  • Define and manage Service Level Agreements for all data sets in allocated areas of ownership
  • Determine and implement the security model based on privacy requirements, confirm safeguards are followed, address data quality issues, and evolve governance processes within allocated areas of ownership
  • Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
  • Solve our most challenging data integration problems, utilizing optimal Extract, Transform, Load (ETL) patterns, frameworks, query techniques, sourcing from structured and unstructured data sources
  • Assist in owning existing processes running in production, optimizing complex code through advanced algorithmic concepts
  • Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
  • Influence product and cross-functional teams to identify data opportunities to drive impact

Data Engineering Lead

Job description
Location: United States, Mechanicsville
Salary: Not provided
Company: Yotta Tech Ports
Expiration Date: Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, or a related field (advanced degree preferred)
  • 10+ years of experience in data engineering, with a proven track record of leadership and technical expertise in managing complex data projects
  • Proficiency in programming languages such as Python, Java, or Scala, as well as expertise in SQL and relational databases (e.g., PostgreSQL, MySQL)
  • Strong understanding of distributed computing, cloud technologies (e.g., AWS), and big data frameworks (e.g., Hadoop, Spark)
  • Experience with data architecture design, data modeling, and optimization techniques
  • Excellent communication, collaboration, and leadership skills, with the ability to effectively manage remote teams and engage with onshore stakeholders
  • Proven ability to adapt to evolving project requirements and effectively prioritize tasks in a fast-paced environment
Job Responsibility:
  • Lead and manage an offshore team of data engineers, providing strategic guidance, mentorship, and support to ensure the successful delivery of projects and the development of team members
  • Collaborate closely with onshore stakeholders to understand project requirements, allocate resources efficiently, and ensure alignment with client expectations and project timelines
  • Drive the technical design, implementation, and optimization of data pipelines, ETL processes, and data warehouses, ensuring scalability, performance, and reliability
  • Define and enforce engineering best practices, coding standards, and data quality standards to maintain high-quality deliverables and mitigate project risks
  • Stay abreast of emerging technologies and industry trends in data engineering, and provide recommendations for tooling, process improvements, and skill development
  • Assume a data architect role as needed, leading the design and implementation of data architecture solutions, data modeling, and optimization strategies
  • Demonstrate expertise in cloud data services, including AWS services such as Amazon Redshift, Amazon EMR, and AWS Glue, to design and implement scalable data solutions
  • Apply experience with cloud infrastructure services such as AWS EC2 and AWS S3 to optimize data processing and storage
  • Knowledge of cloud security best practices, IAM roles, and encryption mechanisms to ensure data privacy and compliance
  • Proficiency in managing or implementing cloud data warehouse solutions, including data modeling, schema design, performance tuning, and optimization techniques

Employment Type: Fulltime