
Data Engineer, Financial Systems


Replit

Location: United States, Foster City

Contract Type: Not provided

Salary: 175000.00 - 335000.00 USD / Year

Job Description:

Replit is redefining how software is built, and who gets to build it. Our mission is to achieve Autonomy for All: making programming accessible, collaborative, and powered by AI. To realize this vision, we are building a brand that is as iconic, inventive, and human as the product itself. We need a Financial Systems Data Engineer to build the data infrastructure that ensures we can accurately recognize revenue, reconcile payments, pass audits, and make sound financial decisions as we scale. This Data team role works cross-functionally with Finance, Accounting, and Sales to build the systems they depend on. You'll own the data models and pipelines that power financial reporting, billing reconciliation, and revenue operations. This is a building role: we need someone who can own the financial data domain end-to-end.

Job Responsibility:

  • Build unified data models for payments and subscriptions across all revenue streams (Stripe, App Store, Play Store) to enable accurate MRR analysis, revenue recognition, and financial reporting
  • Design automated reconciliation pipelines and audit trails that surface payment discrepancies and data quality issues before they impact month-end close or financial audits (a simplified sketch follows this list)
  • Create foundational billing infrastructure for AI agent usage and consumption products, managing the end-to-end flow from metering to revenue recognition while partnering with Engineering, Finance, and Sales Operations
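
As a rough illustration of the reconciliation responsibility above, here is a minimal sketch in Python. The inputs are hypothetical, simplified records (a processor report and an internal ledger keyed by a shared charge identifier); a production pipeline against Stripe or app store exports would handle far more cases, and would typically write discrepancies to an audit table on a schedule rather than print them.

    from decimal import Decimal

    # Hypothetical, simplified inputs: in practice these would come from a
    # processor export (e.g., Stripe balance transactions) and the internal
    # revenue ledger, each keyed by a shared charge identifier.
    processor_rows = {
        "ch_001": Decimal("49.00"),
        "ch_002": Decimal("19.00"),
        "ch_003": Decimal("99.00"),
    }
    ledger_rows = {
        "ch_001": Decimal("49.00"),
        "ch_002": Decimal("19.50"),  # amount mismatch
        # "ch_003" is missing from the ledger entirely
    }

    def reconcile(processor: dict, ledger: dict) -> list[dict]:
        """Return one discrepancy record per charge that fails to match."""
        discrepancies = []
        for charge_id in sorted(processor.keys() | ledger.keys()):
            p_amount = processor.get(charge_id)
            l_amount = ledger.get(charge_id)
            if p_amount is None or l_amount is None:
                discrepancies.append({"charge_id": charge_id,
                                      "issue": "missing_on_one_side",
                                      "processor": p_amount, "ledger": l_amount})
            elif p_amount != l_amount:  # exact Decimal comparison: pennies matter
                discrepancies.append({"charge_id": charge_id,
                                      "issue": "amount_mismatch",
                                      "processor": p_amount, "ledger": l_amount})
        return discrepancies

    for row in reconcile(processor_rows, ledger_rows):
        print(row)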

Requirements:

  • 7+ years in data engineering with significant exposure to financial and payments data
  • Deep experience with Stripe or similar billing data models—you understand the relationship between subscriptions, invoices, payment intents, charges, and balance transactions
  • Strong understanding of SaaS financial metrics (ARR, MRR, churn, expansion) and how they're calculated from raw payment data (see the sketch after this list)
  • Experience building reconciliation pipelines and automated discrepancy detection
  • Expert-level SQL and dbt proficiency—you'll own financial data models end-to-end
  • Track record working directly with Accounting and Finance teams on data accuracy requirements
  • Experience operating in ambiguous environments where you define the requirements, not just execute them
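
To make the metrics requirement above concrete, here is a minimal, hypothetical sketch of deriving MRR and ARR from normalized subscription records. The field names (plan_amount, interval, status) and normalization rules are illustrative assumptions, not Replit's actual data model; real calculations also have to handle proration, discounts, trials, and multi-currency.

    from decimal import Decimal

    # Hypothetical normalized subscription records. Annual plans are
    # converted to a monthly amount so every row contributes to MRR on
    # the same basis.
    subscriptions = [
        {"id": "sub_1", "plan_amount": Decimal("25.00"), "interval": "month", "status": "active"},
        {"id": "sub_2", "plan_amount": Decimal("240.00"), "interval": "year", "status": "active"},
        {"id": "sub_3", "plan_amount": Decimal("25.00"), "interval": "month", "status": "canceled"},
    ]

    def monthly_amount(sub: dict) -> Decimal:
        """Normalize a subscription's recurring charge to a monthly figure."""
        if sub["interval"] == "year":
            return sub["plan_amount"] / 12
        return sub["plan_amount"]

    # Only active subscriptions count toward MRR; ARR is the annualized figure.
    mrr = sum((monthly_amount(s) for s in subscriptions if s["status"] == "active"),
              Decimal("0"))
    arr = mrr * 12
    print(f"MRR: {mrr}, ARR: {arr}")  # MRR: 45.00, ARR: 540.00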

Nice to have:

  • Understanding of revenue recognition principles (ASC 606 basics)
  • Experience with mobile app store financial reporting (App Store Connect, Google Play Console)
  • Familiarity with usage-based and consumption billing models
  • Experience preparing data infrastructure for audits (SOC 2, financial audits)
  • Exposure to billing/subscription, tax and payments management systems (Orb, Chargebee, Recurly, or similar)
  • You understand that "close enough" isn't acceptable for financial data—pennies matter
  • You've debugged payment discrepancies and know the satisfaction of finding the root cause
  • You've built data infrastructure from scratch, not just maintained existing systems
  • You're comfortable making decisions with incomplete information and adjusting as you learn
  • You treat data quality issues as engineering problems to solve, not business problems to escalate
  • You've identified and solved problems no one asked you to solve

What we offer:
  • Competitive Salary & Equity
  • 401(k) Program with a 4% match
  • Health, Dental, Vision and Life Insurance
  • Short Term and Long Term Disability
  • Paid Parental, Medical, Caregiver Leave
  • Commuter Benefits
  • Monthly Wellness Stipend
  • Autonomous Work Environment
  • In Office Set-Up Reimbursement
  • Flexible Time Off (FTO) + Holidays
  • Quarterly Team Gatherings
  • In Office Amenities

Additional Information:

Job Posted: February 18, 2026
Employment Type: Full-time
Work Type: Hybrid work


Similar Jobs for Data Engineer, Financial Systems

Data Engineering & Analytics Lead

Premium Health is seeking a highly skilled, hands-on Data Engineering & Analytic...
Location: United States, Brooklyn
Salary: Not provided
Premium Health
Expiration Date: Until further notice
Requirements
  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred
  • Proven track record and progressively responsible experience in data engineering, data architecture, or related technical roles
  • Healthcare experience preferred
  • Strong knowledge of data engineering principles, data integration, ETL processes, and semantic mapping techniques and best practices
  • Experience implementing data quality management processes, data governance frameworks, cataloging, and master data management concepts
  • Familiarity with healthcare data standards (e.g., HL7, FHIR, etc.), health information management principles, and regulatory requirements (e.g., HIPAA)
  • Understanding of healthcare data, including clinical, operational, and financial data models, preferred
  • Advanced proficiency in SQL, data modeling, database design, optimization, and performance tuning
  • Experience designing and integrating data from disparate systems into harmonized data models or semantic layers
  • Hands-on experience with modern cloud-based data platforms (e.g., Azure, AWS, GCP)
Job Responsibility
  • Collaborate with the CDIO and Director of Technology to define a clear data vision aligned with the organization's goals and execute the enterprise data roadmap
  • Serve as a thought leader for data engineering and analytics, guiding the evolution of our data ecosystem and championing data-driven decision-making across the organization
  • Build and mentor a small data team, providing technical direction and performance feedback, fostering best practices and continuous learning, while remaining a hands-on implementor
  • Define and implement best practices, standards, and processes for data engineering, analytics, and data management across the organization
  • Design, implement, and maintain a scalable, reliable, and high-performing modern data infrastructure, aligned with the organizational needs and industry best practices
  • Architect and maintain data lake/lakehouse, warehouse, and related platform components to support analytics, reporting, and operational use cases
  • Establish and enforce data architecture standards, governance models, naming conventions, and documentation
  • Develop, optimize, and maintain scalable ETL/ELT pipelines and data workflows to collect, transform, normalize, and integrate data from diverse systems
  • Implement robust data quality processes, validation, monitoring, and error-handling frameworks
  • Ensure data is accurate, timely, secure, and ready for self-service analytics and downstream applications
What we offer
  • Paid Time Off, Medical, Dental and Vision plans, Retirement plans
  • Public Service Loan Forgiveness (PSLF)
Employment Type: Full-time

Senior Software Engineer

We are seeking a highly skilled senior software engineer to join our team. This ...
Location: India, Bengaluru
Salary: Not provided
Rearc
Expiration Date: Until further notice
Requirements
  • Deep expertise in Java: Proven proficiency in designing, developing, and optimizing high-performance, multithreaded Java applications
  • Comprehensive SDLC Experience: Extensive experience with the entire software development lifecycle, including requirements gathering, architectural design, coding, testing (unit, integration, performance), deployment, and maintenance
  • Data Engineering & Financial Data Processing: Proven experience in data engineering, including building, maintaining, and optimizing complex data pipelines for real-time and historical financial stock market data
  • Financial Market Acumen: A strong background in the financial industry, with a working knowledge of financial instruments, market data (e.g., tick data, OHLC), and common financial models
  • Problem-Solving & Adaptability: Excellent problem-solving skills and the ability to work with complex and evolving requirements
  • Collaboration & Communication: Superior communication skills, capable of collaborating effectively with quantitative analysts, data scientists, and business stakeholders
  • Testing & CI/CD: A strong ability to work on development and all forms of testing, with working knowledge of CI/CD pipelines and deployments
  • Database Proficiency: Experience with various database technologies (SQL and NoSQL) and the ability to design database schemas for efficient storage and retrieval of financial data
Job Responsibility
  • Design and Development: Architect, build, and maintain robust, scalable, and low-latency Java applications for processing real-time and historical financial stock market data
  • Data Pipeline Engineering: Engineer and manage sophisticated data pipelines using modern data technologies to ensure timely and accurate data availability for analytics and trading systems
  • Performance Optimization: Profile and optimize applications for maximum speed, scalability, and efficiency
  • System Integration: Integrate data from various financial market sources and ensure seamless data flow into downstream systems
  • Mentorship and Best Practices: Provide guidance and mentorship to other engineers, contribute to code reviews, and advocate for best practices
  • Operational Excellence: Participate in the full software development lifecycle, from initial design to production support, ensuring system reliability and performance

Data Engineer

This is a data engineer position - a programmer responsible for the design, deve...
Location: India, Chennai
Salary: Not provided
Citi
Expiration Date: Until further notice
Requirements
  • 5-8 years of experience working in data ecosystems
  • 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data Integration, Migration & Large-Scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio, etc.) - ETL design & build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Job Responsibility
  • Ensuring high quality software development, with complete documentation and traceability
  • Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large scale financial data
  • Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
  • Ensure efficient data storage and retrieval using Big Data technologies
  • Implement best practices for Spark performance tuning, including partitioning, caching, and memory management
  • Maintain high code quality through testing, CI/CD pipelines and version control (Git, Jenkins)
  • Work on batch processing frameworks for Market risk analytics
  • Promoting unit/functional testing and code inspection processes
  • Work with business stakeholders and Business Analysts to understand the requirements
  • Work with other data scientists to understand and interpret complex datasets
Employment Type: Full-time

Python Data Engineer

The FX Data Analytics & AI Technology team, within Citi's FX Technology organiza...
Location: India, Pune
Salary: Not provided
Citi
Expiration Date: Until further notice
Requirements
  • 8-12 years of experience
  • Master’s degree or above (or equivalent education) in a quantitative discipline
  • Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate
  • Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (matplotlib, plotly, streamlit, Flask, etc.)
  • Experience developing and implementing Gen AI applications from data in a financial context
  • Proficiency working with version control systems such as Git, and familiarity with Linux computing environments
  • Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc.
  • Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools
  • Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences
  • Ideally, some experience working with CI/CD pipelines and containerization technologies like Docker and Kubernetes
Job Responsibility
  • Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business
  • Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability
  • Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology
  • Develop and integrate functionality to ensure adherence with best-practices in terms of data management, need-to-know (NTK), and data governance
  • Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders
Employment Type: Full-time

Data Engineer, Solutions Architecture

We are seeking a talented Data Engineer to design, build, and maintain our data ...
Location: United States, Scottsdale
Salary: 90000.00 - 120000.00 USD / Year
Clearway Energy
Expiration Date: Until further notice
Requirements
  • 2-4 years of hands-on data engineering experience in production environments
  • Bachelor's degree in Computer Science, Engineering, or a related field
  • Proficiency in Dagster or Airflow for pipeline scheduling, dependency management, and workflow automation
  • Advanced-level Snowflake administration, including virtual warehouses, clustering, security, and cost optimization
  • Proficiency in dbt for data modeling, testing, documentation, and version control of analytical transformations
  • Strong Python and SQL skills for data processing and automation
  • 1-2+ years of experience with continuous integration and continuous deployment practices and tools (Git, GitHub Actions, GitLab CI, or similar)
  • Advanced SQL skills, database design principles, and experience with multiple database platforms
  • Proficiency in AWS/Azure/GCP data services, storage solutions (S3, Azure Blob, GCS), and infrastructure as code
  • Experience with APIs, streaming platforms (Kafka, Kinesis), and various data connectors and formats
Job Responsibility
  • Design, deploy, and maintain scalable data infrastructure to support enterprise analytics and reporting needs
  • Manage Snowflake instances, including performance tuning, security configuration, and capacity planning for growing data volumes
  • Optimize query performance and resource utilization to control costs and improve processing speed
  • Build and orchestrate complex ETL/ELT workflows using Dagster to ensure reliable, automated data processing for asset management and energy trading
  • Develop robust data pipelines that handle high-volume, time-sensitive energy market data and asset generation and performance metrics
  • Implement workflow automation and dependency management for critical business operations
  • Develop and maintain dbt models to transform raw data into business-ready analytical datasets and dimensional models
  • Create efficient SQL-based transformations for complex energy market calculations and asset performance metrics
  • Support advanced analytics initiatives through proper data preparation and feature engineering
  • Implement comprehensive data validation, testing, and monitoring frameworks to ensure accuracy and consistency across all energy and financial data assets
What we offer
  • generous PTO
  • medical, dental & vision care
  • HSAs with company contributions
  • health FSAs
  • dependent daycare FSAs
  • commuter benefits
  • relocation
  • a 401(k) plan with employer match
  • a variety of life & accident insurances
  • fertility programs
Employment Type: Full-time

Software Engineer - Market Data

We are looking for a skilled and experienced Software Engineer to join our team,...
Location: United States, New York
Salary: 200000.00 - 250000.00 USD / Year
Clear Street
Expiration Date: Until further notice
Requirements
  • 8+ years of professional experience implementing low-latency, high-throughput data pipelines
  • Solid understanding of distributed systems and the challenges involved in real-time data pipelines (e.g., data consistency, fault tolerance, scalability)
  • Familiarity with financial market data, including security prices, and asset classes like equities, options, futures, etc.
  • Strong familiarity with Linux/BSD
  • Familiarity with TCP/IP and UDP (Unicast/Multicast) networking
  • You communicate technical ideas with ease and always look to collaborate to deliver high quality products
  • You are a team player, with experience working effectively with other engineers toward common goals
Job Responsibility
  • Design, develop, and maintain real-time data pipelines to handle financial market data with low latency and high throughput in a resilient manner
  • Work with various asset classes such as equities, options, futures, and other financial instruments to ensure timely and accurate data processing
  • Collaborate with product, trading, and risk teams to understand requirements and deliver high-quality solutions that meet business needs
  • Develop efficient mechanisms for integrating market data feeds from exchanges and other sources into our systems
  • Troubleshoot and resolve performance issues, data discrepancies, and ensure data integrity across the pipeline
  • Continuously monitor the performance and health of data pipelines, identifying and mitigating potential issues before they impact system performance
What we offer
  • Competitive compensation packages
  • Company equity
  • 401k matching
  • Gender neutral parental leave
  • Full medical, dental and vision insurance
  • Lunch stipends
  • Fully stocked kitchens
  • Happy hours
  • A great location
  • Amazing views
Employment Type: Full-time

Senior Data Engineer

Come work on fantastically high-scale systems with us! Blis is an award-winning,...
Location: United Kingdom, Edinburgh
Salary: Not provided
Blis
Expiration Date: Until further notice
Requirements
  • 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and native-supporting technologies (e.g., Apache Airflow)
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e., streaming data, relational and non-relational databases, and distributed processing technologies (e.g., Spark)
  • Fluency with the Python libraries typical of data science, e.g., pandas, scikit-learn, scipy, numpy, MLlib, and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • Professional approach to managing and reporting on tasks, ensuring appropriate levels of documentation, testing, and assurance around solutions
Job Responsibility
  • Design, build, monitor, and support large scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to innovate and support commercial and technical growth
  • Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers

Senior Data Engineer

Come work on fantastically high-scale systems with us! Blis is an award-winning,...
Location: India, Mumbai
Salary: Not provided
Blis
Expiration Date: Until further notice
Requirements
  • 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and native-supporting technologies (e.g., Apache Airflow)
  • Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e., streaming data, relational and non-relational databases, and distributed processing technologies (e.g., Spark)
  • Fluency with the Python libraries typical of data science, e.g., pandas, scikit-learn, scipy, numpy, MLlib, and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • Professional approach to managing and reporting on tasks, ensuring appropriate levels of documentation, testing, and assurance around solutions
Job Responsibility
  • Design, build, monitor, and support large scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to innovate and support commercial and technical growth
  • Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers
Employment Type: Full-time