
Data Scientist, Financial Engineering


OpenAI

Location:
San Francisco, United States

Contract Type:
Not provided

Salary:
230,000.00 - 385,000.00 USD / Year

Job Description:

OpenAI’s Financial Engineering (FinEng) team powers how revenue flows through our products—pricing & packaging, checkout, payments, subscriptions, and the financial infrastructure behind them. We partner with Product, Engineering, Risk, Finance, and Go-to-Market to make paying for OpenAI products seamless, reliable, and efficient worldwide. As a Data Scientist on FinEng, you’ll own the analytics and experimentation that improve our checkout and payments, subscriptions, and pricing & monetization systems. You’ll define the metrics that matter, build the source-of-truth data assets, and design experiments that increase conversion, reduce churn and payment failures, and expand global payment method coverage. Your work will directly influence revenue, customer experience, and how we scale internationally.

Job Responsibility:

  • Own checkout & payments analytics and experimentation across methods and locales (e.g., bank transfers, emerging rails), improving conversion while monitoring risk and latency
  • Build and run the experimentation program for in-house checkout—define success metrics and guardrails, execute staged rollouts, and use offline incrementality when online tests aren’t feasible
  • Create operational visibility and source-of-truth data with FinEng Data Engineering—land team-level metrics, SLAs, and self-serve dashboards that drive proactive action
  • Lead subscription, retention, and monetization analytics—ship launch-readiness for new subscription features, reduce involuntary churn (e.g., targeted retrials/nudges), and develop elasticity/FX frameworks toward pricing optimality

Requirements:

  • 5+ years in a quantitative role (data science, product analytics, or experimentation) in high-growth or fintech environments
  • Fluency in SQL and Python, with a track record designing and interpreting A/B tests and quasi-experiments
  • Experience building product metrics from scratch and operationalizing them for decision-making
  • Excellent communication skills with PMs, engineers, risk/finance partners, and executives
  • Strategic instincts beyond significance tests—clear thinking about tradeoffs (conversion vs. risk vs. cost vs. user experience)
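
The A/B testing fluency above can be illustrated with a minimal two-proportion z-test for a checkout conversion experiment (a sketch with invented data, using only the standard library):

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_*: converted checkouts per arm; n_*: sessions per arm.
    Returns (lift, z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical data: control vs. a new payment-method ordering.
lift, z, p = two_proportion_ztest(4_120, 50_000, 4_390, 50_000)
```

In practice the significance test is only the start; guardrail metrics (risk, latency, refunds) decide whether a winning variant actually ships.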

Nice to have:

  • Payments, checkout, or subscription analytics experience (PSPs, bank rails, disputes/refunds, risk, e-commerce)
  • Background in offline incrementality methods, uplift modeling, CUPED/causal inference, or counterfactual evaluation
  • Experience with internationalization/local payments, FX, and pricing & packaging strategy
  • Comfort building operational analytics (alerting, SLIs/SLOs) and partnering closely with data engineering
What we offer:
  • Medical, dental, and vision insurance for you and your family, with employer contributions to Health Savings Accounts
  • Pre-tax accounts for Health FSA, Dependent Care FSA, and commuter expenses (parking and transit)
  • 401(k) retirement plan with employer match
  • Paid parental leave (up to 24 weeks for birth parents and 20 weeks for non-birthing parents), plus paid medical and caregiver leave (up to 8 weeks)
  • Paid time off: flexible PTO for exempt employees and up to 15 days annually for non-exempt employees
  • 13+ paid company holidays, and multiple paid coordinated company office closures throughout the year for focus and recharge, plus paid sick and safe time (1 hour per 30 hours worked)
  • Mental health and wellness support
  • Employer-paid basic life and disability coverage
  • Annual learning and development stipend to fuel your professional growth
  • Daily meals in our offices, and meal delivery credits as eligible
  • Relocation support for eligible employees
  • Additional taxable fringe benefits, such as charitable donation matching and wellness stipends

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Data Scientist, Financial Engineering

Senior Software Engineer

We are seeking a highly skilled senior software engineer to join our team. This ...
Rearc
Location: Bengaluru, India
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • Deep expertise in Java: Proven proficiency in designing, developing, and optimizing high-performance, multithreaded Java applications
  • Comprehensive SDLC Experience: Extensive experience with the entire software development lifecycle, including requirements gathering, architectural design, coding, testing (unit, integration, performance), deployment, and maintenance
  • Data Engineering & Financial Data Processing: Proven experience in data engineering, including building, maintaining, and optimizing complex data pipelines for real-time and historical financial stock market data
  • Financial Market Acumen: A strong background in the financial industry, with a working knowledge of financial instruments, market data (e.g., tick data, OHLC), and common financial models
  • Problem-Solving & Adaptability: Excellent problem-solving skills and the ability to work with complex and evolving requirements
  • Collaboration & Communication: Superior communication skills, capable of collaborating effectively with quantitative analysts, data scientists, and business stakeholders
  • Testing & CI/CD: A strong ability to work on development and all forms of testing, with working knowledge of CI/CD pipelines and deployments
  • Database Proficiency: Experience with various database technologies (SQL and NoSQL) and the ability to design database schemas for efficient storage and retrieval of financial data
Job Responsibility:
  • Design and Development: Architect, build, and maintain robust, scalable, and low-latency Java applications for processing real-time and historical financial stock market data
  • Data Pipeline Engineering: Engineer and manage sophisticated data pipelines using modern data technologies to ensure timely and accurate data availability for analytics and trading systems
  • Performance Optimization: Profile and optimize applications for maximum speed, scalability, and efficiency
  • System Integration: Integrate data from various financial market sources and ensure seamless data flow into downstream systems
  • Mentorship and Best Practices: Provide guidance and mentorship to other engineers, contribute to code reviews, and advocate for best practices
  • Operational Excellence: Participate in the full software development lifecycle, from initial design to production support, ensuring system reliability and performance

Senior Python Data Scientist

The Senior Python Data Scientist role at Citi involves developing and implementi...
Citi
Location: London, United Kingdom
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate
  • Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (matplotlib, plotly, streamlit, Flask, etc.)
  • Experience developing and implementing quantitative models from data in a financial context
  • Proficiency working with version control systems such as Git, and familiarity with Linux computing environments
  • Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc.
  • Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools
  • Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences
  • Ideally, some experience working with CI/CD pipelines and containerization technologies like Docker and Kubernetes
  • Ideally, some familiarity with data workflow management tools such as Airflow as well as big data technologies such as Apache Spark/Ignite or other caching and analytics technologies
  • A working knowledge of FX markets and financial instruments would be beneficial.
Job Responsibility:
  • Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business
  • Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability
  • Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology
  • Develop and integrate functionality to ensure adherence with best-practices in terms of data management, need-to-know (NTK), and data governance
  • Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders.
What we offer:
  • 27 days annual leave (plus bank holidays)
  • A discretional annual performance related bonus
  • Private Medical Care & Life Insurance
  • Employee Assistance Program
  • Pension Plan
  • Paid Parental Leave
  • Special discounts for employees, family, and friends
  • Access to an array of learning and development resources.
Employment Type: Full-time

Data Engineer

This is a data engineer position - a programmer responsible for the design, deve...
Citi
Location: Chennai, India
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 5-8 years of experience working in data ecosystems
  • 4-5 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 3+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing 'big data' data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
Job Responsibility:
  • Ensuring high quality software development, with complete documentation and traceability
  • Develop and optimize scalable Spark Java-based data pipelines for processing and analyzing large scale financial data
  • Design and implement distributed computing solutions for risk modeling, pricing and regulatory compliance
  • Ensure efficient data storage and retrieval using Big Data
  • Implement best practices for spark performance tuning including partition, caching and memory management
  • Maintain high code quality through testing, CI/CD pipelines and version control (Git, Jenkins)
  • Work on batch processing frameworks for Market risk analytics
  • Promoting unit/functional testing and code inspection processes
  • Work with business stakeholders and Business Analysts to understand the requirements
  • Work with other data scientists to understand and interpret complex datasets
Employment Type: Full-time

GenAI Model Risk Data Scientist

The Model Risk Data Scientist position is for a professional who would like to m...
Citi
Location: Jersey City; New York, United States
Salary: 142,320.00 - 213,480.00 USD / Year
Expiration Date: Until further notice

Requirements:
  • Master's or advanced degree in quantitative fields such as Mathematics, Statistics, Financial Engineering, Quantitative Finance, Computer Science, Data Science, etc.
  • AI/ML development experience in a Data Scientist role or similar
  • NLP development experience in a Data Scientist role or similar
  • Knowledge of AI risk, safety, and ethics principles, and hands-on experience with model validation in a financial institution
  • Experience in GenAI Model validation or AI/ML Model validation
  • Experience in a quantitative role in the Financial Markets/Banking/Insurance or experience in Risk capacity at a financial services / insurance institution
  • Strong organizational and project management skills
  • Risk and Controls mindset
Job Responsibility:
  • Assist development teams with implementing new GenAI solutions from identification through validation phases
  • Collaborate with developers and business stakeholders on streamlining the adoption of GenAI within Citi
  • Own and maintain GenAI solution book of work for eligible GenAI use-cases
  • Assist with preparations for internal discussions and initial submission for model validation
  • Own relationships with governance-related stakeholders
  • Monitor and maintain GenAI solution inventory data and lifecycle
  • Educate development teams on the model validation process
  • Own tracking and modeling tools
What we offer:
  • medical, dental & vision coverage
  • 401(k)
  • life, accident, and disability insurance
  • wellness programs
  • paid time off packages including vacation, sick leave, and paid holidays
  • discretionary and formulaic incentive and retention awards
Employment Type: Full-time

Data Engineering Lead

The Data Engineering Lead is a strategic professional who stays abreast of developments...
Citi
Location: Pune, India
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data
Employment Type: Full-time

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...
Citi
Location: Singapore
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), Scala, and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations
What we offer:
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits
Employment Type: Full-time

Data Engineer, Solutions Architecture

We are seeking a talented Data Engineer to design, build, and maintain our data ...
Clearway Energy
Location: Scottsdale, United States
Salary: 90,000.00 - 120,000.00 USD / Year
Expiration Date: Until further notice

Requirements:
  • 2-4 years of hands-on data engineering experience in production environments
  • Bachelor's degree in Computer Science, Engineering, or a related field
  • Proficiency in Dagster or Airflow for pipeline scheduling, dependency management, and workflow automation
  • Advanced-level Snowflake administration, including virtual warehouses, clustering, security, and cost optimization
  • Proficiency in dbt for data modeling, testing, documentation, and version control of analytical transformations
  • Strong Python and SQL skills for data processing and automation
  • 1-2+ years of experience with continuous integration and continuous deployment practices and tools (Git, GitHub Actions, GitLab CI, or similar)
  • Advanced SQL skills, database design principles, and experience with multiple database platforms
  • Proficiency in AWS/Azure/GCP data services, storage solutions (S3, Azure Blob, GCS), and infrastructure as code
  • Experience with APIs, streaming platforms (Kafka, Kinesis), and various data connectors and formats
Job Responsibility:
  • Design, deploy, and maintain scalable data infrastructure to support enterprise analytics and reporting needs
  • Manage Snowflake instances, including performance tuning, security configuration, and capacity planning for growing data volumes
  • Optimize query performance and resource utilization to control costs and improve processing speed
  • Build and orchestrate complex ETL/ELT workflows using Dagster to ensure reliable, automated data processing for asset management and energy trading
  • Develop robust data pipelines that handle high-volume, time-sensitive energy market data and asset generation and performance metrics
  • Implement workflow automation and dependency management for critical business operations
  • Develop and maintain dbt models to transform raw data into business-ready analytical datasets and dimensional models
  • Create efficient SQL-based transformations for complex energy market calculations and asset performance metrics
  • Support advanced analytics initiatives through proper data preparation and feature engineering
  • Implement comprehensive data validation, testing, and monitoring frameworks to ensure accuracy and consistency across all energy and financial data assets
What we offer:
  • generous PTO
  • medical, dental & vision care
  • HSAs with company contributions
  • health FSAs
  • dependent daycare FSAs
  • commuter benefits
  • relocation
  • a 401(k) plan with employer match
  • a variety of life & accident insurances
  • fertility programs
Employment Type: Full-time

Data Scientist

The FX Data Analytics & AI Technology team, within Citi's FX Technology organiza...
Citi
Location: Pune, India
Salary: Not provided
Expiration Date: Until further notice

Requirements:
  • 12 to 18 years of experience
  • Master’s degree or above (or equivalent education) in a STEM discipline
  • Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate
  • Excellent Python programming skills, including experience with relevant analytical and machine learning libraries (e.g., pandas, polars, numpy, sklearn, TensorFlow/Keras, PyTorch, etc.), in addition to visualization and API libraries (matplotlib, plotly, streamlit, Flask, etc.)
  • Understanding of GenAI models, vector databases, and agents, with awareness of market trends
  • Experience developing and implementing quantitative models from data in a financial context
  • Proficiency working with version control systems such as Git, and familiarity with Linux computing environments
  • Experience working with different database and messaging technologies such as SQL, KDB, MongoDB, Kafka, etc
  • Familiarity with data visualization and ideally development of analytical dashboards using Python and BI tools
  • Excellent communication skills, both written and verbal, with the ability to convey complex information clearly and concisely to technical and non-technical audiences
Job Responsibility:
  • Design, develop and implement quantitative models to derive insights from large and complex FX datasets, with a focus on understanding market trends and client behavior, identifying revenue opportunities, and optimizing the FX business
  • Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability
  • Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology
  • Develop and integrate functionality to ensure adherence with best-practices in terms of data management, need-to-know (NTK), and data governance
  • Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders
What we offer:
  • Global Benefits
  • We bring the best to our people. We put our employees first and provide the best-in-class benefits they need to be well, live well and save well
Employment Type: Full-time