Product Engineer, Post Batch

Helpcare AI

Location:
San Francisco, United States

Contract Type:
Employment contract

Salary:

185000.00 - 300000.00 USD / Year

Job Description:

Y Combinator is looking for a Product Engineer to help scale the impact of its post-batch resources, including the YC-only hiring platform (workatastartup.com), event planning for alumni founders, and internal software platform. You will work with a small software team, have autonomy, and contribute to a monolith codebase. The team is knee-deep in applying AI tools across product areas.

Job Responsibility:

  • Design and build new features end to end, with a focus on workatastartup.com (hiring) and internal software
  • Scope and prioritize items on our post-batch roadmap alongside YC partners and others on the YC software team
  • Analyze usage and engagement from founders, job-seekers and other users to understand what’s working/not working

Requirements:

  • 3+ years of experience
  • JavaScript
  • React
  • SCSS
  • TypeScript
  • Ruby on Rails
  • SQL
  • Amazon Web Services (AWS)
  • US citizen/visa only
  • Ability to design and build new features
  • Ability to scope and prioritize roadmap items
  • Ability to analyze usage and engagement
  • Experience writing impressive software
  • Love building full stack web apps
  • Pragmatic builder excited about shipping and impact
  • Experience in product + engineering roles
  • Optimistic about technology's future
  • Good judgment and the ability to make trade-offs
  • Trustworthiness with sensitive information

Nice to have:

  • Former founder or founding engineer
  • Experience with Rails, React and Postgres
  • Experience talking to customers

What we offer:
  • Carry in the YC fund
  • Medical, vision, and dental plans
  • Infertility benefit
  • STD/LTD
  • Life insurance
  • Commuter benefits
  • Flexible spending account
  • Health savings account
  • 401(k) + 4% matching
  • Generous parental leave
  • Paid holidays
  • Flexible paid time off policy
  • Employer willing to sponsor certain employment visas

Additional Information:

Job Posted:
December 31, 2025

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Product Engineer, Post Batch

Senior Machine Learning Engineer

As a Senior Machine Learning Engineer, you will take end-to-end ownership of the...
Location:
Canada
Salary:
128000.00 - 160000.00 CAD / Year
FreshBooks
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in data science, applied ML, or ML engineering roles
  • Strong background in supervised and unsupervised learning, statistical modeling, and experimentation techniques
  • Proven experience developing and shipping ML models in production environments (batch or real-time)
  • Strong Python and SQL skills; comfort working with structured and unstructured data
  • Hands-on experience building and deploying ML or LLM-based systems (e.g. retrieval-augmented generation, embeddings, prompt tuning)
  • Familiarity with cloud infrastructure and ML tools, ideally on Google Cloud Platform (e.g. Vertex AI, BigQuery, Cloud Composer, Kubernetes)
  • Experience working with CI/CD pipelines, containerization (Docker), and job orchestration tools (Airflow, dbt, etc.)
  • Deep understanding of end-to-end ML operations including model observability, model drift detection, and model performance optimization
  • Strong communication skills and ability to explain technical concepts to non-technical stakeholders
Job Responsibility:
  • Design, prototype, and validate machine learning models to power product features or internal tools
  • Own and lead all phases of the ML lifecycle from experimentation through to production deployment and model monitoring
  • Collaborate with Data Engineers and Product Engineers to integrate models into production infrastructure (batch and online serving)
  • Develop and prototype features for the shared feature store, including documentation, versioning, and consistency validation
  • Author high-quality, production-ready code with appropriate tests, observability, and monitoring hooks
  • Design experiments (e.g. A/B tests, pre-post analyses) and interpret results to guide product and business decisions
  • Design and build end-to-end pipelines for classification, ranking, embeddings, or generation tasks
  • Drive reliability practices in deployed models, including retraining logic, alerting on drift, and root cause analysis
  • Work closely with product and engineering stakeholders to align ML work with business priorities
  • Contribute to standards and documentation, mentor junior team members, and help shape our evolving ML platform
Employment Type: Full-time

Data Integration Engineer

We are hiring a Senior Batch/Data Integration Engineer to design and implement b...
Location:
Philadelphia, United States
Salary:
55.00 - 64.00 USD / Hour
Beacon Hill
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience designing and supporting enterprise batch processing systems
  • Strong experience with configurable, parameter‑driven job execution and large‑scale file delivery
  • Proven ability to manage dependencies across multiple data platforms and teams
Job Responsibility:
  • Enhance batch processing frameworks to support configurable data inclusion, conditional execution, and parameter‑driven output
  • Implement and maintain job parameters, control frameworks, and reusable batch components to support multiple downstream consumers
  • Design and implement output layout changes, file naming conventions, and delivery schedules in coordination with implementation teams
  • Collaborate with upstream and downstream engineering teams to ensure end‑to‑end integration and data consistency
  • Support production readiness, operational validation, and post‑deployment stability of batch workloads

Staff Software Engineer, Backend (AI Platform)

Cresta is on a mission to turn every customer conversation into a competitive ad...
Location:
United States
Salary:
Not provided
Cresta
Expiration Date:
Until further notice
Requirements:
  • 5+ years writing production software
  • 2+ years focused on ML platform or infra
  • Expert Python (async, typing, packaging, performance)
  • Working Golang knowledge for systems components
  • Proven experience with one or more serving frameworks (e.g., vLLM, Triton, TorchServe)
  • Kubernetes and cloud-native ops
  • Solid grasp of distributed systems, networking, and container security
  • Culture of rigorous testing, code review, and continuous delivery
Job Responsibility:
  • Own model serving: Design, build, and maintain low-latency, highly-available serving stacks for in-house ML model serving and integrating with LLM serving partners
  • Automate training pipelines: Orchestrate data prep, training, evaluation, and registry workflows on Kubernetes with solid MLOps practices
  • Optimize at scale: Profile and tune throughput, memory, and cost; introduce caching, sharding, batching, and GPU/CPU autoscaling where it pays off
  • Build platform primitives: Create reusable SDKs, templates, and CLI tools that let research and product teams ship models independently and safely
  • Raise the bar: Instrument deep observability (tracing, metrics, alerts), drive blameless post-mortems, and mentor engineers on production ML best practices
What we offer:
  • Comprehensive medical, dental, and vision coverage with plans to fit you and your family
  • Flexible PTO to take the time you need, when you need it
  • Paid parental leave for all new parents welcoming a new child
  • Retirement savings plan to help you plan for the future
  • Remote work setup budget to help you create a productive home office
  • Monthly wellness and communication stipend to keep you connected and balanced
  • In-office meal program and commuter benefits provided for onsite employees

Senior Machine Learning Engineer, LLMs and Clinical NLP

We are on a mission to ensure everyone has access to medical expertise, no matte...
Location:
København, Denmark
Salary:
Not provided
Life Science Talent
Expiration Date:
Until further notice
Requirements:
  • Strong programming skills in Python and the ability to contribute to production-grade codebases
  • Hands-on experience with LLMs for NLP or text generation, including at least some of the following: Training, fine-tuning, or post-training transformer-based models
  • Building or operating LLM inference services in production, including performance work
  • Designing robust evaluations for generative systems, including metrics, error analysis, and human evaluation methods
  • Experience turning research outcomes into practical systems that can be validated and shipped
  • Familiarity with building ML systems beyond notebooks, such as data pipelines, CI/CD practices, monitoring, and deployment workflows
  • Clear communication and collaboration skills across research, engineering, and product
  • A Master’s degree in computer science, engineering, mathematics, statistics, physics, or a related field, or equivalent professional experience
Job Responsibility:
  • Build and improve LLM-based clinical NLP systems, including summarization, structured extraction, and controlled generation
  • Train, finetune, and post-train LLMs using approaches such as supervised finetuning and preference or feedback-driven optimization where appropriate
  • Design evaluation strategies for clinical text generation, including offline benchmarks, human review workflows, slice-based analysis, and quality gates aligned with clinical risk
  • Develop and operate LLM inference services using vLLM, with focus on reliability, scalability, and practical performance
  • Optimize inference for latency, throughput, and cost, for example batching, caching, quantization, and decoding strategy improvements
  • Build and maintain APIs and services using FastAPI, and deploy and run them on Kubernetes
  • Take technical ownership of core NLP components, shaping best practices for model development, evaluation, and production reliability across the team, and supporting the growth of engineers working on text generation systems
  • Partner with researchers, engineers, and product teams to ship improvements end-to-end, including observability and monitoring to support continuous iteration
What we offer:
  • Equipment provided by Corti
Employment Type: Full-time

Senior Staff Product Manager — Apache Iceberg & Open Table Formats

The Senior Staff Product Manager for Apache Iceberg & Open Table Formats will ow...
Location:
United States
Salary:
Not provided
Teradata
Expiration Date:
Until further notice
Requirements:
  • 12+ years of product management experience in data infrastructure, databases, data platforms, or analytics, with a track record of shipping platform-level capabilities
  • Deep familiarity with Apache Iceberg or comparable open table formats (Apache Hudi, Delta Lake), including understanding of metadata design, catalog architecture, and query optimization
  • Demonstrated experience in open-source community engagement—contributing to or leading initiatives within Apache Software Foundation projects or similar open-source ecosystems
  • Strong technical fluency in lakehouse architecture, including concepts like ACID transactions on object storage, schema evolution, partition evolution, snapshot isolation, and compute-storage separation
  • Proven ability to lead cross-functional teams across engineering, developer relations, and go-to-market functions
  • Hands-on experience developing agentic AI systems and successfully bringing agent-driven solutions from concept to market
  • Foundational AI skills and the ability to understand how AI can be applied to improve outcomes in your area of expertise
Job Responsibility:
  • Serve as Teradata’s primary representative and advocate within the Apache Iceberg open-source community, building trust and credibility with contributors, committers, and the Apache Software Foundation (ASF)
  • Develop and execute a developer community strategy that grows Teradata’s mindshare among data engineers, lakehouse architects, and open-source contributors working with Apache Iceberg
  • Build and nurture relationships with the Apache Software Foundation, including participation in Iceberg PMC discussions, contributing to project governance, and representing Teradata’s interests in community roadmap decisions
  • Drive external thought leadership through conference talks (e.g., ApacheCon, Data + AI Summit, Subsurface), blog posts, technical papers, and social media engagement on Iceberg-related topics
  • Collaborate with partners and ecosystem vendors (e.g., cloud providers, compute engine vendors, catalog providers) to ensure Teradata’s Iceberg implementation is interoperable and well-positioned in the broader lakehouse ecosystem
  • Create and maintain developer-facing content including tutorials, reference architectures, and best-practice guides for using Iceberg with Teradata
  • Own the product roadmap for Iceberg integration within the Teradata platform, covering Iceberg read/write operations, catalog interoperability, metadata management, schema evolution, and partition optimization
  • Partner closely with database engineering teams to identify and drive performance improvements for Iceberg workloads, including query planning, predicate pushdown, data skipping, compaction, and table maintenance operations
  • Define product requirements for Iceberg-native capabilities such as time travel, snapshot isolation, branching and tagging, and hidden partitioning within the Teradata ecosystem
  • Conduct competitive analysis of Iceberg implementations across the industry (e.g., Snowflake, Databricks, Dremio, Cloudera) and translate insights into prioritized product investments
What we offer:
  • We prioritize a people-first culture
  • We embrace a flexible work model
  • We focus on well-being
  • We are an anti-racist company
  • We foster an equitable environment that celebrates people for all of who they are
Employment Type: Full-time

Senior Software Engineer, ML Platform

We’re looking for a software engineer to join Parafin’s Infrastructure team and ...
Location:
San Francisco, United States
Salary:
230000.00 - 265000.00 USD / Year
Parafin
Expiration Date:
Until further notice
Requirements:
  • 5+ years of software engineering experience, including experience on ML platform/MLOps systems (training, deployment, and/or feature pipelines)
  • Strong Python; solid software design and testing fundamentals
  • Proficiency with SQL; hands-on Spark/PySpark experience
  • Knowledge of ML fundamentals—probability & statistics, supervised vs. unsupervised learning, bias/variance & regularization, feature engineering, model evaluation metrics, validation strategies, and production concerns like drift, stability, and monitoring
  • Expertise with modern data/ML stacks—AWS, Databricks (workflows, lakehouse, MLflow/registry, Model Serving), and Airflow (or equivalent orchestration)
  • Experience building real-time systems (service design, caching, rate limiting, backpressure) and batch pipelines at scale
  • Practical knowledge of feature-store concepts (offline/online stores, backfills, point-in-time correctness), model registries, experiment tracking, and evaluation frameworks
  • Strong problem-solving skills and a proactive attitude toward ownership and platform health
Job Responsibility:
  • Turn notebooks into software: Decompose data scientist training/inference notebooks into reusable, tested components (libraries, pipelines, templates) with clear interfaces and documentation
  • Create developer-friendly ML abstractions: Build SDKs, CLIs, and templates that make it simple to define features, train/evaluate models, and deploy to batch or real-time targets with minimal boilerplate
  • Build our real-time ML inference platform: Stand up and scale low-latency model serving
  • Expand batch ML inference: Improve scheduling, parallelism, cost controls, observability, and failure/rollback for large-scale batch scoring and post-processing
  • Own and expand the feature store: Design offline/online feature definitions, high read/write throughput, and consistent offline/online semantics
What we offer:
  • Equity grant
  • Medical, dental & vision insurance
  • Work from home flexibility
  • Unlimited PTO
  • Commuter benefits
  • Free lunches
  • Paid parental leave
  • 401(k)
  • Employee assistance program
Employment Type: Full-time

Senior Software Engineer

Collaborate with backend, product, and quality engineering teams to build and de...
Location:
Oakland, United States
Salary:
213600.00 - 215000.00 USD / Year
SiriusXM
Expiration Date:
Until further notice
Requirements:
  • Master’s degree in Computer Science, Data Science or Engineering (Computer or Electrical), plus two years of experience in the position offered or as Software Developer or Software/Systems Engineer; or a Bachelor’s degree in Computer Science, Data Science or Engineering (Computer or Electrical), plus five years of progressive post-Bachelor’s experience in the position offered or as Software Developer or Software/Systems Engineer
  • contributing to detailed project design and ensuring alignment with project objectives and client requirements across all phases of the Software Development Life Cycle (SDLC)
  • communicating with business stakeholders and refining their requests into technical specifications
  • developing back-end applications with SOAP architecture
  • applying system design and object-oriented design concepts
  • programming in Java and working with SQL distributed databases
  • applying continuous integration and testing into the development process
  • developing robust web service modules within Enterprise applications using RAD (Rapid Application Development), leveraging industry-leading tools and methodologies
  • implementing database functions, procedures, and triggers, enhancing data management capabilities and ensuring optimal performance
Job Responsibility:
  • Collaborate with backend, product, and quality engineering teams to build and deliver data products for stakeholders
  • Work with large, complex datasets and translate functional business requirements into scalable, high-quality engineering solutions
  • Design, implement, and optimize data pipelines to support both real-time and batch processing workflows
  • Develop and maintain high-performance, distributed databases to support analytics and data applications
  • Evaluate and integrate emerging technologies and tools that enhance data engineering efficiency and system reliability
  • Create and maintain technical documentation covering architecture, data lineage, and engineering standards
  • Lead root-cause analysis and resolution of data issues, driving long-term fixes and preventative measures
  • Actively participate in sprint planning, code reviews, and technical design discussions to ensure quality and architectural alignment
Employment Type: Full-time

GTM Engineer – CRM & Integrations

This is the CRM and integrations engineering role on our GTM Engineering team. A...
Location:
Gurugram, India
Salary:
Not provided
Coralogix
Expiration Date:
Until further notice
Requirements:
  • 5–8+ years of hands-on Salesforce development experience in Sales Cloud and complex multi-cloud environments
  • CPQ implementation experience strongly preferred (Nue.io, Salesforce CPQ, DealHub, or similar); willingness to develop deep Nue.io expertise rapidly is a must
  • Strong production Apex experience: trigger frameworks, separation-of-concerns patterns, bulkification, governor limit management
  • Experience building LWC components and front-end Salesforce interfaces
  • Hands-on REST/SOAP integration experience including Named Credentials, OAuth, and middleware orchestration
  • Strong understanding of Salesforce data model, sharing model, and security architecture
  • Experience with SFDX, Git-based workflows, and CI/CD deployment pipelines (Copado, Gearset, or GitHub Actions)
  • Experience working with Salesforce APIs (Bulk API, Streaming API, Metadata API) for cross-system integrations
  • Exposure to AI-assisted development tools and automation orchestration platforms a plus
Job Responsibility:
  • Implement Nue.io CPQ as the primary builder — configuring and engineering product catalog, pricing rules, discount frameworks, quoting workflows, and approval processes
  • Build Nue.io integrations with Salesforce CRM, DocuSign CLM, billing systems, and finance/ERP stack
  • Develop quoting automation, contract generation, and order management workflows
  • Partner with Finance and Revenue Operations to encode pricing logic and deal structure rules into the CPQ system
  • Own post-go-live CPQ maintenance and iteration as the revenue model evolves
  • Implement scalable Salesforce automation and custom logic aligned with architectural standards set by the Salesforce Architect
  • Write production-grade Apex code including trigger frameworks, batch jobs, queueable processes, and platform events
  • Build and maintain Lightning Web Components (LWC) for internal GTM tool interfaces
  • Design and maintain complex object models, data relationships, permission structures, and sharing models
  • Support sandbox management, structured deployments, and CI/CD processes
Employment Type: Full-time