Fullstory Anywhere is one of Fullstory's three primary product verticals, and it's growing fast. We put Fullstory's rich digital experience data directly into customers' hands: in their warehouses, in their AI workflows, and in the tools their teams already use.

As our Senior Software Engineer, Data Infrastructure & AI, you will report to the Senior Engineering Manager for the Fullstory Anywhere team and help build the systems that transform, move, and activate billions of data points at massive scale so that customers can unlock insights in their own environments and build intelligent agents on top of real user behavior. You will design and optimize pipelines that process 30 billion+ records per day across customer warehouses, collaborate with product and ML engineers to define how LLM-powered customer agents evaluate and act on Fullstory data, and make architectural decisions that balance throughput, cost, and reliability across a product vertical with accelerating revenue and adoption.

To excel in this role, you must be comfortable owning large, ambiguous technical problems end-to-end, from initial design through production health, and know how to build data-intensive systems that stay reliable as they scale.
Job Responsibilities:
Maintain, extend, and scale Go microservices that transform and deliver Fullstory session data into customer warehouses and power the team's MCP server that enables AI agent integrations.
Develop and maintain dbt models and pipeline orchestration to ensure timely, fault-tolerant data migrations across hundreds of customer destinations.
Define evaluation frameworks for LLM outputs using tools like LangSmith and Vertex AI, ensuring AI-powered customer agents produce accurate, useful results.
Investigate and resolve production incidents across the data pipeline, implementing systemic fixes that prevent entire classes of failure from recurring.
Write technical design documents that drive consensus on architectural changes, proactively surfacing scaling bottlenecks, edge cases, and cross-team dependencies.
Demonstrate sound technical judgment by de-risking work through spikes, taking on tech debt deliberately, and knowing when to escalate versus dig in.
Requirements:
Significant experience building and operating high-throughput data pipelines (batch and/or streaming) in a major cloud platform, including work with cloud data warehouses like BigQuery, Snowflake, or Databricks.
Proficiency in Go, Python, Java, or a similar language.
Hands-on experience with data transformation tooling such as dbt, with a strong understanding of data modeling and pipeline observability.
Familiarity with LLM integration patterns and evaluation approaches (e.g., LangSmith, Vertex AI, or comparable frameworks), or demonstrated ability to ramp quickly in applied AI.
A track record of owning major system areas end-to-end: driving architectural decisions, maintaining production health, and improving reliability over time.
What we offer:
Flexibility and Connection
Flexible PTO policy
Annual company-wide closure
Benefits
Paid parental leave
Bereavement leave, including miscarriage/pregnancy loss