Senior Software Engineer, Data Infrastructure & AI

Fullstory

Location:
United States, Atlanta

Contract Type:
Employment contract

Salary:

160000.00 - 170000.00 USD / Year

Job Description:

Fullstory Anywhere is one of Fullstory's three primary product verticals, and it's growing fast. We put Fullstory's rich digital experience data directly into customers' hands: in their warehouses, in their AI workflows, and in the tools their teams already use.

As our Senior Software Engineer, Data Infrastructure & AI, you will report to the Senior Engineering Manager for the Fullstory Anywhere team and help build the systems that transform, move, and activate billions of data points at massive scale so that customers can unlock insights in their own environments and build intelligent agents on top of real user behavior. You will design and optimize pipelines that process 30 billion+ records per day across customer warehouses, collaborate with product and ML engineers to define how LLM-powered customer agents evaluate and act on Fullstory data, and make architectural decisions that balance throughput, cost, and reliability across a product vertical with accelerating revenue and adoption.

To excel in this job, you must be comfortable owning large, ambiguous technical problems end-to-end, from initial design through production health, and know how to build data-intensive systems that stay reliable as they scale.

Job Responsibility:

  • Maintain, extend, and scale Go microservices that transform and deliver Fullstory session data into customer warehouses and power the team's MCP server that enables AI agent integrations.
  • Develop and maintain dbt models and pipeline orchestration to ensure timely, fault-tolerant data migrations across hundreds of customer destinations.
  • Define evaluation frameworks for LLM outputs using tools like Langsmith and Vertex AI, ensuring AI-powered customer agents produce accurate, useful results.
  • Investigate and resolve production incidents across the data pipeline, implementing systemic fixes that prevent entire classes of failure from recurring.
  • Write technical design documents that drive consensus on architectural changes, proactively surfacing scaling bottlenecks, edge cases, and cross-team dependencies.
  • Demonstrate sound technical judgment by de-risking work through spikes, taking on tech debt deliberately, and knowing when to escalate versus dig in.

Requirements:

  • Significant experience building and operating high-throughput data pipelines (batch and/or streaming) in a major cloud platform, including work with cloud data warehouses like BigQuery, Snowflake, or Databricks.
  • Proficiency in Go, Python, Java or a similar language.
  • Hands-on experience with data transformation tooling such as dbt, with a strong understanding of data modeling and pipeline observability.
  • Familiarity with LLM integration patterns and evaluation approaches (e.g., LangSmith, Vertex AI, or comparable frameworks), or demonstrated ability to ramp quickly in applied AI.
  • A track record of owning major system areas end-to-end: driving architectural decisions, maintaining production health, and improving reliability over time.

What we offer:

  • Flexibility and Connection: flexible PTO policy, annual company-wide closure
  • Benefits: paid parental leave; bereavement leave, including miscarriage/pregnancy loss
  • Learning opportunities: annual learning subsidy
  • Productivity support: monthly productivity stipend
  • Team Collaboration: team off-sites, annual full-company meet-up

Additional Information:

Job Posted:
May 04, 2026

Employment Type:
Fulltime
Work Type:
Hybrid work

Similar Jobs for Senior Software Engineer, Data Infrastructure & AI

Senior Software Engineer - AI

Senior Software Engineer role focused on AI and data-driven systems to transform...
Location: Sweden, Malmö
Salary: Not provided
Company: IKEA
Expiration Date: Until further notice
Requirements
  • Software development principles
  • Programming language skills
  • Experience with Python (object-oriented)
  • Experience with REST-based frameworks like FastAPI
  • Frontend development skills
  • Cloud platform experience (Azure preferred)
  • Infrastructure-as-code experience (Terraform)
  • GitHub Actions for automation
  • Testing and quality focus
  • Experience with SSO, permissions, and access control
Job Responsibility
  • Design and develop cloud-based products
  • Build and evolve global application using AI and data
  • Enrich content with meaningful metadata
  • Create solutions for presenting and managing product information
  • Collaborate with cross-functional Agile team
  • Implement digital solutions for omnichannel content
Employment Type: Fulltime

Senior Software Engineer, Observability

The Observability team at Airtable ensures that engineers have the tools they ne...
Location: United States, San Francisco; New York; Seattle
Salary: 196000.00 - 270000.00 USD / Year
Company: Airtable
Expiration Date: Until further notice
Requirements
  • 6+ years of software engineering experience
  • 3+ years focused on observability or infrastructure at scale
  • Demonstrated success implementing and running production-grade logging, metrics, or tracing systems
  • Proficiency in distributed systems concepts, data streaming pipelines, and container orchestration (Kubernetes)
  • Deep hands-on knowledge of tools such as Prometheus, Grafana, Datadog, OpenTelemetry, ELK Stack, Loki, or ClickHouse
  • Comfort with at least one programming language (e.g., Go, Python, Java) to build and maintain observability tooling
  • Experience mentoring engineers and collaborating across multiple teams
  • Strong communication skills
  • Eagerness to own high-impact initiatives
  • Proven ability to balance short-term fixes with long-term strategic vision
Job Responsibility
  • Architect and scale core observability systems
  • Lead the design and evolution of logging, metrics, and tracing pipelines
  • Evaluate and integrate new technologies (e.g., OpenTelemetry, ClickHouse, ELK stack)
  • Guide and mentor a growing team of infrastructure engineers
  • Define and uphold coding standards and operational excellence
  • Partner with Deploy Infrastructure, Service Orchestration, and Product teams
  • Align infrastructure decisions with business goals
  • Own end-to-end reliability for observability tools and establish SLAs, SLOs, and error budgets
  • Optimize performance and cost of large-scale data pipelines
  • Shape the observability roadmap
What we offer
  • Opportunity to receive benefits
  • Restricted stock units
  • May include incentive compensation
  • Comprehensive benefit offerings
Employment Type: Fulltime

Senior AI Engineer

As a Senior AI Engineer on our AI Engineering team, you will be responsible for ...
Location: India
Salary: Not provided
Company: Apollo.io
Expiration Date: Until further notice
Requirements
  • 7+ years of software engineering experience with a focus on production systems
  • 1.5+ years of hands-on LLM experience (2023-present) building real applications with GPT, Claude, Llama, or other modern LLMs
  • Demonstrated experience building customer-facing, scalable LLM-powered products with real user usage (not just POCs or internal tools)
  • Experience building multi-step AI agents, LLM chaining, and complex workflow automation
  • Deep understanding of prompting strategies, few-shot learning, chain-of-thought reasoning, and prompt optimization techniques
  • Expert-level Python skills for production AI systems
  • Strong experience building scalable backend systems, APIs, and distributed architectures
  • Experience with LangChain, LlamaIndex, or other LLM application frameworks
  • Proven ability to integrate multiple APIs and services to create advanced AI capabilities
  • Experience deploying and managing AI models in cloud environments (AWS, GCP, Azure)
Job Responsibility
  • Design and Deploy Production LLM Systems: Build scalable, reliable AI systems that serve millions of users with high availability and performance requirements
  • Agent Development: Create sophisticated AI agents that can chain multiple LLM calls, integrate with external APIs, and maintain state across complex workflows
  • Prompt Engineering Excellence: Develop and optimize prompting strategies, understand trade-offs between prompt engineering vs fine-tuning, and implement advanced prompting techniques
  • System Integration: Build robust APIs and integrate AI capabilities with existing Apollo infrastructure and external services
  • Evaluation & Quality Assurance: Implement comprehensive evaluation frameworks, A/B testing, and monitoring systems to ensure AI systems meet accuracy, safety, and reliability standards
  • Performance Optimization: Optimize for cost, latency, and scalability across different LLM providers and deployment scenarios
  • Cross-functional Collaboration: Work closely with product teams, backend engineers, and stakeholders to translate business requirements into technical AI solutions
What we offer
  • Invest deeply in your growth, ensuring you have the resources, support, and autonomy to own your role and make a real impact
  • Collaboration is at our core—we’re all for one, meaning you’ll have a team across departments ready to help you succeed
  • We encourage bold ideas and courageous action, giving you the freedom to experiment, take smart risks, and drive big wins

Senior AI Engineer

As a Senior AI Engineer on our AI Engineering team, you will be responsible for ...
Location: Canada; United States
Salary: 160000.00 - 260000.00 USD / Year
Company: Apollo.io
Expiration Date: Until further notice
Requirements
  • 8+ years of software engineering experience with a focus on production systems
  • 1.5+ years of hands-on LLM experience (2023-present) building real applications with GPT, Claude, Llama, or other modern LLMs
  • Production LLM Applications: Demonstrated experience building customer-facing, scalable LLM-powered products with real user usage (not just POCs or internal tools)
  • Agent Development: Experience building multi-step AI agents, LLM chaining, and complex workflow automation
  • Prompt Engineering Expertise: Deep understanding of prompting strategies, few-shot learning, chain-of-thought reasoning, and prompt optimization techniques
  • Python Proficiency: Expert-level Python skills for production AI systems
  • Backend Engineering: Strong experience building scalable backend systems, APIs, and distributed architectures
  • LangChain or Similar Frameworks: Experience with LangChain, LlamaIndex, or other LLM application frameworks
  • API Integration: Proven ability to integrate multiple APIs and services to create advanced AI capabilities
  • Production Deployment: Experience deploying and managing AI models in cloud environments (AWS, GCP, Azure)
Job Responsibility
  • Design and Deploy Production LLM Systems: Build scalable, reliable AI systems that serve millions of users with high availability and performance requirements
  • Agent Development: Create sophisticated AI agents that can chain multiple LLM calls, integrate with external APIs, and maintain state across complex workflows
  • Prompt Engineering Excellence: Develop and optimize prompting strategies, understand trade-offs between prompt engineering vs fine-tuning, and implement advanced prompting techniques
  • System Integration: Build robust APIs and integrate AI capabilities with existing Apollo infrastructure and external services
  • Evaluation & Quality Assurance: Implement comprehensive evaluation frameworks, A/B testing, and monitoring systems to ensure AI systems meet accuracy, safety, and reliability standards
  • Performance Optimization: Optimize for cost, latency, and scalability across different LLM providers and deployment scenarios
  • Cross-functional Collaboration: Work closely with product teams, backend engineers, and stakeholders to translate business requirements into technical AI solutions
What we offer
  • equity
  • company bonus or sales commissions/bonuses
  • 401(k) plan
  • at least 10 paid holidays per year, flex PTO, and parental leave
  • employee assistance program and wellbeing benefits
  • global travel coverage
  • life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits
Employment Type: Fulltime

Senior Data Engineer

As a Senior Software Engineer, you will play a key role in designing and buildin...
Location: United States
Salary: 156000.00 - 195000.00 USD / Year
Company: Apollo.io
Expiration Date: Until further notice
Requirements
  • 5+ years experience in platform engineering, data engineering or in a data facing role
  • Experience in building data applications
  • Deep knowledge of data eco system with an ability to collaborate cross-functionally
  • Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
  • Excellent communication skills
  • Self-motivated and self-directed
  • Inquisitive, able to ask questions and dig deeper
  • Organized, diligent, and great attention to detail
  • Acts with the utmost integrity
  • Genuinely curious and open
Job Responsibility
  • Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
  • Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows
  • Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
  • Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
  • Monitor and ensure the health, quality, and reliability of data flows and platform services
  • Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
  • Continuously evaluate and integrate new tools and technologies to improve platform capabilities
  • Contribute to architectural discussions, code reviews, and cross-functional projects
  • Document your work, champion best practices, and help level up the team through knowledge sharing
What we offer
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year
  • Flex PTO
  • Parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits
Employment Type: Fulltime

Senior Data Engineer

The Data Engineer is responsible for designing, building, and maintaining robust...
Location: Germany, Berlin
Salary: Not provided
Company: ib vogt GmbH
Expiration Date: Until further notice
Requirements
  • Degree in Computer Science, Data Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Experience in renewable energy, engineering, or asset-heavy industries is a plus
  • Strong experience with modern data stack (e.g., PowerPlatform, Azure Data Factory, Databricks, Airflow, dbt, Synapse, Snowflake, BigQuery, etc.)
  • Proficiency in Python and SQL for data transformation and automation
  • Experience with APIs, message queues (Kafka, Event Hub), data streaming and knowledge of data lakehouse and data warehouse architectures
  • Familiarity with CI/CD pipelines, DevOps practices, and containerization (Docker, Kubernetes)
  • Understanding of cloud environments (preferably Microsoft Azure, PowerPlatform)
  • Strong analytical mindset and problem-solving attitude paired with a structured, detail-oriented, and documentation-driven work style
  • Team-oriented approach and excellent communication skills in English
Job Responsibility
  • Design, implement, and maintain efficient ETL/ELT data pipelines connecting internal systems (M365, Sharepoint, ERP, CRM, SCADA, O&M, etc.) and external data sources
  • Integrate structured and unstructured data from multiple sources into the central data lake / warehouse / Dataverse
  • Build data models and transformation workflows to support analytics, reporting, and AI/ML use cases
  • Implement data quality checks, validation rules, and metadata management according to the company’s data governance framework
  • Automate workflows, optimize performance, and ensure scalability of data pipelines and processing infrastructure
  • Work closely with Data Scientists, Software Engineers, and Domain Experts to deliver reliable datasets for Digital Twin and AI applications
  • Maintain clear documentation of data flows, schemas, and operational processes
What we offer
  • Competitive remuneration and motivating benefits
  • Opportunity to shape the data foundation of ib vogt’s digital transformation journey
  • Work on cutting-edge data platforms supporting real-world renewable energy assets
  • A truly international working environment with colleagues from all over the world
  • An open-minded, collaborative, dynamic, and highly motivated team
Employment Type: Fulltime

Senior Software Engineer, Forward Deployed

As a Senior Software Engineer, Forward Deployed Engineer (FDE) you'll work direc...
Location: United States, Austin; New York; San Francisco Bay Area; Washington DC–Baltimore
Salary: 165000.00 - 266000.00 USD / Year
Company: Invisible Technologies
Expiration Date: Until further notice
Requirements
  • 6+ years of software engineering experience, including significant time spent building data, ML, or backend systems
  • Deep proficiency in Python with hands-on experience using Hugging Face, LangChain, OpenAI, Pinecone, and related ecosystems
  • Skilled in full-stack and API-based deployment patterns, including Docker, FastAPI, Kubernetes, and cloud environments (GCP, AWS)
  • Experienced with workflow orchestration libraries, pub/sub systems (Kafka), and schema governance
  • Expertise in data governance and operations, including Unity Catalog and policy management, cluster/job orchestration, data contracts and quality enforcement, Delta/ETL pipelines, and replay processes
  • Strong product and system design instincts — you understand business needs and how to translate them into technical architecture
  • Experience building usable systems from messy data and ambiguous requirements
  • Excellent communication and client-facing skills: you’ve led conversations with technical and non-technical stakeholders alike
  • Proven experience owning projects from scoping through deployment in ambiguous, high-stakes environments
Job Responsibility
  • Collaborate with delivery leaders to scope technical solutions to operational problems
  • Identify workflow optimizations through deep engagement with customer problems and work to build into a stable and scalable solution
  • Design and implement AI-powered workflows using LLMs, embedding models, retrieval systems, and automation tools
  • Translate messy real-world constraints (e.g., inconsistent data, latency requirements) into elegant engineering solutions
  • Iterate quickly based on real-time feedback from operators and clients
  • Build reusable tooling and infrastructure that accelerates future deployments
What we offer
  • Bonuses and equity are included in offers above entry level
Employment Type: Fulltime

Senior Data Engineer

Kiddom is redefining how technology powers learning. We combine world-class curr...
Location: United States, San Francisco
Salary: 150000.00 - 220000.00 USD / Year
Company: Kiddom
Expiration Date: Until further notice
Requirements
  • 3+ years of experience as a data engineer
  • 8+ years of software engineering experience (including data engineering)
  • Proven experience as a Data Engineer or in a similar role with strong data modeling, architecture, and design skills
  • Strong understanding of data engineering principles including infrastructure deployment, governance and security
  • Experience with MySQL, Snowflake, and Cassandra, and familiarity with graph databases (Neptune or Neo4j)
  • Proficiency in SQL and Python (Golang a plus)
  • Proficient with AWS offerings such as AWS Glue, EKS, ECS and Lambda
  • Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders
  • Strong understanding of PII compliance and best practices in data handling and storage
  • Strong problem-solving skills, with a knack for optimizing performance and ensuring data integrity and accuracy
Job Responsibility
  • Design, implement, and maintain the organization’s data infrastructure, ensuring it meets business requirements and technical standards
  • Deploy data pipelines to AWS infrastructure such as EKS, ECS, Lambdas and AWS Glue
  • Develop and deploy data pipelines to clean and transform data to support other engineering teams, analytics and AI applications
  • Extract and deploy reusable features to Feature stores such as Feast or equivalent
  • Evaluate and select appropriate database technologies, tools, and platforms, both on-premises and in the cloud
  • Monitor data systems and troubleshoot issues related to data quality, performance, and integrity
  • Work closely with other departments, including Product, Engineering, and Analytics, to understand and cater to their data needs
  • Define and document data workflows, pipelines, and transformation processes for clear understanding and knowledge sharing
What we offer
  • Meaningful equity
  • Health insurance benefits: medical (various PPO/HMO/HSA plans), dental, vision, disability and life insurance
  • One Medical membership (in participating locations)
  • Flexible vacation time policy (subject to internal approval); average use is 4 weeks off per year
  • 10 paid sick days per year (pro rated depending on start date)
  • Paid holidays
  • Paid bereavement leave
  • Paid family leave after birth/adoption. Minimum of 16 paid weeks for birthing parents, 10 weeks for caretaker parents. Meant to supplement benefits offered by State
  • Commuter and FSA plans
Employment Type: Fulltime