We are looking for a Senior Solutions Engineer who blends deep technical understanding of AI/ML infrastructure with excellent communication and solution-building skills. In this role, you will serve as a trusted advisor to prospective customers, design partners, and strategic accounts—bridging the gap between cutting-edge AI engineering and real-world business use cases. This role is ideal for someone who thrives at the intersection of technical depth and customer interaction, and who enjoys crafting solutions, demos, and integrations that showcase LanceDB’s strengths in production environments.
Job Responsibilities:
Serve as the technical lead in pre-sales conversations, partnering with account executives to scope solutions and articulate the value of LanceDB for customer-specific workflows
Lead technical discovery and architecture design sessions with prospects across verticals including AI infra, LLM ops, and multimodal data pipelines
Build and deliver custom demos and proofs of concept to highlight how LanceDB solves challenging RAG, vector search, and feature engineering problems (a brief sketch of such a demo follows this list)
Act as the bridge between customer pain points and our engineering/product teams—informing roadmap priorities with real-world feedback
Partner closely with design partners and early adopters to ensure successful onboarding and expansion
Champion a superior developer experience with a sharp focus on documentation, SDK ergonomics, and integration workflows
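
To give a flavor of the proof-of-concept work mentioned above, here is a minimal sketch of a vector search demo using the LanceDB Python SDK. The database path, table name, sample records, and query vector are illustrative placeholders, not part of this posting.

    import lancedb

    # Connect to a local LanceDB database (directory path is illustrative)
    db = lancedb.connect("./poc-lancedb")

    # Create a small table with toy embeddings and metadata (placeholder data)
    table = db.create_table(
        "products",
        data=[
            {"vector": [3.1, 4.1], "item": "foo", "price": 10.0},
            {"vector": [5.9, 26.5], "item": "bar", "price": 20.0},
        ],
    )

    # Run a nearest-neighbor search against a query vector and return the top match
    results = table.search([3.0, 4.0]).limit(1).to_pandas()
    print(results)

A real customer demo would swap the toy records for embeddings produced by the customer's model and layer retrieval into their RAG or feature pipeline, but the core search flow looks much like this sketch.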
Requirements:
Thrive in a fast-paced, startup environment and enjoy working with high-caliber teams
5+ years of experience in a Sales Engineer, Solutions Engineer, ML Engineer, or AI Infrastructure role, supporting AI/ML products or platforms
Strong knowledge of AI/ML frameworks like PyTorch or TensorFlow, and how they integrate with infrastructure for model training, fine-tuning, and inference
Hands-on experience working with distributed systems such as Ray, Spark, or Kubernetes
Familiarity with cloud services (AWS, GCP, Azure) including compute and storage (e.g., EC2, GKE, S3)
Confident communicating with both technical and non-technical stakeholders, and able to translate complex infrastructure into actionable solutions
Must be based in the San Francisco Bay Area and be willing to travel to customer sites as needed
Nice to have:
Experience building or supporting feature engineering workflows or vector search pipelines
Experience with feature stores (e.g., Feast, Tecton) or designing custom ML feature pipelines
Experience in observability and monitoring (Prometheus, Grafana, ELK/EFK)
Familiar with open-source data/streaming frameworks such as Apache Spark, Flink, Delta Lake, Kafka, or Airflow
Deep Python skills, or a curiosity about Rust
Comfortable creating technical content, workshops, or presenting at meetups/conferences
Experience deploying ML infrastructure in customer environments using tools like Terraform, Docker, and CI/CD pipelines
Experience supporting enterprise customers or working in a customer-facing technical capacity
What we offer:
Equity
Commission
Medical, dental, vision, and life insurance
401(k) retirement plan
Flexible Spending Accounts (FSA) and Health Savings Accounts (HSA)