Brightwheel is seeking a Staff Data Engineer to serve as a technical lead on our Data Engineering team. As a Staff Data Engineer at brightwheel, you will architect and drive the evolution of our data platform, partnering with technical leadership to shape our data and AI strategy. You will design and scale sophisticated data pipelines that process billions of records across diverse systems, powering analytics for internal teams, customer-facing insights, and the AI/ML capabilities that differentiate our product.
Job Responsibilities:
Architect and lead the evolution of our modern data platform, driving technical decisions on tooling, infrastructure patterns, and scalability strategies
Design and build production LLM pipelines and infrastructure that power intelligent operations
Own end-to-end data acquisition and integration architecture across diverse sources
Create shared abstractions and tooling for AI
Shape our data and system architecture so AI can safely stitch together longitudinal signals across product, billing, support, and operations
Lead by example in AI-augmented engineering, using AI to multiply your own speed, mentoring L2/L3 engineers, and raising the bar for how we design, ship, and operate AI-powered features
Mentor and influence engineering culture, conducting design reviews, providing technical guidance to engineers across the organization, and championing data platform adoption and best practices
Requirements:
6+ years of work experience as a data, backend, full-stack, or DevOps engineer, with strong proficiency in Python and modern data engineering practices
Applied AI impact at scale: proven track record of shipping AI/LLM-powered features into production with clear, measurable impact on key metrics
Hands-on experience with large language models (LLMs) in real applications, including prompt and tool design, retrieval-style patterns, and evaluation and monitoring in production
Strong computer science fundamentals (e.g., data structures, algorithms, and systems design) and a generalist mindset
Experience designing, developing, and deploying ML/LLM/AI pipelines in production environments, including experience with model serving, feature engineering, and MLOps practices
Expert-level understanding of distributed data processing technologies and their internals
Proven track record of independently architecting scalable data solutions, from requirements gathering and technical design through implementation and cost optimization
Nice to have:
Proven track record of technical leadership, including mentoring senior engineers, driving engineering standards and best practices, and influencing data platform strategy across the organization
Hands-on experience architecting federated query engines (DuckDB, Trino, Presto, Starburst) over lakehouse platforms, including catalog integration, query optimization strategies, and cost-effective compute scaling patterns
Deep expertise building orchestration platforms with Airflow (or similar), including custom operators, dynamic DAG generation, and framework-level optimizations for complex dependency management
Advanced experience with serverless and event-driven architectures, including designing systems that leverage AWS Lambda, Step Functions, EventBridge, or Databricks workflows for cost-efficient, auto-scaling data processing
Experience building customer-facing embedded analytics solutions (Cube.js, Metabase, Superset, or similar) with complex data modeling, access control, and performance optimization
What we offer:
Comprehensive medical, dental, and vision coverage
Generous paid parental leave
Flexible PTO
Local retirement or savings plans (e.g., 401(k) in the U.S.)