We are seeking a Staff Data Engineer to architect and lead our entire data infrastructure strategy: a rare opportunity to be the most senior technical leader for data, shaping how data flows through our organization. In this role, you won't just build systems; you'll define the vision, set the technical direction, and establish the standards that will guide our data practice for years to come.
Job Responsibilities:
Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent); a minimal sketch of such a workflow follows this list
Champion data quality, documentation, and observability to ensure high trust in data across the organization
Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
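To make the orchestration responsibility concrete, here is a minimal sketch of an Airflow DAG that runs a daily dbt build and then tests it. It assumes Airflow 2.4+ (for the schedule parameter); the DAG id, schedule, and project path are hypothetical placeholders, not part of this role's actual stack.

    # Hypothetical sketch: a daily dbt build orchestrated by Airflow.
    # Assumes Airflow 2.4+; dag_id, path, and schedule are placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_dbt_build",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Build the dbt models against the warehouse profile
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt_project && dbt run",
        )
        # Run dbt tests only after the build succeeds
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt_project && dbt test",
        )
        dbt_run >> dbt_test

Running tests as a downstream task, rather than inside the build step, keeps failed data-quality checks visible as a distinct failure in the orchestrator.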
Requirements:
7-10 years of experience in Data Engineering
Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
Proficiency in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns; see the sketch after this list
Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
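As an illustration of the streaming requirement, below is a minimal sketch of an event consumer using the kafka-python library. The topic name, broker address, consumer group, and event fields are hypothetical placeholders.

    # Hypothetical sketch: consuming JSON events with kafka-python.
    # Topic, servers, group id, and event schema are placeholders.
    import json

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "page_views",                          # hypothetical topic
        bootstrap_servers=["localhost:9092"],
        group_id="analytics-loader",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # A real pipeline would batch these and load them into the warehouse
        print(event["user_id"], event["url"])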
What we offer:
Employer-paid health insurance
401k match with immediate vesting
Generous and flexible time off with 2 company-wide closure weeks