Senior Principal Data Engineer

Atlassian
https://www.atlassian.com

Location:
San Francisco, United States

Category:
IT - Software Development

Contract Type:
Employment contract

Salary:
Not provided

Job Description:

Atlassian is looking for a Senior Principal Data Engineer to join the Go-To Market Data Engineering (GTM-DE) team to build data lakes, maintain big data pipelines and services, and facilitate the movement of billions of messages daily. You will work directly with business stakeholders and platform/engineering teams to enable growth and retention strategies, and use your technical expertise to manage and orchestrate multi-petabyte-scale data lakes and build scalable, efficient data pipelines and services.

Job Responsibility:

  • Help stakeholder teams ingest data into our data lake faster
  • Find ways to make data pipelines more efficient
  • Come up with ideas to help drive self-serve data engineering within the company
  • Build micro-services and architect, design, and enable self-serve capabilities at scale to help Atlassian grow

Requirements:

  • 18+ years of experience in a Data Engineer role as an individual contributor
  • At least 7 years of experience as a tech lead for Data Engineering teams, including delivery of complex, cross-team initiatives
  • Durable relationships with executives and senior leaders across Sales, Marketing, Finance, Commerce, and related organizations, and an understanding of the complexities of data in those organizations
  • A track record of driving and delivering large, complex, multi-team efforts
  • Strong communication skills and the ability to maintain the essential cross-team and cross-functional relationships necessary for the team's success
  • Experience building streaming pipelines with a micro-services architecture for low-latency analytics
  • Experience with varied forms of data infrastructure, including relational databases (e.g. SQL), Spark, dbt, and column stores (e.g. Redshift)
  • Experience building scalable Spark data pipelines with Airflow (or a similar scheduler/executor framework)
  • Experience working in technical environments with current technologies such as AWS data services (Redshift, Athena, EMR) or comparable Apache projects (Spark, Flink, Hive, or Kafka)
  • Understanding of Data Engineering tools, frameworks, and standards that improve the productivity and quality of output for Data Engineers across the team
  • Industry experience with large-scale, high-performance data processing systems (batch and streaming) and a 'Streaming First' mindset to drive Atlassian's business growth and improve the product experience

Nice to have:

  • Built and designed Kappa or Lambda architecture data platforms and services
  • Experience implementing Master Data Management (MDM) and Customer Relationship Management (CRM) solutions
  • Built pipelines using Databricks and well-versed in its APIs
  • Contributed to open source projects
  • Well-versed in Python and Spark, and in scaling data pipelines

What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources

Additional Information:

Job Posted:
April 30, 2025

Employment Type:
Full-time
Work Type:
Remote work