We build simple yet innovative consumer products and developer APIs that shape how everybody interacts with money and the financial system. The main goal of the Data Engineering (DE) team in 2024-25 is to build robust golden datasets to power our business goal of creating more insights-based products.

Making data-driven decisions is key to Plaid's culture. To support that, we need to scale our data systems while keeping our data correct and complete. We provide tooling and guidance to teams across engineering, product, and business, and help them explore our data quickly and safely to get the insights they need, which ultimately helps Plaid serve our customers more effectively.

Data Engineers heavily leverage SQL and Python to build data workflows. We use tools like DBT, Airflow, Redshift, ElasticSearch, Atlanta, and Retool to orchestrate data pipelines and define workflows. We work with engineers, product managers, business intelligence, data analysts, and many other teams to build Plaid's data strategy and a data-first mindset. Our engineering culture is IC-driven -- we favor bottom-up ideation and empowerment of our incredibly talented team. We are looking for engineers who are motivated by creating impact for our consumers and customers, growing together as a team, shipping the MVP, and leaving things better than we found them.

You will be in a high-impact role that directly enables business leaders to make faster and better-informed business judgments based on the datasets you build. You will have the opportunity to carve out ownership and scope for internal datasets and visualizations across Plaid, a currently unowned area that we intend to take over and build SLAs around. You will also have the opportunity to learn best practices and level up your technical skills from our strong DE team and the broader Data Platform team.
You will collaborate closely and build strong cross-functional partnerships with teams across Plaid, from Engineering to Product to Marketing and Finance.
Job Responsibilities:
Understanding different aspects of the Plaid product and strategy to inform golden dataset choices, design, and data-usage principles
Keeping data quality and performance top of mind while designing datasets
Leading key data engineering projects that drive collaboration across the company
Advocating for adopting industry tools and practices at the right time
Owning core SQL and Python data pipelines that power our data lake and data warehouse
Delivering well-documented datasets with defined quality, uptime, and usefulness standards
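The "defined quality and uptime" responsibility above can be sketched as a simple quality gate a pipeline runs before publishing a dataset. This is an illustrative stdlib-only example, not Plaid's actual checks; the field names and the 24-hour freshness threshold are assumptions.

```python
from datetime import datetime, timedelta, timezone

def passes_quality_gate(rows, required_fields, max_age_hours=24):
    """Return True if every row has the required fields populated
    and the newest row is fresh enough to publish."""
    if not rows:
        return False  # an empty dataset never passes
    for row in rows:
        # completeness check: no required field may be missing or null
        if any(row.get(field) is None for field in required_fields):
            return False
    # freshness check: the newest load must be within the allowed age
    newest = max(row["loaded_at"] for row in rows)
    return datetime.now(timezone.utc) - newest <= timedelta(hours=max_age_hours)

now = datetime.now(timezone.utc)
good = [{"user_id": 1, "amount": 5.0, "loaded_at": now}]
stale = [{"user_id": 1, "amount": 5.0, "loaded_at": now - timedelta(days=3)}]
print(passes_quality_gate(good, ["user_id", "amount"]))   # True
print(passes_quality_gate(stale, ["user_id", "amount"]))  # False
```

In practice such checks would run as a pipeline step (e.g., an Airflow task or a DBT test) that blocks downstream consumers when a gate fails.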
Requirements:
4+ years of dedicated data engineering experience solving complex data pipeline issues at scale
Experience building data models and data pipelines on top of large datasets (on the order of 500 TB to petabytes)
Value SQL as a flexible and extensible tool, and are comfortable with modern SQL data orchestration tools like DBT, Mode, and Airflow
Experience working with performant data warehouses and data lakes such as Redshift, Snowflake, and Databricks
Experience building and maintaining batch and real-time pipelines using technologies like Spark and Kafka
Appreciate the importance of schema design, and can evolve an analytics schema on top of unstructured data
Excited to try out new technologies, and like to produce proofs of concept that balance technical advancement with user experience and adoption
Like to get deep into the weeds to manage, deploy, and improve low-level data infrastructure
Empathetic when working with stakeholders: you listen to them, ask the right questions, and collaboratively arrive at the best solutions for their needs while balancing infrastructure and business constraints
Champion data privacy and integrity, and always act in the best interest of consumers
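The requirement above about evolving an analytics schema on top of unstructured data can be illustrated with a small sketch: additively merging the fields of semi-structured JSON events into a running schema. The event shapes and the first-seen-type policy are assumptions made for this example, not a description of Plaid's internals.

```python
import json

def evolve_schema(schema, record):
    """Merge a record's fields into the running schema additively:
    new fields are added; existing fields keep their first-seen type."""
    for key, value in record.items():
        schema.setdefault(key, type(value).__name__)
    return schema

# Hypothetical semi-structured events; the second one introduces a new field.
events = [
    json.loads('{"user_id": 1, "event": "link"}'),
    json.loads('{"user_id": 2, "event": "auth", "institution": "bank_a"}'),
]
schema = {}
for event in events:
    evolve_schema(schema, event)
print(schema)  # {'user_id': 'int', 'event': 'str', 'institution': 'str'}
```

Additive-only evolution is a common choice because it never breaks existing readers: old queries keep working while new fields become queryable as they appear.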