From Fivetran’s founding until now, our mission has remained the same: to make access to data as simple and reliable as electricity. With Fivetran, our customers’ data arrives in their warehouses, canonical and ready to query, with no engineering or maintenance required. We’re proud that more organizations continue to leverage our technology every day to become truly data-driven.

About the Role:

We’re looking for a Senior Software Engineer to build and evolve core capabilities for Fivetran’s Datalakes platform. In this role, you will work on systems that enable reliable, scalable, and high-performance movement of data into datalake ecosystems, while helping ensure correctness, operability, and long-term maintainability. The work is diverse and hands-on, ranging from building new platform capabilities and improving existing services to solving reliability, scale, and performance challenges in distributed systems.

You will be responsible for driving technical excellence for your team and the services it owns by contributing code, reviewing designs and implementations, shaping architectural decisions, and mentoring junior engineers. As a Senior Software Engineer on the Datalakes team, you will help build and strengthen the platform foundations needed to support datalake- and catalog-oriented workflows. The ideal candidate is technically strong, pragmatic, collaborative, and motivated by solving complex engineering problems at scale.

This is a full-time position based out of our Bangalore office. Our hybrid work model offers a blend of remote flexibility and in-person collaboration, including two days in the office each week to connect and build as a team.
Job Responsibilities:
Build and evolve backend services and platform capabilities for Fivetran’s Datalakes ecosystem
Design, develop, and maintain reliable distributed systems that move and process large volumes of data at scale
Work on core data pipeline capabilities that enable clean, incremental, and automated data movement for customers
Build foundational services and engineering patterns for the Datalakes team, with a strong focus on scalability, correctness, reliability, and operability
Review requirements, technical designs, and implementation plans to provide meaningful engineering feedback
Improve system performance, resilience, and debuggability across services and workflows
Contribute to architectural design and make thoughtful technical decisions that balance speed, quality, and maintainability
Collaborate closely with cross-functional teams to deliver robust and customer-facing platform capabilities
Use modern engineering tools, including basic AI-assisted tooling where appropriate, to improve development productivity and debugging workflows
Requirements:
Strong technical and problem-solving skills, with recent hands-on software development experience
Experience building reliable distributed systems, with an emphasis on high-volume data processing within enterprise and/or web-scale products operating under strict SLAs
Broad technical knowledge across software development, system design, and automation
Strong understanding of algorithms, data structures, and software design fundamentals
Expertise in Java, cloud platforms such as AWS or GCP, cloud-based APIs, databases, and programming best practices
Experience building, operating, and debugging backend services in production environments
Strong ownership mindset, communication skills, and technical leadership abilities
Ability to create and contribute to an environment geared toward innovation, high productivity, high quality, and strong customer focus
Basic familiarity with AI-assisted developer tools and productivity workflows
Ability to use AI tools effectively for tasks such as debugging, documentation, code exploration, or log analysis with appropriate validation of outputs
Good judgment in verifying generated results and maintaining quality, reliability, and safety in engineering workflows
Interest in using modern tooling to improve engineering productivity and effectiveness
Exposure to datalake technologies and concepts such as object storage, file-based table formats, metadata management, and large-scale data processing workflows
Nice to have:
Familiarity with catalogs, schema evolution, partitioning, and data consistency
Understanding of common datalake reliability challenges such as duplicate processing, late-arriving data, schema drift, and incremental data handling
Experience building or supporting integrations with datalake and catalog ecosystems
What we offer:
100% employer-paid medical insurance
Generous paid time-off policy (PTO), plus paid sick time, inclusive parental leave policy, holidays, and volunteer days off
RSU stock grants
Professional development and training opportunities
Company virtual happy hours, free food, and fun team-building activities
Monthly cell phone stipend
Access to an innovative mental health support platform that offers personalized care and resources in areas such as therapy, coaching, and self-guided mindfulness exercises for all covered employees and their covered dependents