Senior Data Engineer

Relatient

Location:
India, Pune

Contract Type:
Not provided

Salary:
Not provided

Job Description:

At Relatient, we’re on a mission to simplify access to care – intelligently. As the leader in intelligent scheduling and patient engagement, we help healthcare organizations connect with patients more effectively through AI-powered workflows, real-time automation, and flexible access tools. More than 47,000 providers trust us to manage over 150 million appointments annually, reducing delays in care, streamlining contact center operations, and helping patients get the care they need, when they need it. We’ve been recognized by Forbes, Deloitte, and Inc. 5000 for our growth, innovation, and inclusive culture.

Your Role at Relatient

This role is responsible for architecting, building, and optimizing the data ecosystem across Relatient, partnering closely with Product Management and Engineering teams to ensure high-quality, reliable data. The position leads the design and implementation of modern data warehouse architectures, establishes data modeling standards, and develops scalable ETL/ELT pipelines that integrate data from diverse sources. The engineer will own the end-to-end performance, reliability, and scalability of data systems: optimizing SQL, tuning schemas, overseeing data quality and lineage, and implementing robust security, backup, and disaster-recovery strategies.

Job Responsibility:

  • Architect, design, and implement robust end-to-end data warehouse (DW) solutions using modern technologies (e.g. Postgres or on-prem solutions)
  • Define data modeling standards (dimensional and normalized) and build ETL/ELT pipelines for efficient data flow and transformation
  • Integrate data from multiple sources (ERP, CRM, APIs, flat files, real-time streams)
  • Develop and maintain scalable and reliable data ingestion, transformation, and storage pipelines
  • Ensure data quality, consistency, and lineage across all data systems
  • Analyze and tune SQL queries, schemas, indexes, and ETL processes to maximize database and warehouse performance
  • Monitor data systems and optimize storage costs and query response times
  • Implement high availability, backup, disaster recovery, and data security strategies
  • Collaborate with DevOps and Infrastructure teams to ensure optimal deployment, scaling, and performance of DW environments
  • Work closely with Data Scientists, Analysts, and Business Teams to translate business needs into technical data solutions
  • Provide strategic recommendations on data architecture, technology adoption, and process improvement
  • Identify bottlenecks in current data architecture and propose innovative solutions to improve efficiency, scalability, and maintainability
  • Stay up to date with emerging data technologies, cloud solutions, and best practices in data engineering
  • Perform other duties as assigned

Requirements:

  • Bachelor's degree (B.E./B.Tech in computer engineering or similar) or equivalent work experience in lieu of a degree is required; Master’s degree preferred
  • 7+ years of experience in database engineering, data warehousing, or data architecture
  • Proven expertise with at least one major data warehouse platform (e.g. Postgres, Snowflake, Redshift, BigQuery)
  • Strong SQL and ETL/ELT development skills
  • Deep understanding of data modeling
  • Experience with cloud data ecosystems (AWS)
  • Hands-on experience with orchestration tools and version control (Git)
  • Experience in data governance, security, and compliance best practices
  • Experience building/generating analytical reports using Power BI

What we offer:

  • INR 5,00,000/- of life insurance coverage for all full-time employees and their immediate family
  • INR 15,00,000/- of group accident insurance
  • Education reimbursement
  • 10 national and state holidays, plus 1 floating holiday
  • Flexible working hours and a hybrid policy

Additional Information:

Job Posted:
December 11, 2025

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for Senior Data Engineer

Senior Data Engineer

As a Senior Software Engineer, you will play a key role in designing and buildin...
Location: United States
Salary: 156000.00 - 195000.00 USD / Year
Apollo.io
Expiration Date: Until further notice
Requirements:
  • 5+ years of experience in platform engineering, data engineering, or a data-facing role
  • Experience in building data applications
  • Deep knowledge of the data ecosystem with an ability to collaborate cross-functionally
  • Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
  • Excellent communication skills
  • Self-motivated and self-directed
  • Inquisitive, able to ask questions and dig deeper
  • Organized, diligent, and great attention to detail
  • Acts with the utmost integrity
  • Genuinely curious and open
Job Responsibility:
  • Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
  • Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows
  • Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
  • Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
  • Monitor and ensure the health, quality, and reliability of data flows and platform services
  • Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
  • Continuously evaluate and integrate new tools and technologies to improve platform capabilities
  • Contribute to architectural discussions, code reviews, and cross-functional projects
  • Document your work, champion best practices, and help level up the team through knowledge sharing
What we offer:
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year
  • Flex PTO
  • Parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits
Employment Type: Full-time

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location: India, Hyderabad
Salary: Not provided
Appen
Expiration Date: Until further notice
Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python, and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
Employment Type: Full-time

Senior Manager, Data Engineering

You will build a team of talented engineers that will work cross functionally to...
Location: United States, San Jose
Salary: 240840.00 - 307600.00 USD / Year
Archer Aviation
Expiration Date: Until further notice
Requirements:
  • 6+ years of experience in a similar role, 2 of which are in a data leadership role
  • B.S. in a quantitative discipline such as Computer Science, Computer Engineering, Electrical Engineering, Mathematics, or a related field
  • Expertise with data engineering disciplines including data warehousing, database management, ETL processes, and ML model deployment
  • Experience with processing and storing telemetry data
  • Demonstrated experience with data governance standards and practices
  • 3+ years leading teams, including building and recruiting data engineering teams supporting diverse stakeholders
  • Experience with cloud-based data platforms such as AWS, GCP, or Azure
Job Responsibility:
  • Lead and continue to build a world-class team of engineers by providing technical guidance and mentorship
  • Design and implement scalable data infrastructure to ingest, process, store, and access multiple data sources supporting flight test, manufacturing and supply chain, and airline operations
  • Take ownership of data infrastructure to enable a highly scalable and cost-effective solution serving the needs of various business units
  • Build and support the development of novel tools to enable insight and decision making with teams across the organization
  • Evolve data engineering and AI strategy to align with the short- and long-term priorities of the organization
  • Help to establish a strong culture of data that is used throughout the company and industry
  • Lead initiatives to integrate AI capabilities in new and existing tools
Employment Type: Full-time

Senior Data Engineer

Within a dynamic, high-level team, you will contribute to both R&D and client pr...
Location: France, Paris
Salary: Not provided
Artelys
Expiration Date: Until further notice
Requirements:
  • Degree from a top engineering school or a high-level university program
  • At least 3 years of experience in designing and developing data-driven solutions with high business impact, particularly in industrial or large-scale environments
  • Excellent command of Python for both application development and data processing, with strong expertise in libraries such as Pandas, Polars, NumPy, and the broader Python Data ecosystem
  • Experience implementing data processing pipelines using tools like Apache Airflow, Databricks, Dask, or flow orchestrators integrated into production environments
  • Experience contributing to large-scale projects combining data analysis, workflow orchestration, back-end development (REST APIs and/or messaging), and industrialisation, within a DevOps/DevSecOps-oriented framework
  • Proficient in using Docker for processing encapsulation and deployment
  • Experience with Kubernetes for orchestrating workloads in cloud-native architectures
  • Motivated by practical applications of data in socially valuable sectors such as energy, mobility, or health, and thriving in environments where autonomy, rigour, curiosity, and teamwork are valued
  • Fluency in English and French is required
Job Responsibility:
  • Design and develop innovative and high-performance software solutions addressing industrial challenges, primarily using the Python language and a microservices architecture
  • Gather user and business needs to design data collection and storage solutions best suited to the presented use cases
  • Develop technical solutions for data collection, cleaning, and processing, then industrialise and automate them
  • Contribute to setting up technical architectures based on Data or even Big Data environments
  • Carry out development work aimed at industrialising and orchestrating computations (statistical and optimisation models) and participate in software testing and qualification
What we offer:
  • Up to 2 days of remote work per week possible
  • Flexible working hours
  • Offices in the city center of each city where we operate
Employment Type: Full-time

Senior Data Engineer

We're seeking an experienced Senior Data Engineer to help shape the future of he...
Location: Germany, Berlin
Salary: Not provided
Audibene GmbH
Expiration Date: Until further notice
Requirements:
  • 5+ years of hands-on experience with complex ETL processes, data modeling, and large-scale data systems
  • Production experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift) on AWS, GCP, or Azure
  • Proficiency in building and optimizing data transformations and pipelines in Python
  • Experience with columnar storage, MPP databases, and distributed data processing architectures
  • Ability to translate complex technical concepts for diverse audiences, from engineers to business stakeholders
  • Experience with semantic layers, data catalogs, or metadata management systems
  • Familiarity with modern analytical databases like Snowflake, BigQuery, ClickHouse, DuckDB, or similar systems
  • Experience with streaming technologies like Kafka, Pulsar, Redpanda, or Kinesis
Job Responsibility:
  • Design and build robust, high-performance data pipelines using our modern stack (Airflow, Snowflake, Pulsar, Kubernetes) that feed directly into our semantic layer and data catalog
  • Create data products optimized for consumption by AI agents and LLMs where data quality, context, and semantic richness are crucial
  • Structure and transform data to be inherently machine readable, with rich metadata and clear lineage that powers intelligent applications
  • Take responsibility from raw data ingestion through to semantic modeling, ensuring data is not just accurate but contextually rich and agent ready
  • Champion best practices in building LLM consumable data products, optimize for both human and machine consumers, and help evolve our dbt transformation layer
  • Build data products for AI/LLM consumption, not just analytics dashboards
What we offer:
  • Work 4 days a week from our office (Berlin/Mainz) with a passionate team, and 1 day a week from home
  • Regularly join on- and offline team events, company off-sites, and the annual audibene Wandertag
  • Cost of the Deutschland-Ticket covered
  • Access to over 50,000 gyms and wellness facilities through Urban Sports Club
  • Support for personal development with a wide range of programs, trainings, and coaching opportunities
  • Dog-friendly office
Employment Type: Full-time

Senior Data Engineer

Fospha is dedicated to building the world's most powerful measurement solution f...
Location: India, Mumbai
Salary: Not provided
Blenheim Chalcot
Expiration Date: Until further notice
Requirements:
  • Excellent knowledge of PostgreSQL and SQL technologies
  • Fluent in Python
  • Understanding of data architecture, pipelines, and ELT flows/technology/methodologies
  • Understanding of agile methodologies and practices
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Job Responsibility:
  • Implement and maintain ELT (Extract, Load, Transform) processes using scalable data pipelines and data architecture
  • Collaborate with cross-functional teams to understand data requirements and deliver effective solutions
  • Ensure data integrity and quality across various data sources
  • Support data-driven decision-making by providing clean, reliable, and timely data
  • Define the standards for high-quality data for Data Science and Analytics use-cases and help shape the data roadmap for the domain
  • Design, develop, and maintain the data models used by ML Engineers, Data Analysts and Data Scientists to access data
  • Conduct exploratory data analysis to uncover data patterns and trends
  • Identify opportunities for process improvement and drive continuous improvement in data operations
  • Stay updated on industry trends, technologies, and best practices in data engineering
What we offer:
  • Competitive salary
  • Be part of a leading global venture builder, Blenheim Chalcot, and learn from the incredible talent in BC
  • Be exposed to the right mix of challenges and learning and development opportunities
  • Flexible Benefits including Private Medical and Dental, Gym Subsidies, Life Assurance, Pension scheme, etc.
  • 25 days of paid holiday + your birthday off
  • Free snacks in the office
  • Quarterly team socials
Employment Type: Full-time

Senior Data Engineer

Come work on fantastically high-scale systems with us! Blis is an award-winning,...
Location: United Kingdom, Edinburgh
Salary: Not provided
Blis
Expiration Date: Until further notice
Requirements:
  • 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and native-supporting technologies, e.g. Apache Airflow
  • Mastery of Python for data and computational tasks with fluency in data cleansing, validation and composition techniques
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark)
  • Fluency with the Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib, and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • Professional in managing and updating on tasks ensuring appropriate levels of documentation, testing and assurance around their solutions
Job Responsibility:
  • Design, build, monitor, and support large-scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to innovate and support commercial and technical growth
  • Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers

Senior Data Engineer

Come work on fantastically high-scale systems with us! Blis is an award-winning,...
Location: India, Mumbai
Salary: Not provided
Blis
Expiration Date: Until further notice
Requirements:
  • 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints
  • Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture
  • Mastery of building pipelines in GCP, maximising the use of native and native-supporting technologies, e.g. Apache Airflow
  • Mastery of Python for data and computational tasks with fluency in data cleansing, validation and composition techniques
  • Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e. streaming data, relational and non-relational databases, and distributed processing technologies (e.g. Spark)
  • Fluency with the Python libraries typical of data science, e.g. pandas, scikit-learn, scipy, numpy, MLlib, and/or other machine learning and statistical libraries
  • Advanced knowledge of cloud-based services, specifically GCP
  • Excellent working understanding of server-side Linux
  • Professional in managing and updating on tasks ensuring appropriate levels of documentation, testing and assurance around their solutions
Job Responsibility:
  • Design, build, monitor, and support large-scale data processing pipelines
  • Support, mentor, and pair with other members of the team to advance our team’s capabilities and capacity
  • Help Blis explore and exploit new data streams to innovate and support commercial and technical growth
  • Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers
Employment Type: Full-time