The Senior Data Engineer plays a critical role in delivering strategic and operational data solutions by designing and optimizing scalable cloud-based data architectures. This role focuses on supporting complex client and prospect needs, particularly in healthcare and TPA (third-party administrator) environments, through proactive problem-solving and cross-functional collaboration. You will develop and maintain robust data pipelines, models, and systems across cloud platforms, ensuring performance, reliability, and compliance. You'll work closely with account management, ETL, data warehouse, business intelligence, and reporting teams to deliver high-impact data solutions. As a subject matter expert, you'll mentor Data Engineers across the platform, guiding them in data modeling, troubleshooting, and best practices. You'll also contribute to organizational efficiency by investigating new technologies and proposing enhancements to the tech stack.
Job Responsibilities:
Build data applications and processes using Python, SQL, and Django
Manage and query data in PostgreSQL, Oracle, and cloud-native databases
Examine, extract, cleanse, and load data while implementing quality assurance rules and tools to ensure consistent and accurate data
Work with healthcare-specific data processes such as EDI file transfers, claims adjudication, audits, eligibility verification, and reporting workflows
Collaborate with cross-functional teams (Data Analysts, Data Scientists, Product, Reporting, Account Management) to define requirements and deliver data-driven solutions
Ensure data quality, integrity, and security through automated validation, auditing, and monitoring, with compliance to HIPAA and CMS regulations
Monitor, maintain, and tune pipeline performance
Proactively troubleshoot and resolve complex data flow and system issues
Provide technical mentorship to Data Engineers, sharing expertise in data modeling, pipeline development, and troubleshooting practices
Research and propose improvements to the tech stack and data engineering processes
Participate in sprint planning, refinement, and estimation to support implementation awareness and delivery
Requirements:
At least one AWS certification (e.g., AWS Certified Data Analytics – Specialty, Big Data – Specialty, Developer – Associate)
7+ years in data engineering or analytics engineering, with a strong focus on cloud-native architectures. Proven experience designing and operating scalable data platforms in AWS
5+ years in healthcare, insurance, or claims processing, including 5+ years working with EDI transaction sets (834, 835, 837, 2222, 2223, 999) and X12 or HL7 standards, plus familiarity with HIPAA and CMS compliance
Expert-level proficiency in SQL (including pivots, window functions, and complex date calculations) and Python for data processing, transformation, and application development
Hands-on experience with orchestration tools like Airflow, containerization with Docker, and CI/CD pipelines. Strong bias for automation and continuous improvement
Proficient in consuming and transforming REST APIs and JSON data into relational models. Skilled in building robust data ingestion and transformation pipelines
Experience with Jira, Bitbucket (Git), and Bitbucket Pipelines, and with collaboration across cross-functional teams including Data Analysts, Data Scientists, Product, and Account Management
Proficient in Excel and BI tools such as Tableau, Power BI, and MicroStrategy for data analysis and reporting
Detail-oriented with a strong focus on data quality, accuracy, and performance tuning for large-scale data systems. Background in cost optimization and system reliability
Ability to mentor engineers, share technical knowledge, and communicate effectively with both technical and non-technical stakeholders. Strong documentation and systems thinking
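As an illustration of the SQL depth the requirements above describe, the sketch below runs a window-function query (a running total partitioned per member) against a hypothetical claims table. The table and column names are assumptions for illustration only, not the company's actual schema.

```python
import sqlite3

# Build a small in-memory claims table (hypothetical schema, for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (claim_id INTEGER, member_id TEXT, paid_amount REAL, service_date TEXT);
INSERT INTO claims VALUES
  (1, 'M1', 100.0, '2024-01-05'),
  (2, 'M1', 250.0, '2024-02-10'),
  (3, 'M2',  75.0, '2024-01-20');
""")

# Window function: per-member running total of paid amounts, ordered by service date.
rows = conn.execute("""
SELECT member_id,
       service_date,
       paid_amount,
       SUM(paid_amount) OVER (
           PARTITION BY member_id
           ORDER BY service_date
       ) AS running_paid
FROM claims
ORDER BY member_id, service_date;
""").fetchall()

# M1's rows accumulate (100.0, then 350.0); M2's total restarts at 75.0
# because PARTITION BY scopes the sum to each member.
for row in rows:
    print(row)
```

The same PARTITION BY / ORDER BY pattern underlies ranking, deduplication, and period-over-period comparisons common in claims and eligibility reporting.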
Nice to have:
Design, build, and maintain scalable ETL/ELT pipelines using AWS, Airflow, and other technologies to ingest and transform healthcare and TPA data, including claims, provider, and eligibility sources
Develop and operate resilient data architectures and workflows (Airflow, CloudWatch, ECS, DAGs) with strong CI/CD, observability, and governance
Deep proficiency in AWS services including S3, Glue, EMR, EC2, MWAA, Lambda, Kinesis, ECS, and experience with Infrastructure as Code tools like Terraform
Deep understanding of relational and non-relational data models, including star/snowflake schemas and dimensional modeling. Skilled in PostgreSQL, Oracle, AWS RDS, Snowflake, and Redshift. Ability to mentor others in data modeling best practices
What we offer:
Competitive base salary and benefits effective day one
Comprehensive medical and dental through our own health solutions (yes, we use what we build)
Unlimited PTO: rest and recharge time is non-negotiable
Mental health support, retirement planning, and financial protection
Professional development with clear career progression and learning budgets
Mission-driven culture where diverse perspectives drive real impact on people's health