This role is critical to building and maintaining robust data pipelines that power reporting and analytics for risk adjustment and provider performance projects. You'll work closely with analysts, reporting leads, and business stakeholders to transform raw data into trusted, scalable datasets that support the market provider performance office and executive-level dashboards.
Job Responsibilities:
Design, build, and maintain ETL pipelines using PySpark and AWS Glue to process large volumes of structured and semi-structured data (a brief PySpark sketch follows this list)
Collaborate with reporting and insights teams to gather data requirements and ensure accurate, timely, and complete data delivery
Develop advanced SQL queries, including complex joins and Common Table Expressions (CTEs) for transformation, aggregation, and analysis
Use Git for version control, code management, and team collaboration
Design and manage scalable data lakes and warehouses on AWS using services such as S3, Redshift, Glue Catalog, and Athena (see the Athena sketch after this list)
Build and optimize data solutions on DynamoDB and other NoSQL/relational databases as needed
Create interactive dashboards and visualizations in Amazon QuickSight to support data-driven decision-making
Automate workflows using AWS Step Functions, EventBridge, and Lambda to support efficient data operations (see the Lambda sketch after this list)
Monitor and optimize data processing performance, ensuring reliability and scalability
Troubleshoot and resolve data quality or processing issues across source systems and downstream reporting
Keep up with industry best practices in cloud data engineering, security, and DevOps automation
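As a rough illustration of the PySpark and SQL work described above, a minimal pipeline might read raw claims data, aggregate it with a Common Table Expression, and write a curated dataset. The S3 paths, table, and column names here are hypothetical, not taken from this posting:

    from pyspark.sql import SparkSession

    # Minimal sketch; every path, table, and column name is a placeholder.
    spark = SparkSession.builder.appName("claims_etl_sketch").getOrCreate()

    # Extract: read raw, semi-structured claims data from the lake.
    raw = spark.read.parquet("s3://example-bucket/raw/claims/")
    raw.createOrReplaceTempView("claims")

    # Transform: a CTE filters to paid claims, then aggregates per provider.
    curated = spark.sql("""
        WITH eligible AS (
            SELECT provider_id, claim_amount
            FROM claims
            WHERE claim_status = 'PAID'
        )
        SELECT provider_id,
               COUNT(*)          AS paid_claims,
               SUM(claim_amount) AS total_paid
        FROM eligible
        GROUP BY provider_id
    """)

    # Load: write the curated dataset back to the lake for reporting.
    curated.write.mode("overwrite").parquet("s3://example-bucket/curated/provider_metrics/")

In an AWS Glue job, the same logic would typically run through a GlueContext and the Glue Data Catalog rather than direct S3 paths.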
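The Athena side of the stack can be exercised from Python as well. A minimal sketch, assuming a curated_db database and an example results bucket (both hypothetical):

    import boto3

    athena = boto3.client("athena")

    # Run a SQL query against Glue Catalog tables stored on S3;
    # Athena writes results to the given S3 location asynchronously.
    resp = athena.start_query_execution(
        QueryString="SELECT provider_id, total_paid FROM provider_metrics LIMIT 10",
        QueryExecutionContext={"Database": "curated_db"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )
    print(resp["QueryExecutionId"])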
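For the workflow-automation bullet, one common pattern is an EventBridge schedule invoking a Lambda function that starts a Glue job, with Step Functions sequencing several such steps. A minimal handler sketch; the Glue job name is an assumption, not from this posting:

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # Triggered by an EventBridge rule; kicks off a (hypothetical) Glue ETL job.
        run = glue.start_job_run(JobName="nightly-claims-etl")
        return {"JobRunId": run["JobRunId"]}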
Requirements:
Hands-on experience with PySpark and/or Python
Advanced SQL skills
Experience with version control using Git or similar tools
Experience in management consulting, business process consulting, and/or strategic business planning
Experience with enterprise-wide and/or cross-functional large-scale initiatives with a high degree of complexity
Demonstrated experience successfully implementing change in regulated and highly complex organizations
Bachelor's degree in Business Administration/Management, Finance, Economics, Statistics, Mathematics, Data Science, HIM, Information Systems, Computer Science, or other relevant degree, or 5 years of equivalent work experience
Nice to have:
2+ years of Government Programs experience, including Medicare, Commercial ACA, and/or Medicaid
Working knowledge and understanding of risk adjustment
Understanding of Databricks and its data engineering concepts
Strong understanding of on-premises data warehousing concepts and AWS data lake architecture
Exposure to big data technologies
Experience building and supporting reporting solutions in healthcare, especially related to risk adjustment or provider attribution
Knowledge of DevOps or CI/CD practices for data pipeline development
Demonstrated ability to work effectively in cross-functional teams and manage multiple priorities in a fast-paced environment
What we offer:
Affordable medical plan options
401(k) plan including matching company contributions