Senior Data Engineer

https://www.cvshealth.com/

CVS Health

Location:
United States

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
101,970.00 - 222,480.00 USD / Year

Job Description:

At CVS Health, we’re building a world of health around every consumer and surrounding ourselves with dedicated colleagues who are passionate about transforming health care. As the nation’s leading health solutions company, we reach millions of Americans through our local presence, digital channels and more than 300,000 purpose-driven colleagues – caring for people where, when and how they choose in a way that is uniquely more connected, more convenient and more compassionate. And we do it all with heart, each and every day.

Job Responsibility:

  • Architect and develop robust, scalable ETL/ELT pipelines using Cloud Dataflow, Cloud Composer (Airflow), and Pub/Sub for both batch and streaming use cases (see the sketch after this list)
  • Leverage BigQuery as the central data warehouse and design integrations with other GCP services (e.g., Cloud Storage, Cloud Functions)
  • Build and optimize analytical data models in BigQuery
  • Implement partitioning, clustering, and materialized views for performance and cost efficiency
  • Ensure compliance with data governance, access controls, and IAM best practices
  • Develop integrations with external systems (APIs, flat files, etc.) using GCP-native or hybrid approaches
  • Utilize tools like Dataflow or custom Python/Java services on Cloud Functions or Cloud Run to handle transformations and ingestion logic
  • Build automated CI/CD pipelines using Cloud Build, GitHub Actions, or Jenkins for deploying data pipeline code and workflows
  • Set up observability using Cloud Monitoring, Cloud Logging, and Error Reporting to ensure pipeline reliability
  • Lead architectural decisions for data platforms and mentor junior engineers on cloud-native data engineering patterns
  • Promote best practices for code quality, version control, cost optimization, and data security in a GCP environment
  • Drive initiatives around data democratization, including building reusable datasets and data catalogs via Dataplex or Data Catalog
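
Several of the responsibilities above can be made concrete with a short example. The following is a minimal sketch, not CVS Health's actual pipeline code: it assumes Airflow 2.4+ on Cloud Composer with the Google provider installed, and every resource name (bucket, project, dataset, table, column) is hypothetical. The DAG loads daily Parquet drops from Cloud Storage into a day-partitioned, clustered BigQuery table.

  # Minimal batch-ingestion DAG sketch; all resource names are hypothetical.
  from datetime import datetime

  from airflow import DAG
  from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
      GCSToBigQueryOperator,
  )

  with DAG(
      dag_id="daily_claims_ingest",      # hypothetical pipeline name
      start_date=datetime(2025, 1, 1),
      schedule="@daily",                 # Airflow 2.4+ (schedule_interval on older versions)
      catchup=False,
  ) as dag:
      load_to_bq = GCSToBigQueryOperator(
          task_id="load_claims_to_bigquery",
          bucket="example-raw-landing",  # hypothetical GCS bucket
          source_objects=["claims/{{ ds }}/*.parquet"],
          destination_project_dataset_table="example_project.analytics.claims",
          source_format="PARQUET",
          write_disposition="WRITE_APPEND",
          # Day partitioning plus clustering keeps scans narrow and costs low.
          time_partitioning={"type": "DAY", "field": "service_date"},
          cluster_fields=["member_id"],
      )

Partitioning on service_date and clustering on member_id mean that downstream queries filtering on those columns scan only the relevant slices of the table, which is the cost-efficiency point raised in the list above.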

Requirements:

  • 3+ years of experience with SQL, NoSQL
  • 3+ years of experience with Python (or a comparable scripting language)
  • Experience developing and maintaining applications built on the Pega platform
  • Ability to understand application requirements and design
  • Proficiency writing code in Java and JavaScript
  • 3+ years of experience with Data warehouses (such as data modeling and technical architectures) and infrastructure components
  • 3+ years of experience with ETL/ELT, and building high-volume data pipelines
  • 3+ years of experience with reporting/analytic tools
  • 3+ years of experience with Query optimization, data structures, transformation, metadata, dependency, and workload management
  • 3+ years of experience with Big data and cloud architecture
  • 3+ years of hands-on experience building modern data pipelines within a major cloud platform (GCP, AWS, Azure)
  • 3+ years of experience with deployment/scaling of apps in containerized environments (e.g., Kubernetes, AKS)
  • 3+ years of experience with real-time and streaming technology (e.g., Azure Event Hubs, Azure Functions, Kafka, Spark Streaming)
  • 1+ year(s) of experience soliciting complex requirements and managing relationships with key stakeholders
  • 1+ year(s) of experience independently managing deliverables

Nice to have:

  • Experience in designing and building data engineering solutions in cloud environments (preferably GCP)
  • Experience with Git, CI/CD pipeline, and other DevOps principles/best practices
  • Experience with Bash shell scripts, UNIX utilities, and UNIX commands
  • Ability to leverage multiple tools and programming languages to analyze and manipulate data sets from disparate data sources
  • Knowledge of API development
  • Experience with complex systems and solving challenging analytical problems
  • Strong collaboration and communication skills within and across teams
  • Knowledge of data visualization and reporting
  • Experience with schema design and dimensional data modeling
  • Google Professional Data Engineer Certification
  • Knowledge of microservices and SOA
  • Formal SAFe and/or agile experience
  • Previous healthcare experience and domain knowledge
  • Experience designing, building, and maintaining data processing systems
  • Experience architecting and building data warehouses and data lakes

What we offer:
  • Affordable medical plan options
  • 401(k) plan (including matching company contributions)
  • Employee stock purchase plan
  • No-cost programs for all colleagues including wellness screenings, tobacco cessation and weight management programs, confidential counseling and financial coaching
  • Paid time off
  • Flexible work schedules
  • Family leave
  • Dependent care resources
  • Colleague assistance programs
  • Tuition assistance
  • Retiree medical access

Additional Information:

Job Posted:
July 23, 2025

Expiration:
September 29, 2025

Employment Type:
Full-time

Work Type:
Hybrid work