At CVS Health, we’re building a world of health around every consumer and surrounding ourselves with dedicated colleagues who are passionate about transforming health care. As the nation’s leading health solutions company, we reach millions of Americans through our local presence, digital channels and more than 300,000 purpose-driven colleagues – caring for people where, when and how they choose in a way that is uniquely more connected, more convenient and more compassionate. And we do it all with heart, each and every day.
Job Responsibilities:
develop, build, and manage large-scale data structures, pipelines, and efficient Extract/Transform/Load (ETL) workflows to address complex problems and support business applications
develop large-scale data structures and pipelines to organize, collect, and standardize data to generate insights and address reporting needs
write ETL (Extract/Transform/Load) processes, design database systems, and develop tools for real-time and offline analytic processing that improve existing systems and expand capabilities
collaborate with Data Science team to transform data and integrate algorithms and models into automated processes
test and maintain systems and troubleshoot malfunctions
leverage knowledge of Hadoop architecture, HDFS commands, and designing and optimizing queries to build data pipelines
utilize programming skills in Python, Java, or similar languages to build robust data pipelines and dynamic systems
build data marts and data models to support Data Science and other internal customers
integrate data from a variety of sources and ensure adherence to data quality and accessibility standards
analyze current information technology environments to identify and assess critical capabilities and recommend solutions to complex business problems
experiment with available tools and advise on new tools to provide optimal solutions that meet the requirements dictated by the model/use case
mentor junior Data Engineers
Requirements:
Bachelor's degree (or foreign equivalent) in Computer Science, Data Science, Statistics, Mathematics, Analytics, or a related field and five (5) years of progressive, post-baccalaureate experience in the job offered or related occupation
Requires five (5) years of experience in each of the following: Developing data analytics solutions
CI/CD tools: Jenkins
Python
Agile Methodologies
Utilizing Teradata database
Managing the Software development lifecycle (SDLC)
Writing Extract/Transform/Load (ETL) processes
Developing backend services, performing code reviews, and collaborating with peers on software development solutions
Data warehousing and Big Data implementation
What we offer:
full range of medical, dental, and vision benefits