Data Engineer - PySpark Jobs in Philadelphia, United States

4 Job Offers

Director Data Engineering - Bank Tech
Lead the data modernization for a major bank as a Director of Data Engineering. This role blends visionary technology leadership with hands-on management of high-caliber teams. You will define and build resilient, cloud-based enterprise data platforms using AWS and big data technologies. The posi...
Location
United States, Wilmington; McLean; Philadelphia
Salary
244,700.00 - 307,200.00 USD / Year
Capital One
Expiration Date
Until further notice
Data Engineer
Join our team as a Data Engineer in Philadelphia, focusing on healthcare data solutions. You will design and implement data warehousing on Databricks/Azure, utilizing SQL, Python, and Spark. This contract-to-permanent role offers medical benefits and requires expertise in ETL, CI/CD, and cloud pl...
Location
United States, Philadelphia
Salary
Not provided
Robert Half
Expiration Date
Until further notice
Principal Data Engineer
Lead the architecture of our enterprise data platform as a Principal Data Engineer. Design scalable cloud data pipelines on AWS/Azure, leveraging tools like Databricks and IDMC. This technical leadership role, based in Washington DC, Philadelphia, or Wilmington, offers comprehensive benefits incl...
Location
United States, Washington DC; Philadelphia PA; Wilmington DE
Salary
113,200.00 - 146,664.00 USD / Year
AMTRAK
Expiration Date
Until further notice
Distinguished Data Engineer - Bank Tech
Join Capital One as a Distinguished Data Engineer in Bank Tech. This senior role requires 7+ years in data engineering, 3+ in architecture, and AWS expertise. You will set the technical vision and drive complex data platform modernization. Enjoy competitive benefits in locations like Wilmington, ...
Location
United States, Wilmington; Richmond; Philadelphia; McLean
Salary
239,900.00 - 301,200.00 USD / Year
Capital One
Expiration Date
Until further notice
Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark plays a pivotal role in the modern data ecosystem, constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities span the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code with PySpark, the Python API for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes. They build and orchestrate pipelines that ingest data from diverse sources, such as databases, APIs, and log files, into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount: they implement robust validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. They are also tasked with optimizing the performance and cost of these systems, fine-tuning Spark jobs for maximum efficiency and automating deployments through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as they are the primary tools for distributed data processing. Profound knowledge of SQL is essential for data manipulation and querying, and experience with workflow orchestration tools like Apache Airflow is a common requirement for managing complex pipeline dependencies. A deep understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills for working with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and to help establish data engineering best practices and standards across an organization.

If you are ready to build the future of data, explore the Data Engineer - PySpark jobs listed above and take the next step in your impactful career. The short sketches below illustrate what a few of these day-to-day responsibilities can look like in practice.
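To make the ETL work described above concrete, here is a minimal PySpark sketch that reads raw JSON events, cleans them, and writes date-partitioned Parquet. The bucket paths, column names, and app name are hypothetical placeholders, not details from any listing on this page.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: hypothetical raw event logs landed as JSON in cloud storage.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: drop malformed rows, normalize the timestamp, derive a
# partition column, and de-duplicate on the event key.
clean = (
    raw.where(F.col("event_id").isNotNull())
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Load: write the curated table as date-partitioned Parquet.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))

spark.stop()
```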
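The data-quality work mentioned above often boils down to computing a few invariants over a table and failing the pipeline when they break. A minimal sketch, assuming the same hypothetical events table:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-quality-check").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/events/")

# Compute several quality metrics in a single pass over the data.
metrics = df.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("event_id").isNull().cast("int")).alias("null_ids"),
    F.countDistinct("event_id").alias("distinct_ids"),
).first()

# Fail loudly if basic invariants are violated, so orchestration can alert.
assert metrics["row_count"] > 0, "curated table is empty"
assert metrics["null_ids"] == 0, "found rows with NULL event_id"
assert metrics["distinct_ids"] == metrics["row_count"], "duplicate event_id values"

spark.stop()
```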
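Fine-tuning Spark jobs, as the responsibilities above note, often starts with a handful of session-level configuration knobs rather than code changes. The values below are illustrative starting points to adjust per workload, not recommendations:

```python
from pyspark.sql import SparkSession

# Illustrative tuning knobs: adaptive query execution (AQE) lets Spark
# re-optimize plans at runtime, and the shuffle partition count should be
# sized to the job's data volume and cluster.
spark = (
    SparkSession.builder
    .appName("tuned-events-etl")
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)
```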
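Finally, orchestration with Apache Airflow, named in the skills above, means declaring pipeline steps and their dependencies as a DAG. A schematic sketch using Airflow 2.4+ syntax; the script names and schedule are assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A daily pipeline with two dependent steps: run the ETL, then validate it.
with DAG(
    dag_id="events_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_etl",
        bash_command="spark-submit etl_job.py",  # hypothetical job script
    )
    validate_output = BashOperator(
        task_id="validate_output",
        bash_command="spark-submit quality_check.py",  # hypothetical check script
    )

    # The quality check runs only after the ETL task succeeds.
    run_etl >> validate_output
```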
