Data Engineer - PySpark Jobs in Alpharetta, United States

4 Job Offers

Data Engineer
Join our team in Alpharetta as a Data Engineer to build and enhance a scalable Databricks data platform. You will develop reliable data pipelines using PySpark, Python, and Azure services. This role focuses on foundational development for long-term scalability, not production support. We offer a ...
Location: United States, Alpharetta
Salary: Not provided
Company: Robert Half
Expiration Date: Until further notice
Data Engineer
Seeking a Data Engineer in Alpharetta, GA, for a role requiring travel to unanticipated sites. This position demands a Master's degree and 2+ years of experience with Azure Cloud, Databricks, Apache Spark, SQL, R, and Power BI. Join Logic Loops LLC to build and manage advanced data solutions.
Location: United States, Alpharetta, GA
Salary: Not provided
Company: Logic Loops
Expiration Date: Until further notice
Technology Services Engineer – Data Protection & Disaster Recovery
Seeking a Data Protection & Disaster Recovery Engineer in Alpharetta, GA. This full-time, on-site role requires deep Veeam expertise to design 3-2-1 strategies, ensure compliance, and manage backup/DR for MSP clients. Ideal candidates have 2+ years in an MSP with strong Windows Server and automat...
Location: United States, Alpharetta, Georgia
Salary: Not provided
Company: Tier4 Group
Expiration Date: Until further notice
Big Data Engineer
Join our Big Data Engineering team in Alpharetta, transforming raw data into actionable insights. You will work with Hadoop, Spark (Scala), and cloud platforms like AWS/Azure to build efficient ETL pipelines. This role requires a Master's in Computer Science or a related field and expertise in Li...
Location: United States, Alpharetta
Salary: Not provided
Company: Nebula Partners
Expiration Date: Until further notice
Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark is a pivotal role in the modern data ecosystem, responsible for constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities revolve around the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code using PySpark, the Python library for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes; a minimal sketch of such a pipeline appears after this overview. They build and orchestrate data pipelines that ingest data from diverse sources—such as databases, APIs, and log files—into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount; they implement robust data validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. Furthermore, they are tasked with optimizing the performance and cost of these data systems, fine-tuning Spark jobs for maximum efficiency, and automating deployment processes through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as they are the primary tools for distributed data processing. Profound knowledge of SQL is essential for data manipulation and querying. Experience with workflow orchestration tools like Apache Airflow is a common requirement to manage complex pipeline dependencies. A deep understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills to work with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and contribute to establishing data engineering best practices and standards across an organization.

If you are ready to build the future of data, explore the vast array of Data Engineer - PySpark jobs available and take the next step in your impactful career.
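To make the day-to-day work above concrete, here is a minimal sketch of the kind of PySpark ETL job these roles involve: extract raw CSV data, transform and validate it, then load it as partitioned Parquet. The paths, column names, and the 95% quality threshold are illustrative assumptions, not details from any of the postings listed here.

# Minimal PySpark ETL sketch. All paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl_sketch")
    # One example of the tuning work mentioned above: sizing shuffle
    # parallelism to the data volume instead of using the default.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

# Extract: raw orders landed as CSV files (hypothetical source).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3a://example-bucket/landing/orders/*.csv")
)

# Transform: normalize types, derive a partition column, drop bad rows.
clean = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
)

# Validate: a simple quality gate before publishing (assumed threshold).
total = raw.count()
kept = clean.count()
if total == 0 or kept / total < 0.95:
    raise ValueError(f"Quality gate failed: kept {kept} of {total} rows")

# Load: publish Parquet partitioned by date for downstream consumers.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-bucket/warehouse/orders/")
)

spark.stop()

In a production setting, a job like this would typically be scheduled and retried by an orchestrator such as Apache Airflow, and the ad hoc quality gate would usually be replaced by a dedicated validation framework, matching the orchestration and data-quality responsibilities described above.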