CrawlJobs

Data Engineer - PySpark Jobs in Wroclaw, Poland

10 Job Offers

Snowflake Data Engineer
Join our team as a Snowflake Data Engineer for a key migration project in the Healthcare Commercial sector. You will design and build ETL/ELT processes using dbt Cloud within Snowflake infrastructure. This agile role, based in major Polish cities, requires expertise in Snowflake, dbt, and Conflue...
Location
Poland, Wroclaw; Bialystok; Cracow; Gdansk; Lodz; Szczecin; Warsaw
Salary
100.00 - 160.00 PLN / Hour
Spyrosoft (spyro-soft.com)
Expiration Date
Until further notice
Streaming Data Platform Engineer
Join a leading German energy innovator as a Streaming Data Platform Engineer. You will configure and optimize time-series databases (InfluxDB) and stream-processing pipelines (Kafka) using Python/Java. Leverage your expertise in Kubernetes, Azure, and CI/CD to build scalable telemetry infrastruct...
Location
Poland, Wroclaw; Warsaw; Szczecin; Cracow; Bialystok; Gdansk; Lodz
Salary
150.00 - 190.00 PLN / Hour
Spyrosoft (spyro-soft.com)
Expiration Date
Until further notice
Integration Developer / Data Engineer
Join our team in Wroclaw as an Integration Developer/Data Engineer. You will design and maintain critical data flows between MS SQL and Microsoft Dataverse/Dynamics 365 using Azure Data Factory. The role requires expertise in T-SQL, ETL optimization, and Dataverse. Collaborate with teams to deliv...
Location
Poland, Wroclaw
Salary
Not provided
Eviden (eviden.com)
Expiration Date
Until further notice
Senior Data Engineer
Join a global jewelry leader's transformation to a modern Azure PaaS data platform. As a Senior Data Engineer in Wroclaw, you'll design scalable data products using Synapse, Data Factory, and Databricks. Leverage your 5+ years of expertise to enable advanced analytics and drive commercial impact....
Location
Poland, Wroclaw
Salary
Not provided
Zoolatech (zoolatech.com)
Expiration Date
Until further notice
Senior Data Engineer
Join a global jewelry leader's transformation to a modern Azure PaaS data platform. As a Senior Data Engineer in Wroclaw, you'll design scalable data products using Synapse, Data Factory, and Databricks. Leverage your 5+ years' expertise to enable advanced analytics and drive commercial impact. E...
Location
Poland, Wroclaw
Salary
Not provided
Zoolatech (zoolatech.com)
Expiration Date
Until further notice
Senior Data Engineer
Join a new Data Engineering team at a leading European Online Fashion & Beauty Retailer. As a Senior Data Engineer in Wroclaw, you will migrate key GA4 pipelines to an internal Customer Behavior Source using PySpark, Airflow, and Databricks. Ensure data correctness and stability while working wit...
Location
Poland, Wroclaw
Salary
Not provided
Zoolatech (zoolatech.com)
Expiration Date
Until further notice
Data Scientist / ML Engineer
Join Addepto as a Data Scientist/ML Engineer and build scalable AI solutions for global clients. Leverage your expertise in Python, ML libraries, and cloud platforms (AWS/Azure) on innovative projects. Enjoy flexible remote work from key Polish cities and accelerate your career with top-tier trai...
Location
Poland, Warsaw; Cracow; Wroclaw; Bialystok
Salary
12600.00 - 28560.00 PLN / Month
Addepto sp. z o.o. (addepto.com)
Expiration Date
Until further notice
Data Engineer (Spark)
Join Addepto as a Data Engineer (Spark) and build scalable AI solutions for global clients. Leverage your 4+ years in Big Data, Python/Java/Scala, and cloud tech like Spark, Kafka, and AWS. Enjoy remote flexibility from key Polish cities while growing in a passionate AI team with top-tier projects.
Location
Poland, Warsaw; Cracow; Wroclaw; Bialystok
Salary
15120.00 - 31920.00 PLN / Month
Addepto sp. z o.o. (addepto.com)
Expiration Date
Until further notice
Junior Data Engineer
Join our team as a Junior Data Engineer and build scalable data platforms using Databricks, Spark, and Azure. You will design ETL pipelines, implement CI/CD processes, and collaborate on international projects for top-tier clients. Enjoy flexible remote work from major Polish cities, a supportive...
Location
Poland, Warsaw; Cracow; Wroclaw; Bialystok
Salary
8400.00 - 15120.00 PLN / Month
Addepto sp. z o.o. (addepto.com)
Expiration Date
Until further notice
Data Engineer
Join Addepto as a Data Engineer and build scalable AI & Big Data solutions for global clients like Porsche and Rolls Royce. Leverage Python, Databricks, Spark, and Azure to design robust data pipelines and governance processes. Enjoy flexible remote work from Poland, career growth, and projects w...
Location
Poland, Warsaw; Cracow; Wroclaw; Bialystok
Salary
15120.00 - 28560.00 PLN / Month
Addepto sp. z o.o. (addepto.com)
Expiration Date
Until further notice
Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark is a pivotal role in the modern data ecosystem, responsible for constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities revolve around the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code using PySpark, the Python API for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes. They build and orchestrate data pipelines that ingest data from diverse sources, such as databases, APIs, and log files, into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount; they implement robust data validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. Furthermore, they are tasked with optimizing the performance and cost of these data systems, fine-tuning Spark jobs for maximum efficiency, and automating deployment processes through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as it is the primary tool for distributed data processing. Profound knowledge of SQL is essential for data manipulation and querying. Experience with workflow orchestration tools like Apache Airflow is a common requirement to manage complex pipeline dependencies. A deep understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills to work with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and contribute to establishing data engineering best practices and standards across an organization.

If you are ready to build the future of data, explore the vast array of Data Engineer - PySpark jobs available and take the next step in your impactful career.
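To make the pipeline work described above more concrete, here is a minimal PySpark ETL sketch that reads raw events, applies basic validation, and writes partitioned Parquet for downstream analytics. The bucket paths, column names, and validation rules are hypothetical placeholders chosen for illustration; they are not taken from any of the postings listed here.

# Minimal PySpark ETL sketch: ingest raw CSV events, validate, write Parquet.
# All paths, column names, and rules below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Spark session; in practice cluster sizing and configuration usually come
# from the platform (e.g. Databricks, EMR) rather than being hard-coded here.
spark = SparkSession.builder.appName("events-etl-example").getOrCreate()

# Extract: read raw events from a hypothetical landing zone.
raw = (
    spark.read
    .option("header", True)
    .csv("s3://example-bucket/landing/events/")
)

# Transform: basic validation and cleanup. Drop rows without a user id,
# cast the event timestamp, derive a partition date, and de-duplicate.
clean = (
    raw
    .filter(F.col("user_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])
)

# Load: write partitioned Parquet for downstream analytics and BI.
(
    clean.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)

spark.stop()

A pipeline like this is typically scheduled with an orchestrator such as Apache Airflow. The short DAG sketch below assumes a recent Airflow 2.x installation and a hypothetical spark-submit entry point; treat it as an outline rather than a production configuration.

# Minimal Airflow DAG sketch for scheduling a PySpark batch job once a day.
# The DAG id, schedule, and spark-submit command are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit the (hypothetical) PySpark script to the cluster.
    run_etl = BashOperator(
        task_id="run_pyspark_etl",
        bash_command="spark-submit /opt/jobs/example_etl.py",
    )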
