
Data Engineer - PySpark Poland, Warsaw Jobs (Hybrid work)

9 Job Offers

Junior Data Engineer
Join Bolt's Data Platform Ingestion team in Warsaw as a Junior Data Engineer. You'll build and maintain data pipelines using Spark, Kafka, and CDC, learning modern technologies. Enjoy a rewarding salary, stock options, wellness perks, and a collaborative environment. Help make data reliable and a...
Location
Poland, Warsaw
Salary
Not provided
Company
Bolt
Expiration Date
Until further notice
Regular Data Engineer
Join Inetum Polska as a Regular Data Engineer in Warsaw. Design and optimize ELT/ETL processes using Python, Apache Spark, Airflow, and SQL in an on-premise environment. Benefit from a hybrid model, flexible hours, a cafeteria system, and strong development support. Be part of a team driving digi...
Location
Poland, Warsaw
Salary
Not provided
Company
Inetum
Expiration Date
Until further notice
Senior Data Engineer
Join Inetum Polska as a Senior Data Engineer in Warsaw. Design and implement cutting-edge data solutions using Databricks, Spark, and cloud platforms (AWS/Azure/GCP). Lead strategic initiatives in a hybrid model with flexible hours, a cafeteria system, and strong development support.
Location
Poland, Warsaw
Salary
Not provided
Company
Inetum
Expiration Date
Until further notice
Data / Machine Learning Engineer
Join Inetum's strategic AI project in Warsaw as a Machine Learning Engineer. Design and implement ML models in Azure/GCP cloud environments using Python and frameworks like TensorFlow. Enjoy a hybrid model, flexible hours, and a benefits cafeteria. Shape cutting-edge AI solutions with a real busi...
Location
Poland, Warsaw
Salary
Not provided
Company
Inetum
Expiration Date
Until further notice
Senior AI Data Engineer
Lead AI innovation for a telecom client's Customer Care platform in this Technical Leader role. Design and deploy ML/NLP models using Python and Azure AI services within a scalable cloud architecture. Enjoy a hybrid model with flexible hours, a modern Warsaw office, and a personalized benefits pa...
Location
Poland, Warsaw
Salary
Not provided
Company
Inetum
Expiration Date
Until further notice
Senior/Architect Data Engineer
Lead the architecture of a cutting-edge Databricks multi-agent processing engine. Utilize Mosaic AI, MLflow, and Unity Catalog to automate processes at scale. Expertise in real-time model serving, MLOps, and cloud governance on Azure/AWS is essential. Enjoy a hybrid model, cafeteria benefits, and...
Location
Poland, Warsaw; Poznań; Lublin; Katowice; Rzeszów
Salary
Not provided
Company
Inetum
Expiration Date
Until further notice
Technical Leader - Data Engineering
Lead a high-performing Data Engineering team in Warsaw, utilizing your expertise in Azure/GCP, ETL/ELT, and Python/SQL. Drive architectural strategy, client engagement, and commercial outcomes in a hybrid role. Enjoy flexible hours, a benefits cafeteria, and a supportive culture at Inetum Polska.
Location
Poland, Warsaw
Salary
Not provided
Company
Inetum
Expiration Date
Until further notice
Senior Observability Engineer for Data Middleware
Join our team as a Senior Observability Engineer in Warsaw. You will provide engineering leadership for a global MeshIQ platform, focusing on Data Middleware like Kafka and MQ. The role requires expertise in Observability tools, automation, and DevOps practices. We offer a comprehensive benefits ...
Location
Poland, Warsaw
Salary
Not provided
Company
Citi
Expiration Date
Until further notice
Data Engineer
Join Sollers as a Data Engineer to design and optimize scalable data pipelines using Snowflake, Databricks, and cloud tech (Azure/AWS). You'll need 3+ years of experience with ETL/ELT, SQL, Python, and PySpark. Enjoy flexible hybrid work, a clear career path, and offices across major Polish cities.
Location
Poland, Białystok; Gdańsk; Kraków; Lublin; Łódź; Poznań; Szczecin; Warsaw; Wrocław
Salary
Not provided
Company
Sollers Consulting
Expiration Date
Until further notice
Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark is a pivotal role in the modern data ecosystem, responsible for constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities revolve around the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code using PySpark, the Python API for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes. They build and orchestrate data pipelines that ingest data from diverse sources, such as databases, APIs, and log files, into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount: they implement robust data validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. Furthermore, they are tasked with optimizing the performance and cost of these data systems, fine-tuning Spark jobs for maximum efficiency, and automating deployment processes through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as they are the primary tools for distributed data processing. Profound knowledge of SQL is essential for data manipulation and querying. Experience with workflow orchestration tools like Apache Airflow is a common requirement to manage complex pipeline dependencies. A deep understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills to work with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and help establish data engineering best practices and standards across an organization.

If you are ready to build the future of data, explore the vast array of Data Engineer - PySpark jobs available and take the next step in your impactful career.
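To make the day-to-day work these roles describe concrete, here is a minimal sketch of a PySpark ETL job: extract raw JSON events, transform and type the columns, apply a simple data quality gate, and load partitioned Parquet. The paths, column names, and validation rule are hypothetical illustrations, not taken from any listing above.

```python
# Minimal PySpark ETL sketch. All paths, column names, and the
# validation rule below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: ingest raw events from a hypothetical landing zone.
raw = spark.read.json("s3://example-landing/orders/")

# Transform: normalize types and derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Validate: fail fast if the primary key is ever null.
if orders.filter(F.col("order_id").isNull()).limit(1).count() > 0:
    raise ValueError("Data quality check failed: null order_id")

# Load: write partitioned Parquet to a hypothetical warehouse zone.
(orders.write.mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-warehouse/orders/"))
```

In production, a job like this would typically be scheduled by an orchestrator such as Apache Airflow, with the validation step expanded into a fuller monitoring framework and the write tuned for partition sizes and file counts.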
