
Data Engineer - PySpark Jobs in Lublin, Poland (Hybrid work)

2 Job Offers

Senior/Architect Data Engineer
Lead the architecture of a cutting-edge Databricks multi-agent processing engine. Utilize Mosaic AI, MLflow, and Unity Catalog to automate processes at scale. Expertise in real-time model serving, MLOps, and cloud governance on Azure/AWS is essential. Enjoy a hybrid model, cafeteria benefits, and...
Location: Poland: Warsaw; Poznań; Lublin; Katowice; Rzeszów
Salary: Not provided
Company: Inetum (https://www.inetum.com)
Expiration Date: Until further notice

Data Engineer
Join Sollers as a Data Engineer to design and optimize scalable data pipelines using Snowflake, Databricks, and cloud tech (Azure/AWS). You'll need 3+ years of experience with ETL/ELT, SQL, Python, and PySpark. Enjoy flexible hybrid work, a clear career path, and offices across major Polish cities.
Location: Poland: Białystok; Gdańsk; Kraków; Lublin; Łódź; Poznań; Szczecin; Warsaw; Wrocław
Salary: Not provided
Company: Sollers Consulting (sollers.eu)
Expiration Date: Until further notice

Are you a data architect with a passion for building robust, scalable systems? Your search for Data Engineer - PySpark jobs ends here. A Data Engineer specializing in PySpark plays a pivotal role in the modern data ecosystem, responsible for constructing the foundational data infrastructure that powers analytics, machine learning, and business intelligence. These professionals are the master builders of the data world, transforming raw, unstructured data into clean, reliable, and accessible information for data scientists, analysts, and business stakeholders. If you are seeking jobs where you can work with cutting-edge big data technologies to solve complex data challenges at scale, this is your domain.

In this profession, typical responsibilities revolve around the entire data pipeline lifecycle. Data Engineers design, develop, test, and maintain large-scale data processing systems. A core part of their daily work involves writing efficient, scalable code with PySpark, the Python API for Apache Spark, to perform complex ETL (Extract, Transform, Load) or ELT processes. They build and orchestrate data pipelines that ingest data from diverse sources, such as databases, APIs, and log files, into data warehouses like Snowflake or data lakes on cloud platforms like AWS, Azure, and GCP. Ensuring data quality and reliability is paramount: they implement robust data validation, monitoring, and observability frameworks to guarantee that data is accurate, timely, and trusted. They are also tasked with optimizing the performance and cost of these systems, fine-tuning Spark jobs for maximum efficiency, and automating deployments through CI/CD and Infrastructure as Code (IaC) practices.

To excel in Data Engineer - PySpark jobs, a specific and powerful skill set is required. Mastery of Python and PySpark is non-negotiable, as PySpark is the primary tool for distributed data processing. Profound knowledge of SQL is essential for data manipulation and querying. Experience with workflow orchestration tools like Apache Airflow is a common requirement for managing complex pipeline dependencies. A deep understanding of cloud data solutions (AWS, GCP, Azure) and platforms like Databricks is highly valued. Beyond technical prowess, successful candidates possess strong problem-solving abilities to debug and optimize data flows, a keen eye for system design and architecture, and excellent collaboration skills for working with cross-functional teams, including data scientists and business analysts. They are often expected to mentor junior engineers and help establish data engineering best practices and standards across an organization.

If you are ready to build the future of data, explore the Data Engineer - PySpark jobs above and take the next step in your impactful career.
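
To make the ETL work described above concrete, here is a minimal PySpark sketch of the extract-transform-load pattern with a simple data-quality gate. The bucket paths, column names, and the 95% quality threshold are illustrative assumptions, not details taken from either listing above.

```python
# Minimal PySpark ETL sketch. Paths, columns, and the quality
# threshold are hypothetical placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: ingest raw JSON events from a landing zone (assumed path).
raw = spark.read.json("s3://example-bucket/landing/orders/")

# Transform: normalize types, drop malformed rows, derive a partition key.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Data-quality gate: fail fast if too many rows were rejected.
total, kept = raw.count(), orders.count()
if total > 0 and kept / total < 0.95:
    raise ValueError(f"Quality gate failed: kept only {kept}/{total} rows")

# Load: write partitioned Parquet to the curated zone of a data lake.
(orders.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/"))

spark.stop()
```

A production pipeline would typically externalize the paths and threshold into configuration and load into a governed table (for example Delta Lake or Snowflake) rather than raw Parquet.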
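
Likewise, a hedged sketch of how such a job might be scheduled with Apache Airflow, which the skills list above calls out for pipeline orchestration. The DAG id, schedule, and spark-submit command are hypothetical, and the code assumes Airflow 2.x; real deployments often use SparkSubmitOperator or a managed Databricks/EMR operator instead of a bash command.

```python
# Minimal Airflow 2.x DAG sketch for a daily PySpark ETL run.
# DAG id, schedule, and the submit command are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="orders_etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # older Airflow 2.x versions use schedule_interval
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_pyspark_etl",
        # Hypothetical submit command for the ETL script sketched above.
        bash_command="spark-submit --master yarn orders_etl.py",
    )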
