Python/PySpark Engineer Slovakia Jobs

1 Job Offer

Python/Pyspark Engineer
Join our team in Bratislava as a Python/PySpark Engineer. You will develop a modern Azure Lakehouse architecture, building complex data pipelines for the insurance sector. We seek an engineer with 4+ years of Python/SQL experience and PySpark proficiency. Work in a global Agile team, designing so...
Location
Slovakia, Bratislava
Salary
Not provided
Signify Technology
Expiration Date
Until further notice
Python/PySpark Engineer jobs sit at the intersection of data engineering, software development, and big data analytics. Professionals in this role specialize in designing, building, and maintaining robust, scalable data processing systems and pipelines. They combine Python, a versatile and widely used programming language, with PySpark, the Python API for Apache Spark, to process vast volumes of data efficiently. Their core mission is to transform raw, often disparate data into clean, structured, and accessible formats that fuel data-driven decision-making, advanced analytics, and machine learning applications.

Typical responsibilities are comprehensive and pivotal to modern data infrastructure. These engineers design and implement data transformation workflows, ensuring data quality, consistency, and reliability. A significant part of the role involves developing and optimizing ETL (Extract, Transform, Load) or ELT processes within cloud-based data platforms and lakehouse architectures. They build backend services and applications that integrate data from multiple sources, enabling cohesive business intelligence. Collaboration is key: they work alongside data scientists, analysts, product owners, and other engineers to understand requirements and deliver solutions that meet specific business needs. They are also responsible for writing production-grade, maintainable code, conducting peer reviews, implementing automation, and ensuring the performance and sustainability of data applications through established software engineering best practices.

Succeeding in these roles requires a specific skill set. Proficiency in Python is fundamental, coupled with deep hands-on experience with the PySpark framework for distributed data processing. Strong SQL skills are mandatory for data querying and manipulation. A solid understanding of big data concepts, distributed computing principles, and cloud platforms (such as AWS, Azure, or GCP) is highly valuable, and familiarity with data modeling, data warehousing and lakehouse concepts, and workflow orchestration tools like Airflow is often expected. Beyond technical prowess, common requirements include strong problem-solving abilities, effective communication skills to bridge technical and non-technical domains, and experience working in Agile/Scrum methodologies. Most positions seek candidates with a bachelor's degree in computer science or a related field and several years of relevant project experience.

For those passionate about harnessing the power of big data, Python/PySpark Engineer jobs offer a challenging and rewarding career path building the foundational systems of the information economy.
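The extract-transform-load work described above can be sketched minimally. PySpark itself requires a Spark runtime, so this illustrative example uses plain Python generators to show the same pattern, with comments noting the analogous PySpark calls; the record fields and values are hypothetical.

```python
# Minimal ETL sketch: raw, inconsistent records -> clean, structured rows.
# In PySpark the equivalent steps would use spark.read (extract),
# DataFrame.withColumn/filter (transform), and DataFrame.write (load).

RAW_RECORDS = [  # hypothetical raw input with typical quality problems
    {"policy_id": " P-001 ", "premium": "1200.50", "country": "sk"},
    {"policy_id": "P-002", "premium": "not_a_number", "country": "SK"},
    {"policy_id": "", "premium": "980.00", "country": "sk"},
]

def extract(records):
    """Yield raw records (stands in for spark.read.json/csv)."""
    yield from records

def transform(records):
    """Clean and validate: trim ids, parse premiums, normalise country codes."""
    for rec in records:
        policy_id = rec["policy_id"].strip()
        if not policy_id:
            continue  # drop rows with no usable key
        try:
            premium = float(rec["premium"])
        except ValueError:
            continue  # drop rows whose premium cannot be parsed
        yield {
            "policy_id": policy_id,
            "premium": premium,
            "country": rec["country"].upper(),
        }

def load(rows):
    """Collect cleaned rows (stands in for DataFrame.write)."""
    return list(rows)

clean = load(transform(extract(RAW_RECORDS)))
```

Chaining generators this way mirrors Spark's lazy evaluation: nothing is processed until `load` consumes the pipeline, just as a Spark job runs only when an action is triggered.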
