Regular Data Engineer

Inetum

Location:
Poland, Warsaw

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Inetum Polska is part of the global Inetum Group and plays a key role in driving the digital transformation of businesses and public institutions. Operating in cities such as Warsaw, Poznan, Katowice, Lublin, Rzeszow, and Lodz, the company offers a wide range of IT services. Inetum Polska actively supports employee development by fully funding training, certifications, and participation in technology conferences. The company is also involved in local social initiatives, such as charitable projects and promoting an active lifestyle, and prides itself on fostering a diverse and inclusive work environment with equal opportunities for all.

Job Responsibility:

  • Design, develop, and implement efficient ELT/ETL processes for large datasets
  • Build and optimize data processing workflows using Apache Spark
  • Utilize Python for data manipulation, transformation, and analysis
  • Develop and manage data pipelines using Apache Airflow
  • Write and optimize SQL queries for data extraction, transformation, and loading
  • Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions
  • Work within an on-premise computing environment for data processing and storage
  • Ensure data quality, integrity, and performance throughout the data lifecycle
  • Participate in the implementation and maintenance of CI/CD pipelines for data processes
  • Utilize Git for version control and collaborative development
  • Troubleshoot and resolve issues related to data pipelines and infrastructure
  • Contribute to the documentation of data processes and systems

Requirements:

  • Minimum 2 years of professional experience as a programmer working with large datasets
  • Experience in at least 1 project involving the processing of large datasets
  • Experience in at least 1 project programming with Python
  • Experience in at least 1 project within an on-premise computing environment
  • Proven experience programming with Apache Spark
  • Proven experience programming with Python
  • Proven experience programming with Apache Airflow
  • Proven experience programming with SQL
  • Familiarity with Hadoop concepts
  • Proven experience in programming ELT/ETL processes
  • Understanding of CI/CD principles and practices
  • Proficiency in using a version control system (Git)
  • Strong self-organization skills and a goal-oriented approach
  • Excellent interpersonal and organizational skills, including planning
  • Strong communication, creativity, independence, professionalism, stress resistance, and inquisitiveness
  • Adaptability and flexibility, with an openness to continuous learning and development

What we offer:

  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses
  • Additional revenue sharing opportunities
  • Ongoing guidance from a dedicated Team Manager
  • Tailored technical mentoring
  • Dedicated team-building budget
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture

Additional Information:

Job Posted:
April 25, 2025

Employment Type:
Full-time

Work Type:
Hybrid work
Job Link Share:
Welcome to CrawlJobs.com
Your Global Job Discovery Platform
At CrawlJobs.com, we simplify finding your next career opportunity by bringing job listings directly to you from all corners of the web. Using cutting-edge AI and web-crawling technologies, we gather and curate job offers from various sources across the globe, ensuring you have access to the most up-to-date job listings in one place.