Regular Data Engineer

Inetum (https://www.inetum.com)

Location:
Poland, Warsaw

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Inetum Polska is part of the global Inetum Group and plays a key role in driving the digital transformation of businesses and public institutions. Operating in cities such as Warsaw, Poznan, Katowice, Lublin, Rzeszow, and Lodz, the company offers a wide range of IT services. Inetum Polska actively supports employee development by fully funding training, certifications, and participation in technology conferences. Additionally, the company is involved in local social initiatives, such as charitable projects and promoting an active lifestyle. It prides itself on fostering a diverse and inclusive work environment, ensuring equal opportunities for all.

Job Responsibility:

  • Design, develop, and implement efficient ELT/ETL processes for large datasets
  • Build and optimize data processing workflows using Apache Spark
  • Utilize Python for data manipulation, transformation, and analysis
  • Develop and manage data pipelines using Apache Airflow (a generic sketch follows this list)
  • Write and optimize SQL queries for data extraction, transformation, and loading
  • Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions
  • Work within an on-premise computing environment for data processing and storage
  • Ensure data quality, integrity, and performance throughout the data lifecycle
  • Participate in the implementation and maintenance of CI/CD pipelines for data processes
  • Utilize Git for version control and collaborative development
  • Troubleshoot and resolve issues related to data pipelines and infrastructure
  • Contribute to the documentation of data processes and systems
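
A generic illustration of the kind of work described above (a sketch only, not Inetum's actual pipeline): an Apache Airflow DAG that submits a PySpark transformation job and follows it with a placeholder data-quality check. The DAG id, script path, connection id, and arguments are invented for the example.

    # Minimal sketch: Airflow DAG orchestrating a PySpark ETL step.
    # All names and paths below are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator


    def check_row_count(**context):
        # Placeholder data-quality gate; a real check would query the target
        # table and fail the task if the row count is implausible.
        print("row-count check passed")


    with DAG(
        dag_id="daily_sales_etl",                        # hypothetical DAG name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        transform = SparkSubmitOperator(
            task_id="spark_transform",
            application="/opt/jobs/transform_sales.py",  # hypothetical PySpark script
            conn_id="spark_default",
            application_args=["--run-date", "{{ ds }}"],
        )
        quality_check = PythonOperator(
            task_id="row_count_check",
            python_callable=check_row_count,
        )
        transform >> quality_check

A setup along these lines keeps the transformation logic in a version-controlled PySpark script (Git) while Airflow handles scheduling, retries, and dependencies, which is how the responsibilities above typically fit together.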

Requirements:

  • Minimum 2 years of professional experience as a programmer working with large datasets
  • Experience in at least 1 project involving the processing of large datasets
  • Experience in at least 1 project programming with Python
  • Experience in at least 1 project within an on-premise computing environment
  • Proven experience programming with Apache Spark
  • Proven experience programming with Python
  • Proven experience programming with Apache Airflow
  • Proven experience programming with SQL
  • Familiarity with Hadoop concepts
  • Proven experience in programming ELT/ETL processes
  • Understanding of CI/CD principles and practices
  • Proficiency in using a version control system (Git)
  • Strong self-organization skills and a goal-oriented approach
  • Excellent interpersonal and organizational skills, including planning
  • Strong communication, creativity, independence, professionalism, stress resistance, and inquisitiveness
  • Adaptability and flexibility, with an openness to continuous learning and development

What we offer:
  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses
  • Additional revenue sharing opportunities
  • Ongoing guidance from a dedicated Team Manager
  • Tailored technical mentoring
  • Dedicated team-building budget
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture

Additional Information:

Job Posted:
April 25, 2025

Employment Type:
Fulltime
Work Type:
Hybrid work

Similar Jobs for Regular Data Engineer

Data Engineering Manager

We are looking for a talented Data Engineering Manager with over 8 years of expe...
Location:
India, Bengaluru
Salary:
Not provided
6sense (https://6sense.com)
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • Minimum of 8 years of experience in data engineering or a related field, with at least 3 years in a managerial role
  • Strong expertise in big data technologies including Apache Spark, Hadoop, Hive, and related tools
  • Proficiency in programming languages such as Python and Java; experience with Scala is a plus
  • Proven track record of designing and implementing scalable data systems and pipelines
  • Excellent leadership, communication, and interpersonal skills
  • Strong problem-solving and analytical abilities and a proactive approach to addressing challenges
  • Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and data warehousing solutions (e.g., Snowflake, Redshift) is a plus.
Job Responsibility:
  • Lead, mentor, and manage a team of data engineers and tech leads
  • Build, hire and grow the team
  • Collaborate with product managers, data analysts and other stakeholders to understand data requirements, plan and deliver high quality data solutions
  • Manage project timelines, resources, and deliverables to meet business objectives by managing all cross-functional and cross-team collaboration and dependencies
  • Design, implement, and optimize scalable data pipelines and data processing systems using big data technologies like Apache Spark and Hadoop
  • Ensure data quality, consistency, and security across all data systems
  • Relentlessly pursue goals on data coverage, data freshness, data quality and key performance & SLA metrics
  • Have high technical competence and background with track record of individual technical accomplishments
  • Play the role of the architect for the team
  • Drive continuous improvement by identifying and implementing best practices, tools and processes for development and execution, and champion their adoption
What we offer:
  • Health coverage
  • Paid parental leave
  • Generous paid time-off and holidays
  • Quarterly self-care days off
  • Stock options
  • Equipment and support for remote or onsite work
  • Learning and development initiatives including LinkedIn Learning
  • Quarterly wellness education sessions
  • ERG-hosted events
Employment Type:
Fulltime

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location:
Poland
Salary:
Not provided
Lingaro (lingarogroup.com)
Expiration Date:
Until further notice
Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure and Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience with Azure cloud-based infrastructure, Databricks, and one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in programming languages such as SQL, Python, and PySpark is essential (R or Scala is nice to have)
  • Very good communication skills, including the ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues
Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Tableau Data Engineer

An RDS Tableau data engineer is responsible for designing, building and maintaini...
Location:
Portugal, Porto
Salary:
Not provided
Inetum (https://www.inetum.com)
Expiration Date:
Until further notice
Requirements:
  • Relevant professional experience as a data engineer is mandatory (3-5 years)
  • Knowledge of ETL (Extract, Transform and Load) data integration process is mandatory
  • Proficiency in SQL is mandatory
  • Proficiency in Python is mandatory
  • Proficiency in Tableau is mandatory
  • Fluent in English
Job Responsibility:
  • Designing, building and maintaining the data infrastructure that supports Tableau-based analytics and reporting
  • Integrating data from various sources, transforming it into usable formats and creating efficient data models for analysis
  • Developing and maintaining Tableau dashboards and reports, ensuring data accuracy and collaborating with stakeholders to deliver data-driven insights
  • Work closely with business stakeholders to understand their data needs and reporting requirements
  • Support the project manager on all Tableau Server and Tableau Desktop related subjects
  • Follow up with both design and production teams on the tool's implementation
  • Provide technical support to the development team
  • Configuration of the solution on non-production environments
  • Regular upgrade of Tableau Server version
  • Support the business lines on the handling of Tableau solutions and implementation of best practices
Employment Type:
Fulltime

Data Engineer

We are looking for a Data Engineer with a collaborative, “can-do” attitude who i...
Location:
India, Gurugram
Salary:
Not provided
Circle K (https://www.circlek.com)
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 3+ years of experience with setting up and operating data pipelines using Python or SQL
  • 3+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 3+ years of strong and extensive hands-on experience in Azure, preferably data heavy/analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 3+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
Employment Type:
Fulltime

Senior Data Engineer

We are looking for a Senior Data Engineer with a collaborative, “can-do” attitud...
Location:
India, Gurugram
Salary:
Not provided
Circle K (https://www.circlek.com)
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Efficient in ETL/ELT development using Azure cloud services and Snowflake, including testing and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
Employment Type:
Fulltime

Data Engineer

Join us as a Data Engineer responsible for supporting the successful delivery of...
Location:
India, Bengaluru
Salary:
Not provided
Barclays (barclays.co.uk)
Expiration Date:
Until further notice
Requirements:
  • Hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and Spark SQL (a generic sketch follows this list)
  • Hands-on experience in developing, testing, and maintaining applications on AWS Cloud
  • Strong command of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena)
  • Design and implement scalable and efficient data transformation/storage solutions using Snowflake
  • Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, and CSV
  • Experience using dbt (Data Build Tool) with Snowflake for ELT pipeline development
  • Experience writing advanced SQL and PL/SQL programs
  • Hands-on experience building reusable components using Snowflake and AWS tools/technologies
  • Should have worked on at least two major project implementations
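
As a generic illustration of the PySpark skills listed above (a sketch only, not Barclays code), the snippet below reads a hypothetical Parquet dataset, aggregates it with the DataFrame API, and runs a similar query through Spark SQL via a temporary view. The path, column names, and view name are invented for the example.

    # Minimal sketch: DataFrame API and Spark SQL over the same data.
    # The input path and schema are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingestion_sketch").getOrCreate()

    # Read a columnar source (Parquet here; JSON/CSV readers work the same way).
    orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

    # DataFrame API: filter and aggregate.
    daily_totals = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    # Spark SQL over a temporary view of the same DataFrame.
    orders.createOrReplaceTempView("orders")
    top_customers = spark.sql("""
        SELECT customer_id, SUM(amount) AS total_spent
        FROM orders
        GROUP BY customer_id
        ORDER BY total_spent DESC
        LIMIT 10
    """)

    daily_totals.show()
    top_customers.show()
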
Job Responsibility:
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification, documenting data sources, methodologies, and quality findings with recommendations for improvement
  • Designing and building data pipelines to automate data movement and processing
  • Apply advanced analytical techniques to large datasets to uncover trends and correlations, develop validated logical data models, and translate insights into actionable business recommendations that drive operational and process improvements, leveraging machine learning/AI
  • Through data-driven analysis, translate analytical findings into actionable business recommendations, identifying opportunities for operational and process improvements
  • Design and create interactive dashboards and visual reports using applicable tools and automate reporting processes for regular and ad-hoc stakeholder needs
What we offer:
  • Wellness rooms
  • On-site cafeterias
  • Fitness centers
  • Tech-equipped workstations
Employment Type:
Fulltime

Data Engineer

A career in Data & Analytics at Barclays is a hub for top talent, from beginners...
Location:
India, Pune
Salary:
Not provided
Barclays (barclays.co.uk)
Expiration Date:
Until further notice
Requirements:
  • Hands-on experience in PySpark and strong knowledge of DataFrames, RDDs, and Spark SQL
  • Hands-on experience in developing, testing, and maintaining applications on AWS Cloud
  • Strong command of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena)
  • Design and implement scalable and efficient data transformation/storage solutions using Snowflake
  • Experience in data ingestion to Snowflake for different storage formats such as Parquet, Iceberg, JSON, and CSV
  • Experience using dbt (Data Build Tool) with Snowflake for ELT pipeline development
  • Experience writing advanced SQL and PL/SQL programs
  • Hands-on experience building reusable components using Snowflake and AWS tools/technologies
  • Should have worked on at least two major project implementations
Job Responsibility:
  • Support the successful delivery of Location Strategy projects to plan, budget, agreed quality and governance standards
  • Spearhead the evolution of our digital landscape, driving innovation and excellence
  • Harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences
  • Investigation and analysis of data issues related to quality, lineage, controls, and authoritative source identification, documenting data sources, methodologies, and quality findings with recommendations for improvement
  • Designing and building data pipelines to automate data movement and processing
  • Apply advanced analytical techniques to large datasets to uncover trends and correlations, develop validated logical data models, and translate insights into actionable business recommendations that drive operational and process improvements, leveraging machine learning/AI
  • Through data-driven analysis, translate analytical findings into actionable business recommendations, identifying opportunities for operational and process improvements
  • Design and create interactive dashboards and visual reports using applicable tools and automate reporting processes for regular and ad-hoc stakeholder needs
What we offer:
  • Hybrid working
  • Structured approach to hybrid working with fixed ‘anchor’ days onsite
  • Supportive and inclusive culture and environment
  • Commitment to flexible working arrangements
  • International scale offering incredible variety, depth and breadth of experience
  • Chance to learn from a globally diverse mix of colleagues
  • Encouragement to embrace mobility and explore every part of operations
Employment Type:
Fulltime

Qlik Data Engineer

This position is NOT eligible for visa sponsorship. This role will specialize in...
Location:
United States, Easton
Salary:
Not provided
Victaulic (victaulic.com)
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Information Systems, or related technical field
  • 4+ years of experience in enterprise data integration with at least 2 years of hands-on Qlik or Talend experience
  • Strong understanding of change data capture (CDC) technologies and real-time data streaming concepts
  • Strong understanding of data lake and data warehouse strategies, and data modelling
  • Advanced SQL skills with expertise in database replication, synchronization, and performance tuning
  • Experience with enterprise ETL/ELT tools and data integration patterns
  • Proficiency in at least one programming language (Java, Python, or SQL scripting)
Job Responsibility:
  • Develop and maintain ETL/ELT data pipelines leveraging Qlik Data Integration for data warehouse generation in bronze, silver, gold layers
  • Build consumer facing datamarts, views, and push-down calculations to enable improved analytics by BI team and Citizen Developers
  • Implement enterprise data integration patterns supporting batch, real-time, and hybrid processing requirements
  • Coordinate execution of and monitor pipelines to ensure timely reload of EDW
  • Configure and manage Qlik Data Integration components including pipeline projects, lineage, data catalog, data quality, and data marketplace
  • Implement data quality rules and monitoring using Qlik and Talend tools
  • Manage the Qlik tenant, security, and access, and manage the Data Movement Gateway
  • Monitor and optimize data replication performance, latency, and throughput across all integration points
  • Implement comprehensive logging, alerting, and performance monitoring
  • Conduct regular performance audits and capacity planning for integration infrastructure