Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are seeking an experienced Senior Data Engineer to join our high-performing data team in Dublin. This is an exciting opportunity for a contractor who thrives in complex, large-scale environments and enjoys working with modern data platforms. In this role, you will design and optimize critical data pipelines across the Hadoop and Spark ecosystem, work extensively with Snowflake and Databricks, and contribute to the transformation of our big-data environment into a more scalable, cloud-ready architecture.
Job Responsibilities:
Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
Integrate metadata management, lineage tracking, and governance standards across the platform
Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable
Work with cloud services such as AWS S3, Glue, Lambda, Athena, EMR, Redshift, or equivalent platforms
Support the adoption and expansion of the Databricks Data & AI Platform, contributing to Delta Lake implementation, job scheduling, and ML-ready pipelines
Participate in cloud modernization efforts and architectural redesigns focused on performance and cost efficiency
Partner closely with product, engineering, and data science teams to ensure quality and performance across the data lifecycle
Participate in agile ceremonies and improve sprint planning with data-driven insights
Mentor junior engineers, promote best practices, and help shape the long-term technical roadmap
Contribute to incident response, troubleshooting, and root-cause analysis for production issues.
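The validation and testing responsibilities above can be illustrated with a minimal sketch in plain Python. The function and field names below (`validate_batch`, `required_fields`) are hypothetical and only show the shape of an automated data-quality check before a load step; a production framework would run such checks inside the pipeline orchestrator.

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    passed: bool
    issues: list = field(default_factory=list)

def validate_batch(rows, required_fields, min_rows=1):
    """Check a batch of records for completeness before loading.

    Illustrative only: collects every issue found rather than
    failing fast, so a pipeline can log all problems at once.
    """
    issues = []
    if len(rows) < min_rows:
        issues.append(f"expected at least {min_rows} rows, got {len(rows)}")
    for i, row in enumerate(rows):
        for f in required_fields:
            if row.get(f) in (None, ""):
                issues.append(f"row {i}: missing required field '{f}'")
    return ValidationResult(passed=not issues, issues=issues)
```

A pipeline stage would call `validate_batch` on each extracted batch and route failing batches to a quarantine location instead of the warehouse.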
Requirements:
7+ years of experience as a Data Engineer working with distributed data systems
4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
Experience with Oozie, Airflow, or similar orchestration tools
Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
Experience with Databricks and Delta Lake (or equivalent big-data platforms)
Strong programming background in Python, Scala, or Java
Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
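To give a concrete sense of the unit-testing skills the role asks for, here is a small sketch in plain Python of a testable ETL transform. Everything here is hypothetical (the `transform_orders` function and its fields are invented for illustration); the same pattern applies to PySpark or dbt transformations, where the transform logic is kept pure so it can be asserted against fixed input.

```python
from datetime import date

def transform_orders(raw_orders):
    """Hypothetical ETL transform: drop incomplete records, parse
    ISO dates, and derive a revenue column."""
    out = []
    for o in raw_orders:
        if o.get("quantity") is None or o.get("unit_price") is None:
            continue  # incomplete record: exclude from the clean output
        out.append({
            "order_id": o["order_id"],
            "order_date": date.fromisoformat(o["order_date"]),
            "revenue": round(o["quantity"] * o["unit_price"], 2),
        })
    return out

def test_transform_orders():
    raw = [
        {"order_id": 1, "order_date": "2024-01-15",
         "quantity": 3, "unit_price": 9.99},
        {"order_id": 2, "order_date": "2024-01-16",
         "quantity": None, "unit_price": 5.0},  # should be dropped
    ]
    result = transform_orders(raw)
    assert len(result) == 1
    assert result[0]["revenue"] == 29.97
    assert result[0]["order_date"] == date(2024, 1, 15)
```

Keeping the transform free of I/O is what makes this style of unit test possible; the same function can then be wrapped by a Spark UDF or an Airflow task.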