This Senior Data Engineer role involves designing, developing, and implementing data engineering solutions using modern big data and cloud technologies. The engineer collaborates with product owners, data scientists, analysts, and technologists to deliver scalable, high-performance data products in an agile environment.
Job Responsibilities:
Design and develop scalable big data solutions using platforms like Hadoop, Snowflake, or other modern data ecosystems
Collaborate with domain experts, product managers, analysts, and data scientists to build robust data pipelines
Lead migration of legacy workloads to cloud platforms (AWS, Azure, or GCP)
Develop and implement cloud-native solutions for data processing and storage
Partner with data scientists to build data pipelines from heterogeneous sources
Enable advanced analytics and machine learning workflows
Implement CI/CD pipelines to automate data engineering workflows
Research and evaluate open-source technologies
Mentor team members on big data and cloud technologies
Define and enforce coding standards and reusable components
Convert SAS-based pipelines to modern frameworks and languages such as PySpark, Scala, or Java (a minimal conversion sketch follows this list)
Optimize big data applications for performance and scalability
Analyze evolving business requirements and recommend enhancements
Ensure compliance with applicable laws, regulations, and organizational policies
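To illustrate the SAS-to-Spark conversion work mentioned above, here is a minimal, hedged sketch in PySpark. The dataset, column names, paths, and aggregation are hypothetical stand-ins, not taken from the posting; the intent is only to show the shape of the task: a SAS DATA step filter plus a PROC MEANS style summary expressed as a Spark DataFrame transformation.

```python
# Hypothetical sketch: a SAS-style filter + grouped summary rewritten in PySpark.
# Table, column, and path names are illustrative assumptions, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas_to_pyspark_sketch").getOrCreate()

# Roughly equivalent to a SAS DATA step that filters and derives a column,
# followed by PROC MEANS with CLASS region.
claims = spark.read.parquet("s3://example-bucket/claims/")  # hypothetical source

summary = (
    claims
    .filter(F.col("status") == "APPROVED")                    # DATA step WHERE clause
    .withColumn("net_amount", F.col("amount") - F.col("discount"))
    .groupBy("region")                                        # PROC MEANS CLASS variable
    .agg(
        F.count("*").alias("n_claims"),
        F.avg("net_amount").alias("avg_net_amount"),
    )
)

summary.write.mode("overwrite").parquet("s3://example-bucket/claims_summary/")
```

The practical benefit of such a rewrite is that the same logic then runs distributed across a cluster and can be versioned, tested, and deployed through the CI/CD pipelines also listed among the responsibilities.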
Requirements:
8+ years of experience with Hadoop (Cloudera) and big data technologies
Advanced knowledge of Hadoop ecosystem including HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, and Solr
Proficiency in Java, Python, or Scala
Hands-on experience with Spark programming (PySpark, Scala, or Java)
Familiarity with Apache Beam
Experience with cloud platforms like AWS, Azure, or GCP
Expertise in designing and developing data pipelines for ingestion, transformation, and processing (see the streaming sketch after this list)
Experience with Snowflake or Delta Lake
Hands-on experience with containerization tools like Docker and Kubernetes
Proficiency in DevOps practices including source control, CI/CD, and automated deployments
Experience with Python libraries for machine learning and data science workflows
Strong knowledge of data structures, algorithms, distributed storage, and compute systems
1+ year of SAS experience preferred
1+ year of Hadoop administration experience preferred
Strong problem-solving and analytical skills
Excellent interpersonal and teamwork abilities
Proven leadership experience including mentoring and managing a team
Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience)
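To make the Spark, Kafka, and pipeline requirements above concrete, here is a minimal, hedged PySpark Structured Streaming sketch. The broker address, topic name, event schema, and output paths are hypothetical assumptions for illustration, and running it would require the Spark Kafka connector (org.apache.spark:spark-sql-kafka-0-10) on the classpath; it shows the ingestion, transformation, and processing shape the requirements describe.

```python
# Hypothetical sketch: ingest JSON events from Kafka, transform, and land them.
# Broker, topic, schema, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("kafka_ingest_sketch").getOrCreate()

# Assumed schema for events on the hypothetical "events" topic.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

# Kafka delivers raw bytes; parse the JSON payload into typed columns.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
       .filter(F.col("event_type") == "purchase")
)

# Write the transformed stream to Parquet; the checkpoint gives fault tolerance.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3://example-bucket/purchases/")          # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/chk/")  # required for streaming
    .outputMode("append")
    .start()
)
query.awaitTermination()
```

The same pattern extends naturally to the other sinks named in the requirements, such as Snowflake or Delta Lake, by swapping the write target.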