We are seeking an experienced and passionate Senior Data Engineer to join our dynamic data engineering team. The ideal candidate will have deep expertise in building scalable data pipelines and distributed data processing systems using Python, Apache Spark, Kafka, and Snowflake/Databricks. This role involves designing, developing, and optimizing high-performance data solutions that enable advanced analytics and business intelligence across the organization.
Job Responsibilities:
Design, build, and maintain robust, scalable data pipelines using Python and Apache Spark
Integrate and process large volumes of data from diverse sources using Kafka and other streaming technologies (a brief illustrative sketch follows this list)
Develop and optimize ETL/ELT workflows for structured and unstructured data
Work with Snowflake and Databricks to enable efficient data storage, transformation, and analysis
Implement data quality, validation, and monitoring frameworks to ensure accuracy and reliability
Collaborate with data scientists, analysts, and business stakeholders to translate requirements into scalable data solutions
Optimize data workflows for performance, scalability, and cost efficiency in cloud environments (AWS/Azure)
Stay current with emerging technologies in data engineering and contribute to continuous improvement initiatives
Ensure compliance with data governance, security, and privacy standards
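For a concrete flavor of the pipeline work above, here is a minimal, illustrative PySpark sketch of a streaming job that consumes JSON events from Kafka and appends them to a Delta table. The topic, broker address, schema, and paths are hypothetical placeholders, not a description of our actual stack:

```python
# Minimal illustrative sketch (not production code): stream JSON events
# from a Kafka topic into a Delta table with PySpark Structured Streaming.
# Requires the spark-sql-kafka and Delta Lake packages on the classpath.
# Topic, broker, schema, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Assumed shape of the incoming events.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream; Kafka delivers the payload as bytes in `value`.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Parse the JSON payload into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
       .select("e.*")
)

# Append to a Delta table; the checkpoint enables restart without data loss.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/checkpoints/orders")
    .outputMode("append")
    .start("/tables/orders")
)
query.awaitTermination()
```

In practice a job like this would also handle schema evolution, late-arriving data, and monitoring hooks, which the sketch omits for brevity.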
Requirements:
7–8 years of hands-on experience in data engineering or a related technical field
Strong programming proficiency in Python for data manipulation, automation, and pipeline development
Proven experience with Apache Spark or other distributed data processing frameworks (e.g., Hadoop)
Hands-on experience with Snowflake and/or Databricks for cloud-based data warehousing and analytics (an illustrative Snowflake loading sketch follows this list)
Experience with Kafka or similar message queues/streaming platforms
Familiarity with cloud data platforms such as AWS or Azure and their associated data services (e.g., S3, Glue, Data Factory)
Solid understanding of data warehousing concepts, ETL design patterns, and data modeling
Strong problem-solving, analytical, and debugging skills
Excellent communication, collaboration, and stakeholder management skills
Ability to work effectively both independently and in geographically distributed teams
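As a small illustration of the Snowflake experience called for above, a Python loading step using the snowflake-connector-python package might look like the following sketch; the account, credentials, stage, and table names are hypothetical:

```python
# Minimal illustrative sketch: load staged Parquet files into a Snowflake
# table with COPY INTO, via the snowflake-connector-python package.
# Account, credentials, stage, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",          # in practice, use a secrets manager
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO ingests files from a named stage into the target table.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @orders_stage
        FILE_FORMAT = (TYPE = 'PARQUET')
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```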
Nice to have:
Experience with workflow orchestration tools such as Airflow or Prefect (an illustrative Airflow sketch follows this list)
Knowledge of CI/CD pipelines for data engineering and DevOps practices
Familiarity with containerization (Docker, Kubernetes)
Exposure to real-time analytics and data lakehouse architectures
Experience in financial services, e-commerce, or large-scale enterprise data ecosystems
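To illustrate the orchestration experience mentioned above, a minimal Airflow DAG chaining an extract step to a load step might look like this; Airflow 2.4+ syntax is assumed, and the DAG id, schedule, and task bodies are placeholders:

```python
# Minimal illustrative sketch: a daily Airflow DAG chaining extract -> load.
# Assumes Airflow 2.4+ (the `schedule` argument); names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull data from a source system.
    print("extracting...")

def load():
    # Placeholder: load transformed data into the warehouse.
    print("loading...")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract must finish before load starts
```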