We are looking for an experienced Data Engineer with deep expertise in Databricks to join our advanced analytics and data engineering team. The ideal candidate will play a key role in designing, building, and optimizing large-scale data solutions on the Databricks platform, supporting business intelligence, advanced analytics, and machine learning initiatives. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performance data pipelines and architectures.
Job Responsibilities:
Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow)
Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
Develop and maintain documentation, data models, and technical standards
Monitor, troubleshoot, and resolve issues in production data pipelines and environments
Stay current with emerging trends and technologies in data engineering and the Databricks ecosystem
Requirements:
Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
Strong proficiency in Python and/or Scala for data processing
Deep understanding of distributed data processing, data warehousing, and ETL concepts
Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
Experience with data governance, security, and compliance in cloud environments
Excellent problem-solving, communication, and leadership skills
English: Upper Intermediate level or higher
What we offer:
Technical and non-technical training for professional and personal growth
Internal conferences and meetups to learn from industry experts
Support and mentorship from an experienced colleague to help you grow and develop professionally
Internal startup incubator
Health insurance
English courses
Sports activities to promote a healthy lifestyle
Flexible work options, including remote and hybrid opportunities
Referral program for bringing in new talent
Work anniversary program and additional vacation days