We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks to build scalable data solutions that power next-gen AI and machine learning. Join our fast-growing team to work on impactful projects, collaborate with top talent, and drive innovation at scale.
Job Responsibilities:
Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
Translate complex functional and technical requirements into detailed design proposals and implement them
Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
Identify, troubleshoot, and resolve complex data-related issues
Champion best practices in data management, ensuring the cleanliness, integrity, and accessibility of our data
Optimize and fine-tune data queries and processes for performance
Evaluate and advise on technological components, such as software, hardware, and networking capabilities, for database management systems and infrastructure
Stay informed on the latest industry trends and technologies to ensure our data infrastructure is modern and robust.
Requirements:
5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, and Amazon RDS, as well as orchestration tools such as Apache Airflow
Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
Experience with Python and/or Java
Deep understanding of data structures, data modeling, and software architecture
Strong problem-solving skills and attention to detail
Self-motivated and able to work independently, with excellent organizational and multitasking skills
Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Nice to have:
Experience with AI and machine learning technologies is highly desirable.