As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and maintaining technical excellence within our data engineering team. Your deep expertise in data architecture, ETL processes, and data modeling will be instrumental in optimizing data workflows for efficiency, scalability, and reliability. You'll collaborate closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with both technical teams and stakeholders will be essential as you drive data-driven initiatives and ensure their successful implementation.
Job Responsibilities:
Understand Requirements and Challenges: Collaborate with stakeholders to deeply understand their data requirements and challenges
Implement with a DataOps Mindset: Embrace a DataOps mindset and utilize modern data engineering tools and frameworks, such as Apache Airflow, Apache Spark, or similar, to build scalable and efficient data pipelines and architectures
Lead Data Engineering Projects: Take the lead in managing and executing data engineering projects, providing technical guidance and oversight to ensure successful project delivery
Mentor Data Engineers: Share your extensive knowledge and experience in data engineering with junior team members, guiding and mentoring them to foster their growth and development in the field
Promote Knowledge Sharing: Contribute to our knowledge base by writing technical blogs and articles, promoting best practices in data engineering, and contributing to a culture of continuous learning and innovation
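The pipeline work described in these responsibilities boils down to chaining extract, transform, and load steps. Below is a minimal sketch of that pattern in plain Python (all data and function names are hypothetical, for illustration only); orchestrators such as Apache Airflow then wrap steps like these in tasks with scheduling, retries, and dependency management:

```python
# Minimal ETL sketch (hypothetical data, standard library only).
# In production, an orchestrator such as Apache Airflow would run each
# step as a task with scheduling, retries, and dependency tracking.

def extract():
    # Stand-in for reading from a source system (API, database, file).
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.2"}]

def transform(rows):
    # Normalize types and derive fields.
    return [{"id": r["id"], "amount": float(r["amount"])} for r in rows]

def load(rows, target):
    # Stand-in for writing to a warehouse table or data lake.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

The same three-step shape scales up when each function is replaced by a Spark job or warehouse query, which is where the tools named above come in.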
Requirements:
10+ years of experience in data engineering, data architecture, or related fields
Extensive experience in writing and testing Java and/or Python code
Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, dbt, or AWS Glue
Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask
Proficiency with Spark and Databricks is highly desirable
Proven track record of leading complex data engineering projects, including designing and implementing scalable data solutions
Hands-on experience with ETL processes, data warehousing, and data modeling tools
In-depth knowledge of data integration tools and best practices
Strong understanding of cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery)
Strong strategic and analytical skills
Proven proficiency in implementing and optimizing data pipelines using modern tools and frameworks, including Databricks for data processing and Delta Lake for managing large-scale data lakes
Exceptional communication and interpersonal skills
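Much of Delta Lake's value for managing large-scale data lakes, as called for above, comes from transactional upserts (its MERGE operation). As a rough illustration of the merge semantics only (a hypothetical in-memory "table" keyed by `id`, not Delta Lake's actual API), the matched/not-matched logic looks like:

```python
# Sketch of upsert (merge) semantics, the behavior Delta Lake provides
# transactionally at scale. Table and records here are hypothetical.

def upsert(table, updates, key="id"):
    # Matched rows are updated in place; unmatched rows are inserted.
    indexed = {row[key]: row for row in table}
    for row in updates:
        indexed[row[key]] = {**indexed.get(row[key], {}), **row}
    return list(indexed.values())

current = [{"id": 1, "status": "active"}, {"id": 2, "status": "active"}]
changes = [{"id": 2, "status": "churned"}, {"id": 3, "status": "active"}]
merged = upsert(current, changes)
```

In a real lakehouse, the same operation runs over versioned Parquet files with ACID guarantees, which is what makes incremental pipelines on large tables reliable.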