We are seeking a highly skilled AWS Big Data Architect / Senior Data Engineer to design, develop, and deliver scalable Big Data Warehouse solutions. This is a hands-on role suited for someone who is passionate about technology, thrives in a collaborative environment, and can work effectively with both technical and non-technical stakeholders. The ideal candidate excels in fast-paced settings and is committed to producing high-quality, impactful results. This role offers the opportunity to collaborate with engineering teams across the enterprise and influence broader data and technology strategies.
Job Responsibilities:
Design and develop scalable Big Data Warehouse solutions across the full data supply chain
Build and implement metadata management solutions
Create and maintain technical documentation, user documentation, data models, data dictionaries, glossaries, process flows, and architecture diagrams
Enhance and expand the enterprise Data Lake environment
Solve complex data integration challenges across multiple systems
Design and execute strategies for real-time data analysis and decision-making
Collaborate with business partners, analysts, developers, architects, and engineers to support ongoing data quality initiatives
Work closely with Data Science teams to improve actionable insights
Continuously expand knowledge of new tools, platforms, and technologies
Requirements:
Strong background in data management, data access, Big Data, Data Marts, and Data Warehousing
Proficiency in SQL, Spark SQL, and DataFrames
Experience with modern data warehousing concepts using technologies such as Redshift, Spark, Hadoop, and web services
Experience in data architecture and data assembly
Knowledge of Data Governance and Data Security practices
Hands-on experience with data integration tools (Talend preferred; Cascading a plus)
Experience with scripting languages for data manipulation
Experience with Business Intelligence tools, MDM, XML, and SOA/Web Services
Exposure to Data Science technologies and toolsets
Bachelor’s or Master’s degree in Computer Science, Data Processing, or equivalent work experience
Strong background in Data Warehousing or related analytical environments
Proficiency in Java programming and building frameworks
Hands-on experience with Hadoop and Spark
Experience with Amazon EMR/EC2 or equivalent cloud technologies
Minimum 2 years of experience with Python
Experience with Bitbucket and solid understanding of Git fundamentals
Familiarity with Linux environments
Experience with Jenkins and CI/CD pipelines
Strong understanding of core computer science fundamentals
Experience with AWS services such as Aurora, Athena, EMR, Redshift, and S3
Experience with Postgres and MySQL databases
Excellent organizational, communication, and project management skills