The Data Engineer (AWS) role involves designing and implementing data solutions tailored to customer needs, utilizing AWS and programming languages like Python, Java, and Scala. Candidates should have a minimum of 4 years of experience in data engineering or analytics, with a strong understanding of data integration technologies. A degree is preferred, and the role requires collaboration across various technical stacks.
Job Responsibility:
Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack
Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure
Write production-quality code in languages such as Python, Java, and Scala to move solutions into production efficiently, prioritizing performance, security, scalability, and robust data integrations
Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS
Develop and deliver detailed presentations to effectively communicate complex technical concepts
Produce comprehensive solution documentation, including sequence diagrams, class hierarchies, and logical system views
Adhere to Agile practices throughout the solution development process
Design, build, and deploy databases and data stores to support organizational requirements
Requirements:
Minimum 4 years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects specifically with AWS
Minimum 4 years of experience with at least one of the following: PySpark, Scala, or data lakehouse solutions on AWS
Minimum 2 years of experience leading a team on data-related projects to develop end-to-end technical solutions
Ability to travel at least 25%
Undergraduate or Graduate degree preferred
Nice to have:
Production experience in other core data platforms, such as Snowflake, Databricks, Azure, GCP, or Hadoop
Hands-on knowledge of cloud and distributed data storage, including HDFS, S3, ADLS, GCS, Kudu, Elasticsearch/Solr, Cassandra, or other NoSQL storage systems
Strong understanding of data integration technologies, including Spark, Kafka, eventing/streaming, StreamSets, Apache NiFi, AWS Database Migration Service, Azure Data Factory, and Google Dataproc
Professional written and verbal communication skills