Amazon S3 Engineer


Realign

Location:
United States, Charlotte, NC / Plano, TX

Contract Type:
Not provided

Salary:
125000.00 USD / Year

Job Description:

Role: Amazon S3 Engineer. FTE only.

Job Responsibility:

  • Design, develop, and execute Data Pipelines and test cases to ensure data integrity and quality
  • Develop, implement, and optimize data pipelines that integrate Amazon S3 for scalable data storage, retrieval, and processing within ETL workflows
  • Leverage Amazon S3 for data storage, retrieval, and management within ETL workflows, including the ability to write scripts for data transfer between S3 and other systems
  • Utilize Amazon S3's advanced features such as versioning, lifecycle policies, access controls, and server-side encryption to ensure secure and efficient data management
  • Write, maintain, and troubleshoot scripts or code (using PySpark, Shell, or similar languages) to automate data movement between Amazon S3 and other platforms, ensuring high performance and reliability
  • Collaborate with cross-functional teams to troubleshoot and resolve data-related issues, utilizing Amazon S3 features such as versioning, lifecycle policies, and access management
  • Document ETL processes, maintain technical documentation, and ensure best practices are followed for data stored in Amazon S3 environments
  • Validate HiveQL, HDFS file structures, and data processing within the Hadoop cluster
  • Knowledge of metadata-driven ETL processes and batch/job frameworks
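The scripted S3 data movement and lifecycle-policy duties above are routinely automated in Python. A minimal sketch, assuming a hypothetical bucket, prefix, and retention periods; the returned dict mirrors the structure boto3's `put_bucket_lifecycle_configuration` accepts:

```python
# Minimal sketch of a lifecycle policy for an ETL landing prefix.
# The prefix and day counts are hypothetical.

def build_lifecycle_config(prefix: str, ia_days: int = 30, expire_days: int = 365) -> dict:
    """Tier objects under `prefix` to STANDARD_IA after `ia_days`,
    then expire them after `expire_days`."""
    return {
        "Rules": [
            {
                "ID": f"tiering-{prefix.rstrip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [{"Days": ia_days, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": expire_days},
            }
        ]
    }

# Applying it requires boto3 and AWS credentials, e.g.:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-etl-bucket",  # hypothetical bucket name
#       LifecycleConfiguration=build_lifecycle_config("raw-zone/"),
#   )
```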

Requirements:

  • Amazon Data Engineer
  • AWS Data Engineer
  • Amazon S3
  • Shell Scripting
  • Autosys
  • Minimum 10 years' experience
  • PySpark
  • SQL
  • Oracle
  • Banking knowledge
  • Payments knowledge preferred
  • Cloudera Platform
  • Cloud Storage
  • AWS
  • Data Warehousing
  • Data Transformation
  • ETL/ELT
  • Data Quality

Nice to have:

Familiarity with Hadoop or Spark

Additional Information:

Job Posted:
March 21, 2026

Employment Type:
Full-time

Similar Jobs for Amazon S3 Engineer

Senior Data Architect

We are seeking a highly experienced Senior Data Architect with 12+ years of expe...
Location:
United Arab Emirates, Dubai

Salary:
Not provided

NorthBay

Expiration Date:
Until further notice

Requirements:
  • 12+ years of experience in Data Engineering and Data Architecture
  • Proven experience working as a Data Architect on large-scale AWS platforms
  • Strong experience designing enterprise data lakes and data warehouses
  • Hands-on experience with batch data processing and orchestration frameworks
  • Excellent communication and stakeholder management skills
  • Ability to work onsite in Dubai, UAE
  • AWS Glue (ETL, Data Catalog)
  • Amazon EMR (Batch Processing)
  • AWS Lambda (Serverless Data Processing)
  • Amazon MWAA (Apache Airflow)
Job Responsibility:
  • Design and own end-to-end AWS data architecture for enterprise platforms
  • Define data architecture standards, best practices, and reference models
  • Architect batch and event-driven data pipelines using AWS native services
  • Lead data ingestion, transformation, and orchestration workflows
  • Design and implement solutions using AWS Glue, EMR, Lambda, and MWAA (Airflow)
  • Architect data lakes and data warehouses using Amazon S3 and Amazon Redshift
  • Design NoSQL data solutions using Amazon DynamoDB
  • Implement data governance, metadata management, and access control using AWS DataZone
  • Ensure monitoring, logging, and observability using Amazon CloudWatch
  • Partner with engineering, analytics, and business teams to translate requirements into scalable data solutions
Employment Type: Full-time

Data Architect

We are seeking a highly experienced Data Architect with 12+ years of experience ...
Location:
United Arab Emirates, Dubai

Salary:
Not provided

NorthBay

Expiration Date:
Until further notice

Requirements:
  • 12+ years of experience in Data Engineering and Data Architecture
  • Proven experience working as a Data Architect on large-scale AWS platforms
  • Strong experience designing enterprise data lakes and data warehouses
  • Hands-on experience with batch data processing and orchestration frameworks
  • Excellent communication and stakeholder management skills
  • Ability to work onsite in Dubai, UAE
  • AWS Glue (ETL, Data Catalog)
  • Amazon EMR (Batch Processing)
  • AWS Lambda (Serverless Data Processing)
  • Amazon MWAA (Apache Airflow)
Job Responsibility:
  • Design and own end-to-end AWS data architecture for enterprise platforms
  • Define data architecture standards, best practices, and reference models
  • Architect batch and event-driven data pipelines using AWS native services
  • Lead data ingestion, transformation, and orchestration workflows
  • Design and implement solutions using AWS Glue, EMR, Lambda, and MWAA (Airflow)
  • Architect data lakes and data warehouses using Amazon S3 and Amazon Redshift
  • Design NoSQL data solutions using Amazon DynamoDB
  • Implement data governance, metadata management, and access control using AWS DataZone
  • Ensure monitoring, logging, and observability using Amazon CloudWatch
  • Partner with engineering, analytics, and business teams to translate requirements into scalable data solutions
Employment Type: Full-time

Data Architect

We are seeking a highly experienced Data Architect with 14–16+ years of expertis...
Location:
United Arab Emirates, Dubai

Salary:
Not provided

NorthBay

Expiration Date:
Until further notice

Requirements:
  • 14–16+ years of experience in Data Engineering and Data Architecture
  • Proven experience working as a Data Architect on large-scale AWS platforms
  • Strong experience designing enterprise data lakes and data warehouses
  • Hands-on experience with batch data processing and orchestration frameworks
  • Excellent communication and stakeholder management skills
  • Ability and willingness to work onsite in Dubai, UAE (relocation required if outside UAE)
  • AWS Glue (ETL, Data Catalog)
  • Amazon EMR (Batch Processing)
  • AWS Lambda (Serverless Data Processing)
  • Amazon MWAA (Apache Airflow)
Job Responsibility:
  • Design and own end-to-end AWS data architecture for enterprise platforms
  • Define data architecture standards, best practices, and reference models
  • Architect batch and event-driven data pipelines using AWS native services
  • Lead data ingestion, transformation, and orchestration workflows
  • Design and implement solutions using AWS Glue, EMR, Lambda, and MWAA (Airflow)
  • Architect data lakes and data warehouses using Amazon S3 and Amazon Redshift
  • Design NoSQL data solutions using Amazon DynamoDB
  • Implement data governance, metadata management, and access control using AWS DataZone
  • Ensure monitoring, logging, and observability using Amazon CloudWatch
  • Partner with engineering, analytics, and business teams to translate requirements into scalable data solutions
Employment Type: Full-time

Data Engineer (AWS)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location:
Portugal, Lisboa

Salary:
Not provided

Fyld

Expiration Date:
Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or related
  • Relevant certifications in AWS, such as AWS Certified Solutions Architect, AWS Certified Developer, or AWS Certified Data Analytics
  • Hands-on experience with AWS services, especially those related to Big Data and data analytics, such as Amazon Redshift, Amazon EMR, Amazon Athena, Amazon Kinesis, AWS Glue, among others
  • Familiarity with data storage and processing services on AWS, including Amazon S3, Amazon RDS, Amazon DynamoDB, and AWS Lambda
  • Proficiency in programming languages such as Python, Scala, or Java for developing data pipelines and automation scripts
  • Knowledge of distributed data processing frameworks, such as Apache Spark or Apache Flink
  • Experience in data modeling, cleansing, transformation, and preparation for analysis
  • Ability to work with different types of data, including structured, unstructured, and semi-structured data
  • Familiarity with data architecture concepts such as data lakes, data warehouses, and data pipelines (not mandatory)
  • Knowledge of security and compliance practices on AWS, including access control, data encryption, and regulatory compliance
Employment Type: Full-time

Data & AI Engineer

We are looking for a Data & AI Engineer, to design modern data platforms and con...
Location:
United States, Cambridge

Salary:
58.00 - 79.00 USD / Hour

Amaris Consulting

Expiration Date:
Until further notice

Requirements:
  • 4+ years of experience in data engineering or cloud data platforms
  • Strong hands-on experience with Amazon S3 and Amazon Athena
  • Excellent SQL skills for analytical querying and optimization
  • Solid experience with AWS services: Glue, Lambda, IAM, CloudWatch
  • Experience provisioning cloud infrastructure and setting up CI/CD pipelines (GitHub Actions)
  • Strong knowledge of data formats (Parquet, ORC, JSON, CSV)
  • Programming experience in Python, PySpark, or similar
  • Experience designing and operating data lakes or analytics platforms
  • Good understanding of data lifecycle management
  • Familiarity with BI tools and analytics consumption patterns
Job Responsibility:
  • Design and implement scalable data lake architectures using Amazon S3
  • Build and optimize serverless analytics solutions with Amazon Athena
  • Define efficient data models, partitioning strategies, and metadata management
  • Develop robust ETL / ELT pipelines using AWS Glue, Lambda, and Step Functions
  • Ingest structured and semi-structured data from internal systems, APIs, and external sources
  • Ensure data quality, validation, and schema evolution across pipelines
  • Implement data governance with AWS Lake Formation and IAM
  • Manage encryption, access control, and auditability for sensitive datasets
  • Monitor and optimize performance and cloud costs using CloudWatch and AWS cost tools
  • Enable self-service data access for analytics, BI, and data science teams
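The partitioning-strategy item above is commonly realized as Hive-style key prefixes in S3, which let Athena prune partitions instead of scanning a whole prefix. A sketch under assumed names (the table and file names are hypothetical):

```python
# Illustrative only: Hive-style date partitioning for an S3 data lake
# queried by Athena. Table name and file name below are hypothetical.
from datetime import date

def partition_key(table: str, event_date: date, filename: str) -> str:
    """Build an S3 object key such as 'orders/dt=2026-03-21/part-000.parquet'
    so Athena can prune on the `dt` partition column."""
    return f"{table}/dt={event_date.isoformat()}/{filename}"
```

An Athena table registered over the `s3://bucket/orders/` prefix with `PARTITIONED BY (dt string)` can then restrict scans with a predicate like `WHERE dt = '2026-03-21'`.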
What we offer:
  • An international community bringing together more than 110 different nationalities
  • An environment where trust is central: 70% of our leaders started their careers at the entry level
  • A strong training system with our internal Academy and more than 250 modules available
  • A dynamic work environment that frequently comes together for internal events (afterworks, team buildings, etc.)
Employment Type: Full-time

Senior Data Engineer

Location:
Not provided

Salary:
Not provided

Kloud9

Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience in developing scalable Big Data applications or solutions on distributed platforms
  • 4+ years of experience working with distributed technology tools, including Spark, Python, Scala
  • Working knowledge of data warehousing, data modeling, governance, and data architecture
  • Proficient in working on Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience working in Agile and Scrum development process
  • 3+ years of experience in Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience architecting data product in Streaming, Serverless and Microservices Architecture and platform
  • 3+ years of experience working with Data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta)
  • Experience with creating/configuring Jenkins pipeline for smooth CI/CD process for Managed Spark jobs, build Docker images, etc.
  • Working knowledge of reporting and analytical tools such as Tableau, Amazon QuickSight, etc.
Job Responsibility:
  • Design and develop scalable Big Data applications on distributed platforms to support large-scale data processing and analytics needs
  • Partner with others in solving complex problems by taking a broad perspective to identify innovative solutions
  • Build positive relationships across Product and Engineering
  • Influence and communicate effectively, both verbally and written, with team members and business stakeholders
  • Quickly pick up new programming languages, technologies, and frameworks
  • Collaborate effectively in a high-speed, results-driven work environment to meet project deadlines and business goals
  • Utilize Data Warehousing tools such as SQL databases, Presto, and Snowflake for efficient data storage, querying, and analysis
  • Demonstrate experience in learning new technologies and skills.
What we offer:
  • Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields.

Senior Data Engineer

We are looking for a highly skilled Senior Data Engineer to join our team on a l...
Location:
United States, Dallas

Salary:
Not provided

Robert Half

Expiration Date:
Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related discipline
  • At least 7 years of experience in data engineering
  • Strong background in designing and managing data pipelines
  • Proficiency in tools such as Apache Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and Amazon S3
  • Expertise in programming languages like Python, Scala, or Java for data processing and automation
  • Strong knowledge of both relational and NoSQL databases
  • Experience with Kubernetes-based data engineering and hybrid cloud environments
  • Familiarity with data modeling principles, governance frameworks, and quality assurance processes
  • Excellent problem-solving, analytical, and communication skills
Job Responsibility:
  • Design and implement robust data pipelines and architectures to support data-driven decision-making
  • Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks
  • Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink
  • Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3
  • Establish and enforce data governance, quality, and security protocols
  • Manage complex workflows and processes across hybrid and multi-cloud environments
  • Work with diverse data formats, including Parquet and Avro
  • Troubleshoot and fine-tune distributed data systems
  • Mentor and guide engineers at the beginning of their careers
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Free online training
Employment Type: Full-time