
AWS Data Engineer

NorthBay

Location:
United Arab Emirates, Dubai

Category:
IT - Software Development

Contract Type:
Not provided

Salary:

Not provided

Job Description:

We are looking for a skilled AWS Data Engineer to build and support cloud-based data pipelines and analytics platforms on AWS. The role focuses on hands-on development, optimization, and operational support of data processing systems.

Job Responsibility:

  • Develop and maintain scalable batch data pipelines on AWS
  • Implement ETL/ELT jobs using AWS Glue and Spark on EMR
  • Build and manage workflow orchestration using Apache Airflow (MWAA)
  • Develop serverless data processing functions using AWS Lambda
  • Work with Amazon Redshift for data warehousing and analytics
  • Implement data storage and retrieval using Amazon DynamoDB
  • Use AWS DataZone for data discovery, governance, and access control
  • Monitor data pipelines and system health using Amazon CloudWatch
  • Troubleshoot data processing failures and performance issues
  • Collaborate with analytics, reporting, and business teams
  • Follow best practices for security, scalability, and cost optimization
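
The Glue/EMR and Lambda duties above come down to batch extract-transform-load steps. As a rough local sketch (no AWS dependencies; the record fields and rejection rules are hypothetical), the transform stage of such a pipeline might look like:

```python
from datetime import datetime, timezone

def transform_batch(raw_records):
    """Cleanse a batch of raw event records: drop rows missing required
    keys, normalize epoch timestamps to UTC ISO-8601, and cast amounts
    to float. A simplified stand-in for the transform step of a
    Glue/EMR ETL job."""
    cleaned = []
    for rec in raw_records:
        if not rec.get("event_id") or "amount" not in rec:
            continue  # reject incomplete rows, as a pipeline filter would
        ts = datetime.fromtimestamp(int(rec["ts_epoch"]), tz=timezone.utc)
        cleaned.append({
            "event_id": rec["event_id"],
            "ts_utc": ts.isoformat(),
            "amount": float(rec["amount"]),
        })
    return cleaned

raw = [
    {"event_id": "e1", "ts_epoch": "1700000000", "amount": "12.50"},
    {"event_id": "",   "ts_epoch": "1700000001", "amount": "3.00"},  # dropped
    {"event_id": "e3", "ts_epoch": "1700000002", "amount": "7"},
]
rows = transform_batch(raw)
```

In a real Glue or Spark-on-EMR job the same logic would run over a DataFrame rather than a Python list, but the cleanse/normalize/cast shape is the same.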

Requirements:

  • 5–8 years of experience in data engineering or related roles
  • Hands-on experience with AWS Glue
  • Hands-on experience with Amazon EMR
  • Hands-on experience with AWS Lambda
  • Hands-on experience with Apache Airflow (MWAA)
  • Hands-on experience with Amazon EC2
  • Hands-on experience with Amazon CloudWatch
  • Hands-on experience with Amazon Redshift
  • Hands-on experience with Amazon DynamoDB
  • Hands-on experience with AWS DataZone
  • Strong SQL and Python skills
  • Experience with batch data processing and data pipeline development
  • Understanding of data warehousing and ETL concepts
  • Experience working in Agile delivery environments
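
The SQL and batch-processing skills listed above are the kind exercised by warehouse aggregation queries. A minimal, runnable sketch using Python's built-in sqlite3 (the `orders` table and its columns are hypothetical) shows the sort of daily-totals query a Redshift-backed pipeline would run:

```python
import sqlite3

# In-memory database standing in for a warehouse table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_day TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("2025-01-01", "EMEA", 100.0),
        ("2025-01-01", "EMEA", 50.0),
        ("2025-01-02", "APAC", 75.0),
    ],
)

# Daily totals per region: a typical batch warehouse aggregation.
daily_totals = conn.execute(
    """
    SELECT order_day, region, SUM(amount) AS total
    FROM orders
    GROUP BY order_day, region
    ORDER BY order_day
    """
).fetchall()
```

The same GROUP BY query runs unchanged on Redshift; only the connection and scale differ.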

Nice to have:

  • AWS Certification
  • Exposure to data governance and metadata tools
  • Experience with performance tuning and cost optimization on AWS

Additional Information:

Job Posted:
December 26, 2025

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for AWS Data Engineer

Senior AWS Data Engineer / Data Platform Engineer

We are seeking a highly experienced Senior AWS Data Engineer to design, build, a...
Location:
United Arab Emirates, Dubai
Salary:
Not provided
NorthBay
Expiration Date:
Until further notice
Requirements:
  • 8+ years of experience in data engineering and data platform development
  • Strong hands-on experience with: AWS Glue, Amazon EMR (Spark), AWS Lambda, Apache Airflow (MWAA), Amazon EC2, Amazon CloudWatch, Amazon Redshift, Amazon DynamoDB, and AWS DataZone
Job Responsibility:
  • Design, develop, and optimize scalable data pipelines using AWS native services
  • Lead the implementation of batch and near-real-time data processing solutions
  • Architect and manage data ingestion, transformation, and storage layers
  • Build and maintain ETL/ELT workflows using AWS Glue and Apache Spark on EMR
  • Orchestrate complex data workflows using Apache Airflow (MWAA)
  • Develop and manage serverless data processing using AWS Lambda
  • Design and optimize data warehouses using Amazon Redshift
  • Implement and manage NoSQL data models using Amazon DynamoDB
  • Utilize AWS DataZone for data governance, cataloging, and access management
  • Monitor, log, and troubleshoot data pipelines using Amazon CloudWatch
Employment Type: Full-time
AWS Data Engineer

Collaborate with Product teams ensuring that raw data is cleansed and transforme...
Location:
India, Chennai
Salary:
Not provided
Randstad
Expiration Date:
January 01, 2026
Requirements:
  • Expert-level PL/SQL
  • Working with AWS cloud infrastructure (specifically SQS, SNS, Redshift, OpenSearch, Athena, Kinesis, AWS code pipeline)
  • Working with a variety of data repository platforms (including SQL stores such as Oracle)
  • Implementing data visualisation and network analysis (e.g. GraphDB)
  • Maintain and ‘productionise’ machine learning and AI models
  • Assist in the creation of next-generation data ingestion platforms – sourcing data using web scrapes, APIs, email, and flat-file (FTP) methods
  • Understanding conflict resolution methods
  • Assist subject matter experts in debugging data ingestion and managing overall feed uptimes across a large set of data collectors
  • Create and maintain detailed documentation and functional design specifications including data flows and data conversion
  • Provide technical information to assist in the development of client facing product documentation
Job Responsibility:
  • Collaborate with Product teams to ensure that raw data is cleansed, transformed, and usable by downstream consumers (ML Engineers, BI analytics)
  • Assist and advise on the re-development and modernisation of end-to-end ETL pipelines and introduce new technologies where appropriate in a real-time streaming environment dealing with very large data volumes
Employment Type: Full-time

AWS Data Engineer

AlgebraIT is hiring an AWS Data Engineer in Austin, Texas! If you have at least ...
Location:
United States, Austin
Salary:
Not provided
AlgebraIT
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience in data engineering with AWS
  • Proficiency in Python, SQL, and big data tools
  • Experience with AWS services such as Lambda and EC2
  • Strong communication and teamwork skills
  • Bachelor’s in Computer Science or similar
Job Responsibility:
  • Develop and maintain data pipelines using AWS services
  • Automate data ingestion and processing workflows
  • Collaborate with cross-functional teams to ensure robust data solutions
  • Monitor and optimize data pipeline performance
  • Ensure data quality and implement security best practices
  • Integrate data from multiple sources for analytics
  • Implement data validation and error-handling processes
  • Write and maintain technical documentation for data workflows
  • Manage and configure cloud infrastructure related to data pipelines
  • Provide technical support and troubleshooting for data-related issues
Employment Type: Full-time

AWS Data Engineer

We are seeking a skilled AWS Data Engineer to join our team and help drive data ...
Location:
United States, Charlotte
Salary:
60.00 USD / Hour
Realign
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Information Systems, or related field
  • 3+ years of experience in data engineering, with a focus on AWS cloud services
  • Proficiency in SQL, Python, and AWS data services such as S3, Glue, EMR, and Redshift
  • Experience with ETL processes, data modeling, and data visualization tools
  • Strong analytical and problem-solving skills
  • Excellent communication and teamwork abilities
Job Responsibility:
  • Design and implement scalable and efficient data pipelines using AWS services such as S3, Glue, EMR, and Redshift
  • Develop and maintain data lakes and data warehouses to store and process large volumes of structured and unstructured data
  • Collaborate with data scientists and business analysts to deliver actionable insights and analytics solutions
  • Optimize data infrastructure for performance, reliability, and cost efficiency
  • Troubleshoot and resolve data integration and data quality issues
  • Stay current with industry trends and best practices in cloud data engineering
  • Provide technical guidance and mentorship to junior team members

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...
Location:
India, Chennai, Madurai, Coimbatore
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations
What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility
Employment Type: Full-time

Data Engineer (AWS)

Fyld is a Portuguese consulting company specializing in IT services. We bring hi...
Location:
Portugal, Lisboa
Salary:
Not provided
Fyld
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Software Engineering, Data Engineering, or related
  • Relevant certifications in AWS, such as AWS Certified Solutions Architect, AWS Certified Developer, or AWS Certified Data Analytics
  • Hands-on experience with AWS services, especially those related to Big Data and data analytics, such as Amazon Redshift, Amazon EMR, Amazon Athena, Amazon Kinesis, AWS Glue, among others
  • Familiarity with data storage and processing services on AWS, including Amazon S3, Amazon RDS, Amazon DynamoDB, and AWS Lambda
  • Proficiency in programming languages such as Python, Scala, or Java for developing data pipelines and automation scripts
  • Knowledge of distributed data processing frameworks, such as Apache Spark or Apache Flink
  • Experience in data modeling, cleansing, transformation, and preparation for analysis
  • Ability to work with different types of data, including structured, unstructured, and semi-structured data
  • Familiarity with data architecture concepts such as data lakes, data warehouses, and data pipelines (not mandatory)
  • Knowledge of security and compliance practices on AWS, including access control, data encryption, and regulatory compliance
Employment Type: Full-time
Software Engineer (Data Engineering)

We are seeking a Software Engineer (Data Engineering) who can seamlessly integra...
Location:
India, Hyderabad
Salary:
Not provided
NStarX
Expiration Date:
Until further notice
Requirements:
  • 4+ years in Data Engineering and AI/ML roles
  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field
  • Python, SQL, Bash, PySpark, Spark SQL, boto3, pandas
  • Apache Spark on EMR (driver/executor model, sizing, dynamic allocation)
  • Amazon S3 (Parquet) with lifecycle management to Glacier
  • AWS Glue Catalog and Crawlers
  • AWS Step Functions, AWS Lambda, Amazon EventBridge
  • CloudWatch Logs and Metrics, Kinesis Data Firehose (or Kafka/MSK)
  • Amazon Redshift and Redshift Spectrum
  • IAM (least privilege), Secrets Manager, SSM
Job Responsibility:
  • Design, build, and maintain scalable ETL and ELT pipelines for large-scale data processing
  • Develop and optimize data architectures supporting analytics and ML workflows
  • Ensure data integrity, security, and compliance with organizational and industry standards
  • Collaborate with DevOps teams to deploy and monitor data pipelines in production environments
  • Build predictive and prescriptive models leveraging AI and ML techniques
  • Develop and deploy machine learning and deep learning models using TensorFlow, PyTorch, or Scikit-learn
  • Perform feature engineering, statistical analysis, and data preprocessing
  • Continuously monitor and optimize models for accuracy and scalability
  • Integrate AI-driven insights into business processes and strategies
  • Serve as the technical liaison between NStarX and client teams
What we offer:
  • Competitive salary and performance-based incentives
  • Opportunity to work on cutting-edge AI and ML projects
  • Exposure to global clients and international project delivery
  • Continuous learning and professional development opportunities
  • Competitive base + commission
  • Fast growth into leadership roles
Employment Type: Full-time

Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...
Location:
United States, Chicago
Salary:
130000.00 USD / Year
AKUNA CAPITAL
Expiration Date:
Until further notice
Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including quickly and accurately solving math and coding problems, is an essential function of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure data is processed and published in accordance with well-defined Service Level Agreements (SLAs) for data quality, availability, and correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
Employment Type: Full-time