
Mid-Level Data Engineer


Parser Limited


Location:
United Arab Emirates, Dubai


Contract Type:
Not provided


Salary:

40000.00 / Year

Job Description:

We are seeking a Professional Data Engineer to join our dynamic team, where you will play a crucial role in developing and maintaining robust data solutions. You will collaborate with data science, business analytics, and product development teams to deploy cutting-edge techniques and utilise best-in-class third-party products. The Data team operates with engineering precision, prioritising security, privacy, and regulatory compliance in every initiative. You will contribute to the team's commitment to the latest tools and methodologies, ensuring that our data solutions align with industry best practices.

Job Responsibility:

  • Develop and maintain ETL pipelines using SQL and/or Python
  • Use tools like Dagster/Airflow for pipeline orchestration
  • Collaborate with cross-functional teams to understand and deliver data requirements
  • Ensure a consistent flow of high-quality data using stream, batch, and CDC processes
  • Use data transformation tools like DBT to prepare datasets that enable business users to self-serve
  • Ensure data quality and consistency in all data stores
  • Monitor and troubleshoot data pipelines for performance and reliability

Requirements:

  • 3+ years of experience as a data engineer
  • Proficiency in SQL is a must
  • Experience with modern cloud data warehousing and data lake solutions such as Snowflake, BigQuery, Redshift, or Azure Synapse
  • Experience with ETL/ELT, batch, and streaming data processing pipelines
  • Excellent ability to investigate and troubleshoot data issues, providing fixes and proposing both short and long-term solutions
  • Knowledge of AWS services (like S3, DMS, Glue, Athena, etc.)
  • Familiar with DBT or other data transformation tools
  • Familiarity with GenAI and how to leverage LLMs to resolve engineering challenges

Nice to have:

  • Experience with AWS services and concepts (like EC2, ECS, EKS, VPC, IAM, etc)
  • Familiar with Terraform and Terragrunt
  • Experience with Python
  • Experience with orchestration tools like Dagster, Airflow, AWS Step Functions, etc.
  • Experience with pub-sub, queuing, and streaming frameworks such as AWS Kinesis, Kafka, SQS, SNS
  • Familiar with CI/CD pipelines and automation

Additional Information:

Job Posted:
January 10, 2026

Employment Type:
Full-time



Similar Jobs for Mid-Level Data Engineer

Mid-level Software Security Engineer

We are looking for a Mid-level Software Security Engineer to join our AI & Autom...
Location:
Not provided

Salary:
Not provided
Randstad
Expiration Date:
February 17, 2026

Requirements:
  • Experienced developer with at least 3-5 years of experience
  • Good knowledge of scripting/programming with Python
  • Strong skills in scripting and programming (with a focus on Python)
  • Good foundation in cybersecurity
Job Responsibility:
  • Work on automating the collection and analysis of Threat Intelligence data
  • Develop integrations with AI models
  • Build tools to enhance vulnerability management and security research processes
  • Collaborate closely with team members to design scalable automation solutions
  • Streamline workflows
  • Contribute to advancing AI-driven security initiatives

Mid-Level Project Manager

Mid-Level Project Managers exhibit a strong understanding of product manufacturi...
Location:
United States, Akron

Salary:
65000.00 - 85000.00 USD / Year
OnQ
Expiration Date:
Until further notice

Requirements:
  • 2-3 years of proven experience in project management, preferably in the design, engineering or retail sectors
  • Experience managing the lifecycle of projects
  • Experience managing the manufacturing/assembly of products
  • Understanding of common product warehousing, inventory and shipping practices
  • Working knowledge of QA and QC practices common to product manufacturing
  • Excellent client-facing and internal communication skills
  • Excellent written and verbal communication skills
  • Solid organizational skills, including attention to detail and multi-tasking skills
  • Strong working knowledge of Microsoft Office and of project management methodologies
  • College degree or equivalent work experience
Job Responsibility:
  • Coordinate internal resources and third parties towards the flawless execution of projects
  • Ensure that all projects assigned are delivered on-time, within scope and on budget
  • Assist in the definition of project scope, involving all relevant stakeholders and ensuring technical feasibility
  • Manage project costs, set budgets and handle project billings
  • Attend stakeholder meetings to confirm project objectives, execution obstacles, in-store variables and other factors that will impact program success
  • Act as the primary POC between OnQ and various stakeholders to drive necessary research and data collection, manage client expectations, and ensure key timelines are met
  • Analyze retailer data and category/fixture layout to translate into internal production and materials orders
  • Ensure resource availability and allocation to meet project requirements (both internal and external)
  • Develop detailed project plans to monitor and track progress
  • Manage changes to the project scope, project schedule and project costs using appropriate verification techniques
Employment Type: Full-time

Data Engineer

Barbaricum is seeking a Data Engineer to provide support to an emerging capability ...
Location:
United States, Omaha

Salary:
Not provided
Barbaricum
Expiration Date:
Until further notice

Requirements:
  • Active DoD Top Secret/SCI clearance required
  • 8+ years of demonstrated experience in software engineering
  • Bachelor’s degree in computer science or a related field
  • 8+ years of experience working with AWS big data technologies (S3, EC2) and demonstrate experience in distributed data processing, Data Modeling, ETL Development, and/or Data Warehousing
  • Demonstrated mid-level knowledge of software engineering best practices across the development lifecycle
  • 3+ years of experience using analytical concepts and statistical techniques
  • 8+ years of demonstrated experience across Mathematics, Applied Mathematics, Statistics, Applied Statistics, Machine Learning, Data Science, Operations Research, or Computer Science especially around software engineering and/or designing/implementing machine learning, data mining, advanced analytical algorithms, programming, data science, advanced statistical analysis, artificial intelligence
Job Responsibility:
  • Design, implement, and operate data management systems for intelligence needs
  • Use Python to automate data workflows
  • Design algorithms, databases, and pipelines to access and optimize data retrieval, storage, use, integration, and management across different data regimes and digital systems
  • Work with data users to determine, create, and populate optimal data architectures, structures, and systems
  • Plan, design, and optimize data throughput and query performance
  • Participate in the selection of backend database technologies (e.g. SQL, NoSQL, etc.), its configuration and utilization, and the optimization of the full data pipeline infrastructure to support the actual content, volume, ETL, and periodicity of data to support the intended kinds of queries and analysis to match expected responsiveness
  • Assist and advise the Government with developing, constructing, and maintaining data architectures
  • Research, study, and present technical information, in the form of briefings or written papers, on relevant data engineering methodologies and technologies of interest to or as requested by the Government
  • Align data architecture, acquisition, and processes with intelligence and analytic requirements
  • Prepare data for predictive and prescriptive modeling deploying analytics programs, machine learning and statistical methods to find hidden patterns, discover tasks and processes which can be automated and make recommendations to streamline data processes and visualizations

Data Engineer - II

The Data Engineer will design, develop, and maintain scalable data pipelines and...
Location:
India, Pune

Salary:
Not provided
Atica Global
Expiration Date:
Until further notice

Requirements:
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience
  • 3-5 years of experience in data engineering or a similar mid-level role
  • Proficiency in Python and SQL; experience with Java is a plus
  • Hands-on experience with AWS, Airbyte, DBT, PostgreSQL, MongoDB, Airflow, and Spark
  • Familiarity with data storage solutions such as PostgreSQL, MongoDB
  • Experience with BigQuery (setup, management and scaling)
  • Strong understanding of data modeling, ETL/ELT processes, and database systems
  • Experience with data extraction, batch processing and data warehousing
  • Excellent problem-solving skills and a keen attention to detail
Job Responsibility:
  • Design, develop, and maintain scalable data pipelines and ETL/ELT processes using tools like Airflow, Airbyte and PySpark
  • Collaborate with software engineers and analysts to ensure data availability and integrity for various applications
  • Design and implement robust data pipelines to extract, transform, and load (ETL) data from various sources
  • Utilize Airflow for orchestrating complex workflows and managing data pipelines
  • Implement batch processing techniques using Airflow/PySpark to handle large volumes of data efficiently
  • Develop ELT processes to optimize data extraction and transformation within the target data warehouse
  • Leverage AWS services (e.g., S3, RDS, Lambda) for data storage, processing, and orchestration
  • Ensure data security, reliability, and performance when utilizing AWS resources
  • Work closely with developers, analysts, and other stakeholders to understand data requirements and provide the necessary data infrastructure
  • Assist in troubleshooting and optimizing existing data workflows and queries
What we offer:
  • Competitive salary and benefits package
  • Comprehensive Health Care benefits (best in the country, includes IPD+OPD, covers Employee, Spouse and two children)
  • Growth and advancement opportunities within a rapidly expanding company
Employment Type: Full-time

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location:
India, Hyderabad

Salary:
Not provided
Appen
Expiration Date:
Until further notice

Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
Employment Type: Full-time


Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga

Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice

Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
Employment Type: Full-time

Big Data / Scala / Python Engineering Lead

The Applications Development Technology Lead Analyst is a senior level position ...
Location:
India, Chennai

Salary:
Not provided
Citi
Expiration Date:
Until further notice

Requirements:
  • At least two years of experience building and leading highly complex, technical data engineering teams (10+ years of hands-on data engineering experience overall)
  • Lead data engineering team, from sourcing to closing
  • Drive strategic vision for the team and product
  • Experience managing a data-focused product or ML platform
  • Hands-on experience designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
  • Experience managing, hiring and coaching software engineering teams
  • Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
  • 7 to 10+ years of hands-on experience in big data development, focusing on Apache Spark, Scala, and distributed systems
  • Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines
  • Proficiency in Big Data Technologies: Strong experience with Apache Spark, Hadoop ecosystem tools such as Hive, HDFS, and YARN
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
Employment Type: Full-time