Mid Data Engineer

Enroute

Location:
Mexico, Dinastía, Nuevo León

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are looking for a Mid/Senior Data Engineer to join our team and support the design, development, and optimization of our data platform. This role will focus heavily on building and maintaining reverse ETL workflows to operationalize warehouse data across downstream systems. You will work closely with Analytics, Product, and Engineering teams to ensure that high-quality, well-modeled data is not only available for reporting, but also activated in business and operational tools.

Job Responsibility:

  • Design, build, and maintain scalable ETL/ELT pipelines
  • Implement and manage reverse ETL workflows that sync data from the data warehouse (e.g., Snowflake) into operational systems (CRMs, marketing tools, internal applications, etc.)
  • Optimize data models to support both analytics and activation use cases
  • Ensure data quality, validation, and monitoring across pipelines
  • Collaborate with cross-functional teams to translate business requirements into reliable data solutions
  • Support performance tuning and cost optimization of warehouse workloads
  • Maintain documentation and best practices across data workflows
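
The reverse ETL responsibility above can be sketched in a few lines. This is a minimal illustration only, with stubbed data and a fake CRM client; a real job would query Snowflake with a warehouse connector and call a CRM bulk-upsert API, and all names here (`fetch_warehouse_rows`, `sync_to_crm`, the field names) are hypothetical:

```python
# Minimal reverse-ETL sync sketch (hypothetical names and stub data; a real
# pipeline would use a Snowflake connector and a CRM client, not these stubs).

def fetch_warehouse_rows():
    """Stub for a warehouse query, e.g. SELECT ... FROM customer_facts."""
    return [
        {"customer_id": "c1", "lifetime_value": 1200.0, "segment": "enterprise"},
        {"customer_id": "c2", "lifetime_value": 80.0, "segment": "self-serve"},
    ]

def sync_to_crm(rows, crm_upsert, batch_size=100):
    """Push warehouse rows into an operational tool in batches.

    crm_upsert(batch) stands in for a CRM bulk-upsert call keyed on
    customer_id, so re-running the sync never creates duplicate records.
    """
    synced = 0
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        crm_upsert(batch)
        synced += len(batch)
    return synced

if __name__ == "__main__":
    crm_store = {}  # stand-in for the CRM's customer records

    def fake_upsert(batch):
        for row in batch:
            crm_store[row["customer_id"]] = row  # upsert keyed on customer_id

    count = sync_to_crm(fetch_warehouse_rows(), fake_upsert, batch_size=1)
    print(f"synced {count} rows")  # prints "synced 2 rows"
```

The key design point the role hints at is idempotency: syncs are keyed upserts rather than inserts, so retries and scheduled re-runs are safe.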

Requirements:

  • 3–5+ years (Mid-level) or 5–8+ years (Senior-level) experience in Data Engineering
  • Strong hands-on experience with SQL and data modeling
  • Proven past experience implementing reverse ETL solutions (hands-on ownership, not just exposure)
  • Experience working with modern data warehouses (Snowflake preferred)
  • Experience building ETL/ELT pipelines using Python and/or SQL-based tools
  • Experience with orchestration tools (Airflow, dbt, or similar)
  • Experience working in cloud environments (AWS, GCP, or Azure)
  • Strong understanding of data quality and monitoring practices
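
The last requirement, data quality and monitoring, often reduces to batch-level assertions between pipeline stages. A minimal sketch, assuming made-up rule names and fields (teams would more likely express these as dbt tests or Great Expectations suites):

```python
# Sketch of per-batch data-quality checks (hypothetical rules and field
# names; real pipelines often encode these as dbt or Great Expectations tests).

def check_batch(rows, required_fields, min_rows=1):
    """Return a list of human-readable failures for one pipeline batch."""
    failures = []
    if len(rows) < min_rows:
        failures.append(f"row count {len(rows)} below minimum {min_rows}")
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append(f"row {i}: missing required field '{field}'")
    return failures

if __name__ == "__main__":
    batch = [
        {"order_id": "o-1", "amount": 25.0},
        {"order_id": "", "amount": 10.0},
    ]
    for failure in check_batch(batch, required_fields=["order_id", "amount"]):
        print(failure)  # prints "row 1: missing required field 'order_id'"
```

A pipeline would typically run checks like these after each load and alert or halt downstream syncs when failures appear.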

What we offer:
  • Monetary compensation
  • Year-end Bonus
  • IMSS, AFORE, INFONAVIT
  • Major Medical Expenses Insurance
  • Minor Medical Expenses Insurance
  • Life Insurance
  • Funeral Expenses Insurance
  • Preferential rates for car insurance
  • TDU Membership
  • Holidays and Vacations
  • Sick days
  • Bereavement days
  • Civil Marriage days
  • Maternity & Paternity leave
  • English and Spanish classes
  • Performance Management Framework
  • Certifications
  • TALISIS Agreement: Discounts at ADVENIO, Harmon Hall, U-ERRE, UNID
  • Taquitos Rewards
  • Amazon Gift Card on your Birthday
  • Work-from-home Bonus
  • Laptop Policy

Additional Information:

Job Posted:
February 18, 2026

Employment Type:
Full-time

Work Type:
Hybrid work

Similar Jobs for Mid Data Engineer

Data Engineer

Local only. Job consists of setting up Change Data Capture (or CDC) for multiple...
Location:
United States, Plano
Salary:
Not provided
Enormous Enterprise
Expiration Date:
Until further notice
Requirements:
  • Java – Mid to Senior level experience
  • Python – Mid level experience (pyspark)
  • Apache Spark – Data Frames, Spark SQL, Spark Streaming and ETL pipelines
  • Apache Airflow
  • Extensive knowledge with S3 and S3 operations (CRUD)
  • EMR & EMR Serverless
  • Glue Data Catalog
  • Step Functions
  • MWAA (Managed Workflows Apache Airflow)
  • Lambdas (Python)
Job Responsibility:
  • Setting up Change Data Capture (CDC) for multiple types of databases for the purpose of hydrating a data lake
  • Orchestrating raw CDC data and transforming it into usable and queryable data for analytics
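
The CDC-to-data-lake flow described above amounts to replaying ordered change events onto a keyed target. A toy sketch under the assumption that events arrive in commit order (in practice the events would come from a tool such as Debezium and the target would be a lake table format, not an in-memory dict):

```python
# Toy sketch of applying CDC events to a keyed target table (simplified;
# real CDC streams come from tools like Debezium and land in lake tables).

def apply_cdc(table, events):
    """Replay ordered change events (insert/update/delete) onto a keyed table."""
    for event in events:
        op, key, row = event["op"], event["key"], event.get("row")
        if op in ("insert", "update"):
            table[key] = row       # upsert the latest image of the row
        elif op == "delete":
            table.pop(key, None)   # tolerate deletes for unseen keys
    return table
```

Because each event carries the full row image keyed by primary key, replaying the log from any checkpoint reproduces the same final table.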

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python, or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
Employment Type: Full-time

Data Engineer

Barbaricum is seeking a Data Engineer to provide support to an emerging capability ...
Location:
United States, Omaha
Salary:
Not provided
Barbaricum
Expiration Date:
Until further notice
Requirements:
  • Active DoD Top Secret/SCI clearance required
  • 8+ years of demonstrated experience in software engineering
  • Bachelor’s degree in computer science or a related field
  • 8+ years of experience working with AWS big data technologies (S3, EC2) and demonstrated experience in distributed data processing, Data Modeling, ETL Development, and/or Data Warehousing
  • Demonstrated mid-level knowledge of software engineering best practices across the development lifecycle
  • 3+ years of experience using analytical concepts and statistical techniques
  • 8+ years of demonstrated experience across Mathematics, Applied Mathematics, Statistics, Applied Statistics, Machine Learning, Data Science, Operations Research, or Computer Science especially around software engineering and/or designing/implementing machine learning, data mining, advanced analytical algorithms, programming, data science, advanced statistical analysis, artificial intelligence
Job Responsibility:
  • Design, implement, and operate data management systems for intelligence needs
  • Use Python to automate data workflows
  • Design algorithms, databases, and pipelines to access and optimize data retrieval, storage, use, integration, and management by different data regimes and digital systems
  • Work with data users to determine, create, and populate optimal data architectures, structures, and systems, and plan, design, and optimize data throughput and query performance
  • Participate in the selection of backend database technologies (e.g. SQL, NoSQL, etc.), its configuration and utilization, and the optimization of the full data pipeline infrastructure to support the actual content, volume, ETL, and periodicity of data to support the intended kinds of queries and analysis to match expected responsiveness
  • Assist and advise the Government with developing, constructing, and maintaining data architectures
  • Research, study, and present technical information, in the form of briefings or written papers, on relevant data engineering methodologies and technologies of interest to or as requested by the Government
  • Align data architecture, acquisition, and processes with intelligence and analytic requirements
  • Prepare data for predictive and prescriptive modeling deploying analytics programs, machine learning and statistical methods to find hidden patterns, discover tasks and processes which can be automated and make recommendations to streamline data processes and visualizations

Big Data / Scala / Python Engineering Lead

The Applications Development Technology Lead Analyst is a senior level position ...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • At least two years of experience building and leading highly complex, technical data engineering teams (10+ years of overall hands-on data engineering experience)
  • Lead data engineering team, from sourcing to closing
  • Drive strategic vision for the team and product
  • Experience managing a data-focused product or ML platform
  • Hands-on experience designing, developing, and optimizing scalable distributed data processing pipelines using Apache Spark and Scala
  • Experience managing, hiring and coaching software engineering teams
  • Experience with large-scale distributed web services and the processes around testing, monitoring, and SLAs to ensure high product quality
  • 7 to 10+ years of hands-on experience in big data development, focusing on Apache Spark, Scala, and distributed systems
  • Proficiency in Functional Programming: High proficiency in Scala-based functional programming for developing robust and efficient data processing pipelines
  • Proficiency in Big Data Technologies: Strong experience with Apache Spark, Hadoop ecosystem tools such as Hive, HDFS, and YARN
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
Employment Type: Full-time

Data Engineer - II

The Data Engineer will design, develop, and maintain scalable data pipelines and...
Location:
India, Pune
Salary:
Not provided
Atica Global
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in computer science, Engineering, Mathematics, a related field, or equivalent practical experience
  • 3-5 years of experience in data engineering or a similar mid-level role
  • Proficiency in Python and SQL; experience with Java is a plus
  • Hands-on experience with AWS, Airbyte, DBT, PostgreSQL, MongoDB, Airflow, and Spark
  • Familiarity with data storage solutions such as PostgreSQL, MongoDB
  • Experience with BigQuery (setup, management and scaling)
  • Strong understanding of data modeling, ETL/ELT processes, and database systems
  • Experience with data extraction, batch processing and data warehousing
  • Excellent problem-solving skills and a keen attention to detail
Job Responsibility:
  • Design, develop, and maintain scalable data pipelines and ETL/ELT processes using tools like Airflow, Airbyte and PySpark
  • Collaborate with software engineers and analysts to ensure data availability and integrity for various applications
  • Design and implement robust data pipelines to extract, transform, and load (ETL) data from various sources
  • Utilize Airflow for orchestrating complex workflows and managing data pipelines
  • Implement batch processing techniques using Airflow/PySpark to handle large volumes of data efficiently
  • Develop ELT processes to optimize data extraction and transformation within the target data warehouse
  • Leverage AWS services (e.g., S3, RDS, Lambda) for data storage, processing, and orchestration
  • Ensure data security, reliability, and performance when utilizing AWS resources
  • Work closely with developers, analysts, and other stakeholders to understand data requirements and provide the necessary data infrastructure
  • Assist in troubleshooting and optimizing existing data workflows and queries
What we offer:
  • Competitive salary and benefits package
  • Comprehensive Health Care benefits (best in the country, includes IPD+OPD, covers Employee, Spouse and two children)
  • Growth and advancement opportunities within a rapidly expanding company
Employment Type: Full-time

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location:
India, Hyderabad
Salary:
Not provided
Appen
Expiration Date:
Until further notice
Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python, and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
Employment Type: Full-time

Principal Data Engineer

The Principal Data Engineer will lead the design, development, and optimization ...
Location:
United States, Washington DC; Philadelphia PA; Wilmington DE
Salary:
113200.00 - 146664.00 USD / Year
AMTRAK
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s Degree or equivalent combination of education, training, and/or relevant experience, plus 7 years of relevant work experience
  • Proficiency and hands-on experience designing and implementing end-to-end data solutions in AWS (e.g., S3, EMR, Glue, Redshift, Kinesis) and/or Azure (e.g., Azure Data Factory, Synapse Analytics, Azure Data Lake Storage)
  • Experience with some of the following technologies: Databricks, Python, Apache Spark, IDMC, Talend, CI/CD pipelines, Jenkins, GitLab, Power BI, SQL, and NoSQL
Job Responsibility:
  • Architect and Design: Define and implement the architectural strategy for our enterprise data platform, focusing on scalability, security, and performance. Design and build robust, high-volume, and performant data pipelines using cloud-native services like AWS Data Pipelines (Glue, EMR, S3, Redshift, etc.) or Azure Data Factory/Synapse Analytics
  • Technical Leadership: Act as a subject matter expert and mentor for junior and mid-level data engineers, setting best practices for code quality, testing, and deployment
  • Data Governance & Management: Utilize tools like IDMC (Informatica Data Management Cloud) for data integration, quality, governance, and cataloging across the enterprise
  • Data Processing and Analysis: Develop, optimize, and manage large-scale data processing jobs using Databricks(Spark/Delta Lake) for ETL/ELT workflows and advanced analytics
  • Coding and Scripting: Write high-quality, efficient, and well-documented code primarily in Python for data manipulation, automation, and pipeline orchestration
  • Deployment and Automation: Implement and maintain robust CI/CD pipelines and infrastructure-as-code (e.g., Terraform/CloudFormation) for automated deployment and management of data solutions
  • Business Intelligence: Ensure data readiness for reporting and analytics and possess working knowledge/experience with BI tools like Tableau and PowerBI to facilitate data consumption
  • Performance and Optimization: Monitor, troubleshoot, and tune existing data infrastructure and pipelines to ensure optimal performance and cost efficiency
What we offer:
  • health, dental, and vision plans
  • health savings accounts
  • wellness programs
  • flexible spending accounts
  • 401K retirement plan with employer match
  • life insurance
  • short and long term disability insurance
  • paid time off
  • back-up care
  • adoption assistance
Employment Type: Full-time