Senior Data Engineer (ETL+Spark)

ELEKS

Location:
Argentina, Buenos Aires


Contract Type:
Not provided

Salary:
Not provided

Job Description:

Our client is a leading US‑based insurance company serving more than 25 million members. ELEKS is now expanding its team in Argentina to support the delivery of secure, high‑quality technology solutions across the client’s enterprise ecosystem. One of the primary focuses of the Data Engineering team will be to optimize, improve, and systematize data flows across the corporation, ensuring they fully comply with all required security, governance, and privacy protocols.

Job Responsibility:

  • Design, develop, and maintain reliable software in line with technical requirements, focusing on performance and availability
  • Analyze requirements, review designs, and estimate user stories following the project methodology (Agile, Waterfall, etc.)
  • Proactively propose code refactoring and optimization improvements according to the best software development practices and coding standards
  • Help maintain and improve high-quality standards within the developer community by sharing knowledge, conducting tech talks, and participating in the internal promotion verification process
  • Stay up-to-date with modern technology and obtain professional certifications
  • Support less experienced developers by providing training and by distributing and monitoring tasks

Requirements:

  • 5+ years of experience as a Data Engineer
  • Strong proficiency with Databricks (3+ years), Spark, and ETL pipelines
  • Experience with Delta Lake
  • Familiarity with the Hadoop ecosystem
  • Experience or understanding of on‑prem HDFS
  • Upper-Intermediate level of English

What we offer:
  • Close cooperation with a customer
  • Challenging tasks
  • Competence development
  • Ability to influence project technologies
  • Team of professionals
  • Dynamic environment with low level of bureaucracy

Additional Information:

Job Posted:
March 18, 2026


Similar Jobs for Senior Data Engineer (ETL+Spark)

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets
Contract Type: Full-time

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
What we offer
What we offer
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program
Contract Type: Full-time

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location:
India, Hyderabad
Salary:
Not provided
Appen
Expiration Date:
Until further notice
Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
Contract Type: Full-time

Senior Data Engineer - Catalog

Join the Catalog team and be part of Deezer’s next steps. We ingest, reconcile a...
Location:
France, Paris
Salary:
Not provided
Deezer
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in Data Engineering
  • Familiarity with data quality frameworks and best practices
  • Solid experience with Python, SQL, BigQuery, Spark/Scala, ETL
  • Experience with entity matching, data reconciliation or data science applied to quality problems is a strong plus
  • Proactive, organised and with strong team spirit
  • Passion for music and improving user experience through better metadata
  • Fluent in English
Job Responsibility:
  • Design and maintain ETL pipelines to ingest and process large volumes of catalog data from disparate sources
  • Ingest and match entities like artists, albums, concerts and lyrics
  • Develop solutions to improve metadata quality at scale
  • Contribute to projects like artist disambiguation and source-of-truth building, and album grouping via ML models in collaboration with Data Scientists
  • Build analytics and quality KPIs to monitor catalog performance
  • Collaborate closely with Data Scientists, Researchers, Product Managers and partners to implement new solutions
What we offer:
  • A Deezer premium family account for free
  • Access to gym classes
  • Join over 70 Deezer Communities
  • Deezer parties several times a year and drinks every Thursday
  • Allowance for sports, travelling and culture
  • Meal vouchers
  • Mental health and well-being support from Moka.Care
  • Great offices
  • Hybrid remote work policy
Contract Type: Full-time

Senior Data Engineering Architect

Location:
Poland
Salary:
Not provided
Lingaro
Expiration Date:
Until further notice
Requirements:
  • Proven work experience as a Data Engineering Architect or in a similar role, with strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark)
  • Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy
  • Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions
  • Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges
  • Proficiency in Python, PySpark, SQL
  • Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment
  • Knowledge of data governance principles and best practices, including data privacy and security regulations
Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables. Support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications
What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly
  • Grow as we grow as a company. 76% of our managers are internal promotions

Senior Big Data Engineer

Location:
United States, Flowood
Salary:
Not provided
PhasorSoft Group
Expiration Date:
Until further notice
Requirements:
  • Proficiency in Python programming for data manipulation and analysis
  • Experience with PySpark for processing large-scale data
  • Strong understanding and practical experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • Knowledge of designing and implementing ETL processes for data integration
  • Ability to work with large datasets, perform data cleansing, transformations, and aggregations
  • Familiarity with machine learning concepts and experience implementing ML models
  • Understanding of data governance principles and experience implementing data security measures
  • Ability to create clear and concise documentation for data pipelines and processes
  • Strong teamwork and collaboration skills to work with cross-functional teams
  • Analytical and problem-solving skills to optimize data workflows and processes
Job Responsibility:
  • Design and develop scalable data pipelines and solutions using Python and PySpark
  • Utilize big data technologies such as Hadoop, Spark, Kafka, or similar tools for processing and analyzing large datasets
  • Develop and maintain ETL processes to extract, transform, and load data into data lakes or warehouses
  • Collaborate with data engineers and scientists to implement machine learning models and algorithms
  • Optimize and tune data processing workflows for performance and efficiency
  • Implement data governance and security measures to ensure data integrity and privacy
  • Create and maintain documentation for data pipelines, workflows, and processes
  • Provide technical leadership and mentorship to junior team members
Contract Type: Full-time

Senior Software Engineer - Data Infrastructure

We build the data and machine learning infrastructure to enable Plaid engineers ...
Location:
United States, San Francisco
Salary:
180000.00 - 270000.00 USD / Year
Plaid
Expiration Date:
Until further notice
Requirements:
  • 5+ years of software engineering experience
  • Extensive hands-on software engineering experience, with a strong track record of delivering successful projects within the Data Infrastructure or Platform domain at similar or larger companies
  • Deep understanding of one of: ML Infrastructure systems, including Feature Stores, Training Infrastructure, Serving Infrastructure, and Model Monitoring OR Data Infrastructure systems, including Data Warehouses, Data Lakehouses, Apache Spark, Streaming Infrastructure, Workflow Orchestration
  • Strong cross-functional collaboration, communication, and project management skills, with proven ability to coordinate effectively
  • Proficiency in coding, testing, and system design, ensuring reliable and scalable solutions
  • Demonstrated leadership abilities, including experience mentoring and guiding junior engineers
Job Responsibility:
  • Contribute towards the long-term technical roadmap for data-driven and machine learning iteration at Plaid
  • Lead key data infrastructure projects such as improving ML development golden paths, implementing offline streaming solutions for data freshness, building net new ETL pipeline infrastructure, and evolving data warehouse or data lakehouse capabilities
  • Work with stakeholders in other teams and functions to define technical roadmaps for key backend systems and abstractions across Plaid
  • Debug, troubleshoot, and reduce operational burden for our Data Platform
  • Grow the team via mentorship and leadership, reviewing technical documents and code changes
What we offer:
  • Medical, dental, vision, and 401(k)
  • Equity and/or commission
Contract Type: Full-time

Senior Data Engineer

Radix is building the most trusted data and analytics platform in multifamily. J...
Location:
United States, Scottsdale
Salary:
Not provided
Radix (AZ)
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in data engineering or backend systems, with 2+ years collaborating closely with analytical or scientific practitioners
  • Experience designing at least one end-to-end data application (visualization, model pipeline, etc.) for production use
  • Strong understanding of data modeling, batch processing, and streaming systems
  • Hands-on experience with SQL/NoSQL databases, cloud storage, file-based datasets, and Infrastructure-as-code
  • Experience building or operating data pipelines on AWS cloud services (Lambda, S3, RDS, ECS)
  • Understanding of AI/LLM integration and prompt engineering fundamentals
  • Proficiency with Git/GitHub, and familiarity with Spark and Kubernetes
  • Strong problem-solving skills, ownership, and ability to identify root causes of technical debt and recurring problems
  • Demonstrated ability to translate Product/Science requirements into technical plans and hold teams accountable to delivery timelines
  • Undergraduate degree in computer science, computer engineering, software engineering, or equivalent
Job Responsibility:
  • Build scalable ETL/ELT pipelines for ingesting structured and unstructured data (Excel, JSON, PDFs, APIs)
  • Design and maintain data pipelines using SQL, Python, Node.js, or TypeScript
  • Work with distributed compute systems (Spark, Kubernetes), message queues, and streaming data
  • Manage and optimize data storage in MongoDB, PostgreSQL, Redis, Snowflake, and S3
  • Develop clean, standardized data schemas and event-driven transformations
  • Integrate AI-assisted parsers, mappers, and LLM-supported transformations
  • Collaborate with backend, analytics, product, and AI teams to break down requirements into well-defined engineering problems
  • Implement monitoring, data validation, and reliability checks (DQ rules, freshness, duplication)
  • Own production readiness, including on-call responsibilities and incident follow-ups
  • Mentor engineers, conduct thorough code reviews, and introduce patterns that raise the team's technical bar
What we offer:
  • Medical, dental and vision coverage designed to support your wellbeing
  • Pre-IPO Equity
  • Performance Bonus
  • Learn From the Best
  • Build Category-Defining Products