
Senior Cloud Engineer, Data Processing


Aptiv plc

Location:
India, Chennai

Contract Type:
Not provided

Salary:

Not provided

Job Description:

As a Senior Cloud Engineer, Data Processing on the Sustaining Engineering L3 team, you will support, enhance, and maintain Aptiv’s cloud-based data processing infrastructure for automotive datalogger data. This includes systems for data ingestion, ETL pipelines, real-time and batch processing, and analytics. You’ll work with technologies such as Kubernetes, Python, Go, MySQL, MongoDB, message queues, and various scripting languages to ensure scalable, reliable, and efficient data flow across production environments.

Job Responsibility:

  • Support and enhance Aptiv’s cloud-based data processing systems for automotive datalogger data, including ingest pipelines, ETL workflows, and analytics infrastructure
  • Investigate, root-cause, and resolve production issues across distributed systems, often involving complex legacy code and minimal documentation
  • Collaborate with systems analysts, engineers, and developers to troubleshoot issues, implement improvements, and ensure system reliability and performance
  • Partner with internal stakeholders to develop and execute validation plans that confirm fixes and enhancements meet operational and customer expectations
  • Stay current with evolving cloud technologies, data processing frameworks, and tooling to continuously improve support capabilities and system efficiency

Requirements:

  • Bachelor's Degree – Computer Science, Computer Engineering, or similar
  • 8+ years of Python and/or Golang software development experience
  • Proven ability to analyze and navigate legacy codebases, including independently investigating, debugging, and resolving issues without detailed documentation
  • Demonstrated skill at navigating ambiguity and resolving issues without detailed instructions or oversight
  • Experience interfacing applications with relational and non-relational databases (e.g., MySQL, MongoDB), including CRUD operations and schema design
  • Proficient in Linux environments and shell scripting
  • Deep experience with Kubernetes, AWS, RabbitMQ, MongoDB, and MySQL in production settings
  • Familiarity with debugging tools, performance profiling, and system optimization techniques
  • Strong written and oral communication skills, with the ability to clearly document and explain technical concepts

What we offer:
  • Higher Education Opportunities (UDACITY, UDEMY, COURSERA are available for your continuous growth and development)
  • Life and accident insurance
  • Sodexo cards for food and beverages
  • Well-Being Program that includes regular workshops and networking events
  • Employee Assistance Program (EAP)
  • Access to fitness clubs (T&C apply)
  • Creche facility for working parents

Additional Information:

Job Posted:
February 19, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Senior Cloud Engineer, Data Processing

Senior Data Engineering Manager

Data is a big deal at Atlassian. We ingest billions of events each month into ou...
Location:
United States, San Francisco
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • stellar people management skills and experience in leading an agile software team
  • thrive when developing phenomenal people, not just great products
  • worked closely with Data Science, analytics, and platform teams
  • expertise in building and maintaining high-quality components and services
  • able to drive technical excellence, pushing for innovation and quality
  • at least 10 years' experience in a software development role as an individual contributor
  • 4+ years of people management experience
  • deep understanding of data-at-scale challenges and the ecosystem
  • experience with solution building and architecting with public cloud offerings such as Amazon Web Services, DynamoDB, Elasticsearch, S3, Databricks, Spark/Spark Streaming, and graph databases
  • experience with Enterprise Data architectural standard methodologies
Job Responsibility:
  • build and lead a team of data engineers through hiring, coaching, mentoring, and hands-on career development
  • provide deep technical guidance in a number of aspects of data engineering in a scalable ecosystem
  • champion cultural and process improvements through engineering excellence, quality and efficiency
  • work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
Employment Type: Full-time

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
Employment Type: Full-time

Senior Data Engineer

As a senior data engineer, you will help our clients with building a variety of ...
Location:
Belgium, Brussels
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • At least 5 years of experience as a Data Engineer or in software engineering in a data context
  • Programming experience with one or more languages: Python, Scala, Java, C/C++
  • Knowledge of relational database technologies/concepts and SQL is required
  • Experience building, scheduling and maintaining data pipelines (Spark, Airflow, Data Factory)
  • Practical experience with at least one cloud provider (GCP, AWS or Azure). Certifications from any of these are considered a plus
  • Knowledge of Git and CI/CD
  • Able to work independently, prioritize multiple stakeholders and tasks, and manage work time effectively
  • You have a degree in Computer Engineering, Information Technology or related field
  • You are proficient in English, knowledge of Dutch and/or French is a plus.
Job Responsibility:
  • Gather business requirements and translate them to technical specifications
  • Design, implement and orchestrate scalable and efficient data pipelines to collect, process, and serve large datasets
  • Apply DataOps best practices to automate testing, deployment and monitoring
  • Continuously follow & learn the latest trends in the data world.
What we offer:
  • A variety of perks, such as mobility options (including a company car), insurance coverage, meal vouchers, eco-cheques, and more
  • Continuous learning opportunities through the Sopra Steria Academy to support your career development
  • The opportunity to connect with fellow Sopra Steria colleagues at various team events.

SASE Senior Cloud Engineer

The Cloud Developer builds from the ground up to meet the needs of mission-criti...
Location:
India, Bangalore
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or master's degree in computer science, engineering, information systems, or closely related quantitative discipline
  • Typically, 7-10 years’ experience
  • Strong web programming skills in DotNet, Java, Python or JavaScript
  • Understanding of microservice architecture and how it is built in a containerized, Kubernetes-managed environment, which is central to all modern cloud-native applications
  • DB skills with MySQL, Postgres, or a similar RDBMS are a must
  • Cloud experience with AWS is preferred
  • Experience in front end technologies like HTML, CSS, JS, ReactJS is preferred
Job Responsibility:
  • Leads the project team for design and development of complex products and platforms, including solution design, analysis, coding, testing, and integration for building efficient, scalable and robust cloud subsystems
  • Reviews and evaluates designs, test plans, and develops code for compliance with cloud design and development guidelines and standards
  • Provides tangible feedback to improve product quality and mitigate risks
  • Represents the engineering team in various technical forums and provides guidance and mentoring to less-experienced team members
  • Drives innovation and integration of new technologies into projects and activities in the software systems design organization
  • Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Full-time

Senior Data Engineer

We are looking for a Senior Data Engineer with a collaborative, “can-do” attitud...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
  • Strong analytical abilities and a strong intellectual curiosity
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals
  • Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development
  • Deliver efficient ETL/ELT development using Azure cloud services and Snowflake, including testing and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance)
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery
  • Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders
  • Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability)
  • Stay current with and adopt new tools and applications to ensure high quality and efficient solutions
  • Build cross-platform data strategy to aggregate multiple sources and process development datasets
Employment Type: Full-time

Senior SSE Data Engineer

Designs, develops, troubleshoots and debugs software programs for software enhan...
Location:
Israel, Tel Aviv
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent
  • Typically 6-10 years' experience
  • Extensive experience with multiple software systems design tools and languages
  • Excellent analytical and problem solving skills
  • Experience in overall architecture of software systems for products and solutions
  • Designing and integrating software systems running on multiple platform types into overall architecture
  • Evaluating forms and processes for software systems testing and methodology, including writing and execution of test plans, debugging, and testing scripts and tools
  • Excellent written and verbal communication skills
  • Mastery of English and the local language
  • Ability to effectively communicate product architectures, design proposals and negotiate options at senior management levels
Job Responsibility:
  • Leads multiple project teams of other software systems engineers and internal and outsourced development partners responsible for all stages of design and development for complex products and platforms, including solution design, analysis, coding, testing, and integration
  • Manages and expands relationships with internal and outsourced development partners on software systems design and development
  • Reviews and evaluates designs and project activities for compliance with systems design and development guidelines and standards
  • Provides tangible feedback to improve product quality and mitigate failure risk
  • Provides domain-specific expertise and overall software systems leadership and perspective to cross-organization projects, programs, and activities
  • Drives innovation and integration of new technologies into projects and activities in the software systems design organization
  • Provides guidance and mentoring to less-experienced staff members
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
Employment Type: Full-time

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Efficient in ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Communicate proactively with stakeholders and mentor/guide junior resources
Employment Type: Full-time