As a Cloud Engineer on the Data Processing team, you will contribute to the development of Aptiv’s cloud-based infrastructure for processing automotive datalogger data. This includes systems for data ingest, ETL pipelines, real-time and batch processing, and analytics. You’ll work with technologies such as Kubernetes, Python, Go, MySQL, MongoDB, and message queues, along with shell scripting, to ensure scalable, reliable, and efficient data flow across production environments.
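By way of illustration only (this is not Aptiv’s actual code; the queue, database, and collection names are invented for the example), a minimal ingest stage of the kind described above might consume datalogger messages from RabbitMQ and persist them to MongoDB:

```python
# Illustrative only: a minimal ingest stage that consumes datalogger
# messages from RabbitMQ and stores them in MongoDB. The queue, database,
# and collection names are hypothetical, not Aptiv's actual systems.
import json

import pika                      # RabbitMQ client
from pymongo import MongoClient  # MongoDB client

mongo = MongoClient("mongodb://localhost:27017")
raw_frames = mongo["datalogger"]["raw_frames"]  # hypothetical collection

def on_message(channel, method, properties, body):
    """Validate one datalogger record and persist it."""
    try:
        record = json.loads(body)
        raw_frames.insert_one(record)
        channel.basic_ack(delivery_tag=method.delivery_tag)
    except json.JSONDecodeError:
        # Reject malformed payloads without requeueing so they can be
        # routed to a dead-letter queue for inspection.
        channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="datalogger.ingest", durable=True)
channel.basic_consume(queue="datalogger.ingest", on_message_callback=on_message)
channel.start_consuming()
```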
Job Responsibilities:
Design, develop, test, and deploy software solutions for Aptiv’s cloud-based data processing systems for automotive datalogger data, including ingest pipelines, ETL workflows, and analytics infrastructure
Investigate, root-cause, and resolve production issues across distributed systems
Collaborate with systems analysts, engineers, and developers to design systems and to gather information on project constraints, capabilities, performance requirements, and interfaces
Modify existing software to correct errors, adapt it to new platforms, or improve its performance
Partner with internal stakeholders to develop and execute validation plans that confirm fixes and enhancements meet operational and customer expectations
Stay current with evolving cloud technologies, data processing frameworks, and tooling to continuously improve support capabilities and system efficiency
Requirements:
Bachelor's degree in Computer Science, Computer Engineering, or a similar field
5+ years of software development experience in Python and/or Go (Go preferred)
Proven ability to analyze and navigate legacy codebases, including independently investigating, debugging, and resolving issues without detailed documentation
Demonstrated skill at navigating ambiguity and resolving issues without detailed instructions or oversight
Experience interfacing applications with relational and non-relational databases (e.g., MySQL, MongoDB), including CRUD operations and schema design (an illustrative sketch follows this list)
Proficient in Linux environments and shell scripting
Deep experience with Kubernetes, AWS, RabbitMQ, MongoDB, and MySQL in production settings
Familiarity with debugging tools, performance profiling, and system optimization techniques
Strong written and oral communication skills, with the ability to clearly document and explain technical concepts
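As a rough, hypothetical illustration of the CRUD work mentioned above, here is the MongoDB side of a basic round-trip; the fleet database and vehicles collection are invented for the example, not real Aptiv resources:

```python
# Illustrative CRUD round-trip against MongoDB; the "fleet" database
# and "vehicles" collection are invented for this example.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
vehicles = client["fleet"]["vehicles"]

# Create
vehicles.insert_one({"vin": "TEST0001", "model": "demo", "logger_fw": "1.2.0"})

# Read
doc = vehicles.find_one({"vin": "TEST0001"})

# Update
vehicles.update_one({"vin": "TEST0001"}, {"$set": {"logger_fw": "1.3.0"}})

# Delete
vehicles.delete_one({"vin": "TEST0001"})
```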
Nice to have:
Master’s degree in Computer Science, Computer Engineering, or a related field
Understanding of microservices and deployment with Helm
Experience with media file processing
File I/O and file management experience with AWS S3 (an illustrative sketch follows this list)
Demonstrated enthusiasm for AI-assisted coding and a track record of using state-of-the-art tools to enhance development speed, efficiency, and code quality
Familiarity with database schema design principles and query languages such as SQL
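As a hypothetical illustration of the S3 file management mentioned above, a minimal upload/download sketch with boto3; the bucket name and object keys are placeholders, not real resources:

```python
# Illustrative S3 file management with boto3; the bucket and keys
# are placeholders, not real resources.
import boto3

s3 = boto3.client("s3")

# Upload a local log file, then fetch it back.
s3.upload_file("drive_2024.log", "example-datalogger-bucket", "raw/drive_2024.log")
s3.download_file("example-datalogger-bucket", "raw/drive_2024.log", "copy.log")

# List objects under the same prefix (paginated for large buckets).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-datalogger-bucket", Prefix="raw/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```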