Wells Fargo is seeking a Lead Software Engineer to drive innovation and technical excellence within the CTR space. This role will lead the design and optimization of scalable data platforms using technologies such as Apache Spark. The engineer will ensure robust observability and performance monitoring. They will also oversee workflow orchestration and DevOps practices, enabling efficient collaboration and delivery across teams. Strong partnership with business stakeholders is essential to deliver timely, impactful solutions to complex operational challenges.
Job Responsibilities:
Lead complex technology initiatives, including companywide efforts with broad impact
Act as a key participant in developing standards and companywide best practices for engineering complex, large-scale technology solutions
Design, develop, and maintain ETL/ELT pipelines. Automate workflows for ingestion, transformation, and loading of data
Integrate data from multiple sources into data lakes or warehouses. Ensure optimal storage strategies balancing cost and performance
Cleanse, normalize, and enrich raw data for analytics. Implement robust data quality checks and governance frameworks
Monitor and optimize data workflows for scalability and efficiency
Make decisions in developing standard and companywide best practices for engineering and technology solutions, applying an understanding of industry best practices and new technologies; influence and lead the technology team to meet deliverables and drive new initiatives
Collaborate and consult with key technical experts, senior technology team, and external industry groups to resolve complex technical issues and achieve goals
Lead projects and teams, or serve as a peer mentor
Requirements:
5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
5+ years of experience building data pipelines on Google Cloud Platform (GCP) using Cloud Dataflow, Cloud Dataproc, Cloud Composer, BigQuery, Cloud Storage (GCS), Pub/Sub, or similar GCP-native frameworks
5+ years of experience with workflow orchestration using Cloud Composer
5+ years of strong development experience with SQL and Python for data engineering tasks
2+ years of experience with monitoring and observability using Cloud Monitoring, Grafana, or equivalent tools
Solid understanding of data modeling, ETL/ELT processes, and data governance best practices
Nice to have:
Experience with Data Fusion, Dataplex, and Dataprep
Extensive hands-on experience with BigQuery, Cloud Storage, Cloud SQL, Cloud Dataflow and Cloud Composer
Experience with performance monitoring and pipeline observability using GCP-native tools or third-party solutions
Proficiency with GCP resource management, workload optimization, and cost control strategies
Familiarity with BigQuery for large-scale analytics and schema evolution
Knowledge of data lakehouse patterns using Apache Iceberg or similar technologies
Well-versed in PySpark, Pandas, and Polars for data processing
Experience using Jira and Confluence for agile workflows
Experience with interactive notebook environments (Jupyter, Zeppelin, Databricks, or similar) for data engineering development