Wells Fargo is seeking a Senior Software Engineer (Data Developer) with hands-on experience in SQL/NoSQL databases, PySpark, BigQuery, ETL, and cloud-based data processing, along with an understanding of database architecture, clustering, and database tuning. You will build and enhance batch and streaming data pipelines that support analytics and customer-centric decisioning, contribute to data modernization initiatives, and support cloud-native data platforms.
Job Responsibilities:
Lead moderately complex initiatives and deliverables within technical domain environments
Contribute to large-scale planning of strategies
Design, code, test, debug, and document projects and programs associated with the technology domain, including upgrades and deployments
Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs, leveraging a solid understanding of the function, policies, procedures, and compliance requirements
Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
Lead projects, act as an escalation point, and provide guidance and direction to less experienced staff
Develop and enhance batch and streaming data pipelines using PySpark, Spark, and Kafka
Participate in modernizing legacy data processes into cloud-based data platforms (GCP/AWS/Azure)
Build and maintain ETL/ELT workflows using enterprise ETL tools
Work with SQL and NoSQL databases to deliver clean, reliable, and well‑structured data assets
Support performance tuning, debugging, and optimization of data pipelines
Contribute to observability and monitoring of data jobs and workflows
Collaborate with Product Owners, Architects, and other engineering teams
Follow best practices in CI/CD, DevOps, and Agile development
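The batch-pipeline responsibilities above can be pictured with a minimal extract-transform-load sketch. This uses only the Python standard library (csv and sqlite3) rather than PySpark or an enterprise ETL tool, and the table and column names are hypothetical, chosen purely for illustration:

```python
import csv
import io
import sqlite3

# Extract: read raw records from a CSV source (here an in-memory string;
# in a real pipeline this would be a file, object store, or message stream).
raw = io.StringIO("customer_id,amount\n1,10.50\n2,abc\n3,7.25\n")
rows = list(csv.DictReader(raw))

# Transform: keep only rows whose amount parses as a number.
def valid(row):
    try:
        float(row["amount"])
        return True
    except ValueError:
        return False

clean = [(int(r["customer_id"]), float(r["amount"])) for r in rows if valid(r)]

# Load: write the cleaned records into a relational target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)", clean)

total = conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()
print(total)  # (2, 17.75)
```

In production the same extract/transform/load shape would typically be expressed as PySpark DataFrame operations reading from Kafka or cloud storage, with the validation step handled by schema enforcement.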
Requirements:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education
Strong SQL skills, including query tuning and optimization
Experience with PySpark, Spark, Hadoop, Hive or other Big Data technologies
Hands‑on experience with ETL/ELT tools (e.g., Informatica, Talend)
Knowledge of SQL and NoSQL databases (Oracle, PostgreSQL, MongoDB)
Experience working with Kafka or other streaming platforms
Exposure to GCP services such as BigQuery, Cloud Composer (or AWS/Azure equivalents)
Strong Linux/Unix command-line and troubleshooting skills
Familiarity with CI/CD tools such as GitHub, Jenkins, Gradle
Experience with monitoring tools like Splunk, Grafana, CloudWatch, AppDynamics, Elastic
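As a small illustration of the query-tuning requirement above, the sketch below uses SQLite's EXPLAIN QUERY PLAN to show an index turning a full table scan into an index search. The schema and index names are hypothetical; plan wording varies slightly across SQLite versions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 20.0), ("alice", 5.0)])

query = "SELECT SUM(total) FROM orders WHERE customer = ?"

# Without a supporting index, the planner scans the whole table.
plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("alice",)).fetchone()
print(plan[-1])  # e.g. "SCAN orders"

# After adding an index on the filtered column, the same query
# becomes an index search instead of a scan.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("alice",)).fetchone()
print(plan[-1])  # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer=?)"
```

The same habit of checking the plan before and after a change carries over to BigQuery execution details or Spark's `explain()` on larger platforms.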
Nice to have:
Strong analytical and problem‑solving ability
Good communication skills with the ability to work with cross‑functional teams
Collaborative, proactive, and willing to learn
Ability to work in a dynamic environment and adapt to changing priorities