Join our team as a Data Developer and play a key role in connecting data analytics with software engineering. You will help design, develop, and optimize cutting-edge data infrastructure, with a focus on Databricks and cloud environments, while supporting and improving existing applications. You will collaborate closely with business stakeholders, IT teams, and data engineers to deliver high-quality, scalable, and efficient data solutions.
Job Responsibilities:
Collaborate with business stakeholders and IT teams to understand, document, and design data warehouse processes
Contribute to the definition, development, and implementation of data warehouse solutions
Design, develop, test, optimize, and deploy ETL pipelines and related data transformations (see the sketch after this list)
Create and maintain data mapping logic to transfer content from various source systems into the data warehouse
Plan and coordinate ETL and database rollouts alongside project teams
Provide support, maintenance, troubleshooting, and resolution for ETL processes
Implement Data Products in a Data Mesh architecture and optimize pipelines for production-ready workflows
Administer and manage data environments, the supporting technology stack, and traditional databases
Implement automation and CI/CD practices to ensure efficient development and deployment
Participate in diagnosing and solving complex data-related problems, documenting configurations, and maintaining best practices
Collaborate effectively with cross-functional teams to deliver high-quality data solutions that meet business requirements
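To give a concrete sense of the ETL work described above, here is a minimal PySpark sketch of an extract-transform-load step, assuming a hypothetical orders feed; the paths, table, and column names are illustrative only and do not come from the job description.

# Minimal ETL sketch in PySpark (hypothetical paths and column names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read a raw export from a hypothetical source system.
raw_orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: normalize types, drop malformed rows, and aggregate revenue per customer and day.
orders = (
    raw_orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["customer_id", "order_ts", "amount"])
)
daily_revenue = (
    orders
    .groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Load: write the curated result to a warehouse-style target
# (plain Parquet here; on Databricks this would typically be a Delta table).
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_revenue")

spark.stop()

The same extract, transform, and load shape applies whether the pipeline is expressed as an ODI package, a Databricks job, or an Oracle stored procedure; only the tooling around the endpoints differs.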
Requirements:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
Minimum 3 years of experience in Data Engineering, Data DevOps, or related roles, preferably in banking or telecommunications
Advanced expertise in SQL, PL/SQL, Python, and PySpark, as well as in ETL tools such as ODI or similar
Experience in data modeling, source system analysis, and database design
Skilled in designing, developing, testing, optimizing, and deploying ETL pipelines (ODI packages, Databricks jobs, Oracle stored procedures)
Familiarity with data warehouse concepts, data cataloging, profiling, and mapping
Experience with databases such as Oracle, PostgreSQL, or similar large-scale systems
Knowledge of data visualization and exploration tools
Understanding of microservices architectures, cloud solutions (AWS, Databricks), and CI/CD practices (Git, Jenkins, GitHub Actions, Ansible/Terraform)
Professional level of English (spoken and written)
Fast learner, proactive, and eager to explore new technologies
Nice to have:
Familiarity with Agile/Scrum methodology