Build REST APIs, Bulk APIs, and SOAP APIs to integrate different systems (a REST sketch follows this list)
Strong experience designing schemas and writing PostgreSQL queries, and SQL queries in general
Solid understanding of AWS Lambda, S3 bucket uploads, EMR, and EC2 (an upload sketch follows this list)
Understanding of the object model in Salesforce applications
Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data (a pipeline sketch follows this list)
Build ETL pipelines with Informatica or other ETL tools
Support data governance and metadata management
Experience using static code analysis tools
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve complex technical challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Analyze business and technical requirements and translate them into simple development tasks
Execute unit and integration tests, and contribute to maintaining software quality
Identify and fix bugs and defects during development or testing phases
Contribute to the maintenance and support of applications by monitoring performance and reporting issues
Use CI/CD pipelines as part of DevOps practices and assist in the release process
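
The REST sketch referenced above: a minimal Python example of the kind of system integration this role involves. The endpoint URL, token, and payload shape are hypothetical, invented for illustration only:

    import requests

    BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

    def push_record(record: dict) -> dict:
        # POST one record to the hypothetical integration endpoint.
        resp = requests.post(f"{BASE_URL}/records", json=record,
                             headers=HEADERS, timeout=30)
        resp.raise_for_status()  # surface HTTP errors instead of failing silently
        return resp.json()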
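
The upload sketch referenced above: a minimal boto3 example of an S3 bucket upload. The bucket name and key are hypothetical, and AWS credentials are assumed to come from the environment:

    import boto3

    s3 = boto3.client("s3")  # picks up credentials from the environment

    def upload_file(local_path: str, bucket: str, key: str) -> None:
        # Upload a local file to S3; boto3 handles multipart uploads for large files.
        s3.upload_file(local_path, bucket, key)

    upload_file("export.csv", "my-data-bucket", "raw/export.csv")  # hypothetical names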
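
The pipeline sketch referenced above: a minimal PySpark/Delta Lake ingestion-and-transformation example, assuming a Spark session with Delta support (as provided on Databricks). The paths and column names are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

    # Ingest raw CSV, filter and timestamp the rows, then append to a Delta table.
    raw = spark.read.option("header", "true").csv("/mnt/raw/events.csv")  # hypothetical path
    cleaned = (
        raw.filter(F.col("event_type").isNotNull())
           .withColumn("ingested_at", F.current_timestamp())
    )
    cleaned.write.format("delta").mode("append").save("/mnt/delta/events")  # hypothetical path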
Requirements:
Bachelor's or Master's degree in Computer Science, Data Science, IT, or a related field, and 4 to 8 years of relevant experience
Hands-on experience with big data technologies and platforms such as Databricks, REST APIs, Bulk APIs, and Apache Spark (PySpark, SparkSQL), including Python for workflow orchestration and performance tuning of big data processing
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores (a query sketch follows this list)
Strong programming skills in Python, PySpark, and SQL
Familiarity with Informatica and/or other ETL tools
Experience working with cloud data services (Azure, AWS, or GCP)
Strong understanding of data modeling and entity relationships
Excellent problem-solving and analytical skills
Strong communication and interpersonal abilities
High attention to detail and commitment to quality
Ability to prioritize tasks and work under pressure
Team-oriented with a proactive and collaborative mindset
Willingness to mentor junior developers and promote best practices
Adaptable to changing project requirements and evolving technology
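
The query sketch referenced above: a minimal SparkSQL example of the SQL proficiency this role calls for, using a window function. The orders table and its columns are hypothetical and assumed to be already registered with the session:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Rank each customer's orders by amount and keep the top three.
    top_orders = spark.sql("""
        SELECT customer_id,
               order_id,
               amount,
               RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
        FROM orders
    """).filter("amount_rank <= 3")
    top_orders.show()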
Nice to have:
Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing (a test sketch follows this list)
Knowledge of Python/R, REST APIs, SQL, Databricks, cloud data platforms
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
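
The test sketch referenced above: a minimal pytest example of the automated-testing practice mentioned in this list. The normalize_amount helper is hypothetical, invented for the example:

    # test_transform.py -- run with `pytest`

    def normalize_amount(raw: str) -> float:
        # Hypothetical helper: parse a currency string into a float.
        return float(raw.replace("$", "").replace(",", ""))

    def test_normalize_amount():
        assert normalize_amount("$1,234.50") == 1234.50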