MongoDB Database Management: Designing, implementing, and managing MongoDB database architectures, including schema design, indexing, replication, sharding, and performance optimization for large-scale data
Data Pipeline Development: Building and maintaining robust ETL/ELT (extract-transform-load / extract-load-transform) pipelines for ingesting, transforming, and loading structured and unstructured data from diverse sources into MongoDB and other data platforms
Performance Optimization: Optimizing MongoDB queries, aggregation pipelines, and overall database performance to ensure efficient data processing and retrieval
Data Integrity and Security: Ensuring data integrity, quality, and security by implementing appropriate validation, monitoring, access controls, and encryption measures within MongoDB and related systems
Collaboration and Integration: Working closely with application developers, data scientists, data analysts, and DevOps teams to understand data requirements, integrate MongoDB with other systems and cloud services (e.g., GCP, AWS, Azure), and support data-driven applications
Troubleshooting and Monitoring: Monitoring data pipeline performance, troubleshooting issues, and implementing solutions to ensure data reliability and availability
Documentation: Creating and maintaining documentation for data pipelines, database schemas, and data engineering processes
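To illustrate the pipeline and performance responsibilities above, here is a minimal, hedged sketch in Python of one ETL transform step and a MongoDB aggregation pipeline. The collection name ("orders") and field names are hypothetical; the pipeline is built as plain Python structures so it can be inspected without a running server, and with pymongo it would be passed to `db.orders.aggregate(pipeline)`.

```python
def transform(raw_records):
    """Normalize raw source records before loading into MongoDB.

    A typical 'T' step: coerce types, trim whitespace, and supply
    defaults so downstream queries see consistent documents.
    """
    cleaned = []
    for rec in raw_records:
        cleaned.append({
            "customer_id": str(rec["customer_id"]).strip(),
            "amount": float(rec["amount"]),
            "status": rec.get("status", "unknown").lower(),
        })
    return cleaned


# Aggregation pipeline: total completed-order amount per customer,
# largest first. Placing $match first lets MongoDB use an index on
# "status" before the $group stage, which matters at scale.
pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {"_id": "$customer_id", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
]
```

The stage ordering here is the usual optimization lever: filtering early with `$match` shrinks the working set before the more expensive grouping stage runs.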
Requirements:
Primary Skill: Data Engineer
Secondary: MongoDB
Experience: Minimum 10 years
Strong MongoDB Expertise: In-depth knowledge of MongoDB's features, including document modeling, aggregation framework, indexing, sharding, and administration
Programming Proficiency: Strong programming skills in languages like Python, Java, or Node.js, particularly with MongoDB drivers and related libraries
Data Pipeline Tools: Experience with data pipeline tools and technologies such as Apache Spark, Airflow, Kafka, or cloud-native data services
Cloud Platform Experience: Familiarity with cloud platforms (e.g., Google Cloud Platform, AWS, Azure) and their data-related services
Data Modeling and Design: Ability to design efficient and scalable data models for NoSQL databases
Problem-Solving and Analytical Skills: Strong aptitude for troubleshooting data-related issues and optimizing data systems
Communication and Teamwork: Excellent communication and collaboration skills to work effectively with cross-functional teams
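As a concrete illustration of the data-modeling skill listed above, the following is a hedged sketch of a NoSQL document model and index specs for a hypothetical "products" collection; all field names are illustrative. Embedding the one-to-few reviews array inside the product document serves the common "read the whole product page" access pattern without a `$lookup`.

```python
# Hypothetical embedded document model for a product catalog.
product_doc = {
    "sku": "ABC-123",
    "name": "Widget",
    "category": "tools",
    "price": 9.99,
    "reviews": [                      # embedded one-to-few relationship
        {"user": "u1", "rating": 5},
        {"user": "u2", "rating": 4},
    ],
}

# Index specs as (field, direction) pairs; with pymongo these would be
# passed to collection.create_index(). 1 = ascending, -1 = descending.
indexes = [
    [("category", 1), ("price", -1)],  # compound: browse a category by price
    [("sku", 1)],                      # single-field lookup key
]
```

The compound index follows the usual prefix rule: it can also serve queries on `category` alone, so a separate single-field `category` index would be redundant.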