We are looking for a highly skilled and passionate BigData Platform Admin who acts as a crucial liaison between the Hadoop admin team and various application development teams. The role is responsible for ensuring the optimal performance, stability, and future readiness of the Hadoop platform, focusing on strategic oversight rather than day-to-day administrative tasks. As a strategist, you will facilitate communication, drive best practices, assess the technical impact of platform changes, and contribute to the overall health and efficiency of the Hadoop ecosystem.
Job Responsibilities:
Stakeholder Unification: Serve as a single point of contact and unified stakeholder for all Hadoop-related concerns
Platform Upgrade Management: Review and assess upcoming Hadoop platform upgrades
Technical Feature and Library Evaluation: Identify and evaluate new technical features and libraries within the Hadoop ecosystem
Cluster Health and Optimization: Monitor overall cluster health, performance metrics, and resource utilization
Resource Management and Housekeeping: Oversee and manage the allocation of cluster resources
Best Practices and Overall Excellence: Define, document and promote best practices for Hadoop application development
Solution Proposal and Innovation: Research and propose suitable technical solutions to address emerging business needs
Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field
5+ years of experience in big data environments, with a focus on Hadoop
Proven experience in a technical leadership or architect role, working closely with both operations and development teams
Experience with distributed systems, data processing frameworks (e.g., Spark, Hive), and data warehousing concepts
Deep understanding of Hadoop ecosystem components (HDFS, YARN, MapReduce, Hive, Spark, Kafka, etc.)
Strong understanding of Spark architecture and core concepts
Proficiency in Linux scripting for automation and system management
Basic to intermediate proficiency in Python/Scala for scripting and data manipulation
Experience with monitoring tools (e.g., Grafana, Prometheus) and logging frameworks
Awareness of various data engineering solutions and consumption tools within the big data landscape
Strong understanding of security best practices in a big data environment
Nice to have:
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus