Hadoop Administrator

OPULENTSOFT

Location:
United States, Tampa

Contract Type:
Not provided

Salary:
Not provided

Requirements:

  • Strong Big Data and analytical skills (minimum 3 years of experience)
  • Experience in Hadoop cluster administration and configuration (a minimal health-check sketch follows this list)
  • Experience with Java and Unix-based systems
  • Ability to coordinate with multiple technical teams, business users, and customers
  • Strong communication skills
  • Strong troubleshooting skills
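
In practice, the administration and troubleshooting work above starts with routine cluster health checks. A minimal sketch in Python, assuming the `hdfs` CLI is available on a cluster node; the parsing is keyed to the typical `dfsadmin -report` layout and may need adjusting per Hadoop version:

```python
import subprocess

def hdfs_report() -> str:
    """Run `hdfs dfsadmin -report` and return its stdout."""
    result = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def dead_datanodes(report: str) -> int:
    """Parse the dead-datanode count; report lines typically look like
    'Dead datanodes (2):'. Returns 0 if the section is absent."""
    for line in report.splitlines():
        if line.startswith("Dead datanodes"):
            return int(line.split("(")[1].split(")")[0])
    return 0

if __name__ == "__main__":
    dead = dead_datanodes(hdfs_report())
    if dead > 0:
        print(f"ALERT: {dead} dead datanode(s) reported")
    else:
        print("HDFS report shows no dead datanodes")
```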

Additional Information:

Job Posted:
January 02, 2026

Similar Jobs for Hadoop Administrator

Hadoop Administrator

Location: United States, Atlanta
Salary: Not provided
Company: Logic Loops
Expiration Date: Until further notice

Requirements:
  • Hands-on experience with the Hadoop stack (HDFS, MapReduce, HBase, Pig, Hive, Oozie)
  • Extensive experience with Oracle 10g/11g databases and PL/SQL
  • Monitor and review Oracle database instances to identify potential maintenance and tuning issues
  • Expertise in systems administration, Linux tools, and configuration management in a large-scale environment
  • Troubleshoot and debug Hadoop ecosystem runtime issues
  • Recover from node failures and troubleshoot common Hadoop cluster issues
  • Document all production scenarios, issues, and resolutions
  • Collaborate on hardware architectural guidance, cluster capacity planning and estimation, and roadmaps for Hadoop cluster deployment
  • Evaluate Hadoop infrastructure requirements and design/deploy solutions (e.g., high-availability big data clusters)
  • Expertise in performance tuning, system dump analysis, and storage capacity management
Job Responsibility:
  • Deploy new Hadoop infrastructure and perform cluster upgrades, maintenance, troubleshooting, capacity planning, and resource optimization (a rough sizing sketch follows this list)
  • Review, develop, and implement strategies that preserve the availability, stability, security, and scalability of large Hadoop clusters
  • Interact with developers, architects, and other operations team members to resolve job performance issues
  • Prepare architecture, design, and operational documentation
  • Participate in a weekly on-call rotation to provide operational support
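
The capacity-planning duty above usually begins with simple storage arithmetic. A rough sizing sketch; every input figure here is an illustrative assumption, not a value from the posting:

```python
# Rough HDFS raw-capacity estimate. All inputs are illustrative assumptions.
daily_ingest_tb = 2.0          # raw data landed per day, in TB
retention_days = 365           # how long data is kept
replication_factor = 3         # HDFS default block replication
overhead = 0.25                # headroom for temp/intermediate job output

usable_tb = daily_ingest_tb * retention_days
raw_tb = usable_tb * replication_factor * (1 + overhead)

print(f"Usable data after retention: {usable_tb:,.0f} TB")
print(f"Raw cluster capacity needed: {raw_tb:,.0f} TB")
# With 2 TB/day kept for a year at 3x replication and 25% headroom,
# this comes to roughly 2,738 TB of raw disk.
```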

Database Administrator II

We are seeking a highly skilled Cloudera Hadoop Administrator (DBA) with hands-o...
Location: Not provided
Salary: 100,900.00 - 126,100.00 USD / Year
Company: ACE Hardware
Expiration Date: Until further notice

Requirements:
  • 4+ years of hands-on experience administering Cloudera Hadoop clusters
  • 2–3+ years of Databricks experience in production environments
  • 2+ years of Databricks administration experience on Azure (preferred)
  • Strong knowledge of Spark and Delta Lake architecture
  • Experience with IAM, Active Directory, and SSO integration
  • Familiarity with DevOps and CI/CD for data platforms
  • Deep understanding of Hadoop ecosystem: Hive, Impala, Spark, HDFS, YARN
  • Experience integrating data from DB2 to Hadoop/Databricks using tools like Sqoop or custom connectors
  • Scripting skills in Shell and/or Python for automation and system administration
  • Solid foundation in Linux/Unix system administration
Job Responsibility:
  • Manage and support Cloudera Hadoop clusters and services (HDFS, YARN, Hive, Impala, Spark, Oozie, etc.)
  • Perform cluster upgrades, patching, performance tuning, capacity planning, and health monitoring
  • Secure the Hadoop platform using Kerberos, Ranger, or Sentry
  • Develop and maintain automation and monitoring scripts
  • Ingest data using tools such as Sqoop, NiFi, Informatica DEI, and Qlik (a Sqoop import sketch follows this list)
  • Support release and deployment activities, including deployment of new releases across Dev/Test and Production environments
  • Integrate CI/CD pipelines (Git or custom tooling) for automated code deployment
  • Ensure minimal downtime, rollback capability, and alignment with change management policies
  • Maintain detailed release documentation, track changes in version control systems, and collaborate with development and operations teams to streamline deployment workflows
  • Administer and maintain Databricks workspaces in cloud environments (Azure or GCP)
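
For the DB2-to-Hadoop ingestion noted above, a Sqoop import is the common route. A hedged sketch that shells out from Python; the JDBC URL, account, table, and HDFS paths are all placeholder assumptions:

```python
import subprocess

# Placeholder connection details -- substitute real values.
db2_url = "jdbc:db2://db2-host.example.com:50000/SALESDB"   # hypothetical host/db
table = "SALES.ORDERS"                                       # hypothetical table
target = "/data/raw/orders"                                  # hypothetical HDFS dir

cmd = [
    "sqoop", "import",
    "--connect", db2_url,
    "--username", "etl_user",                # hypothetical account
    "--password-file", "/user/etl/.db2pw",   # keep the password off the CLI
    "--table", table,
    "--target-dir", target,
    "--num-mappers", "4",                    # parallelism; tune to source capacity
]
subprocess.run(cmd, check=True)
```
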
What we offer:
  • Incentive opportunities
  • Generous 401(k) retirement savings plan with matching and discretionary contributions
  • Comprehensive health coverage (medical, dental, vision and disability) & life insurance benefits
  • 21 days of vacation
  • Up to 6 paid holidays
  • Annual Ace Cares Week
  • 20 hours off work per year to volunteer
  • Opportunities to help Children’s Miracle Network Hospitals and the Ace Helpful Fund
  • On-site classes, facilitator-led courses, and a generous tuition assistance program
  • Frequent campus events (Employee Appreciation Week, vendor demos, cookouts, merchandise sales)

Contract Type: Fulltime

Hadoop and Bigdata Administrator

You will work in a multi-functional role with a combination of expertise in Syst...
Location: India, Indore; NOIDA
Salary: Not provided
Company: ClearTrail
Expiration Date: Until further notice

Requirements:
  • Linux administration
  • Experience in Python and shell scripting
  • Deploying and administering the Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems
  • Knowledge of Hadoop core components such as ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, and Spark
  • Knowledge of HBase clusters
Job Responsibility:
  • Deploy and administer the Hortonworks, Cloudera, and Apache Hadoop/Spark ecosystems
  • Install Linux operating systems and networking
  • Write Unix shell and Ansible scripts for automation
  • Maintain core components such as ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, and HBase
  • Handle the day-to-day running of Hadoop clusters using Ambari, Cloudera Manager, or other monitoring tools, ensuring the cluster is up and running at all times (a monitoring sketch follows this list)
  • Maintain HBase clusters and plan their capacity
  • Maintain the Solr cluster and plan its capacity
  • Work closely with the database, network, and application teams to make sure all big data applications are highly available and performing as expected
  • Manage the KVM virtualization environment
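
Day-to-day cluster monitoring of the kind described above can be scripted against the Ambari REST API. A minimal sketch; the endpoint, cluster name, and credentials are placeholder assumptions:

```python
import requests

# Hypothetical Ambari endpoint and credentials -- adjust for your cluster.
AMBARI = "http://ambari.example.com:8080/api/v1"
CLUSTER = "prod"
AUTH = ("admin", "admin")

def service_state(service: str) -> str:
    """Return the Ambari-reported state for one service (e.g. 'STARTED')."""
    resp = requests.get(
        f"{AMBARI}/clusters/{CLUSTER}/services/{service}",
        auth=AUTH,
        headers={"X-Requested-By": "ambari"},  # header Ambari expects on API calls
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["ServiceInfo"]["state"]

for svc in ("HDFS", "YARN", "KAFKA"):
    state = service_state(svc)
    print(f"{svc}: {state}")
    if state != "STARTED":
        print(f"  -> {svc} needs attention")
```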

Senior Systems Administrator

The role of the System Administrator includes supporting the implementation, tro...
Location: United States, Laurel
Salary: Not provided
Company: Wrench Technology
Expiration Date: Until further notice

Requirements:
  • Fourteen (14) years of professional experience as a SA
  • Bachelor’s degree in Computer Science or related discipline from an accredited college or university is required
  • Five (5) years of additional SA experience may be substituted for a bachelor’s degree
  • Provide expertise in troubleshooting IT systems
  • Provide thorough analysis and feedback to management and internal customers regarding escalated tickets
  • Extend support for dispatch system and hardware issues, remaining actively engaged in the resolution process
  • Handle configuration and management of UNIX and Windows (or other relevant) operating systems, including installation/loading of software, troubleshooting, maintaining integrity, configuring network components, and implementing enhancements to improve reliability and performance
  • NetApp experience required
  • Able to write scripts in the following languages: Python, Ruby, and Perl
Job Responsibility:
  • Supporting the implementation, troubleshooting, and upkeep of Information Technology (IT) systems
  • Overseeing the IT system infrastructure and associated processes
  • Providing assistance for day-to-day operations, monitoring, and resolving issues related to client/server/storage/network devices, as well as mobile devices (a small monitoring sketch follows this list)
  • Diagnosing and resolving problems
  • Configuring and managing UNIX and Windows operating systems
  • Installing and maintaining operating system software
  • Ensuring integrity and configuring network components
  • Implementing enhancements to operating systems to improve reliability and performance
  • Providing assistance with the installation, configuration, optimization, and administration of extensive Hadoop (Apache Accumulo) clusters dedicated to data-intensive computing tasks
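
As one small example of the kind of monitoring script this role calls for, a disk-usage check across a host's mount points (the mount list and alert threshold are illustrative):

```python
import shutil

# Illustrative mount points and threshold -- adjust per host.
MOUNTS = ["/", "/var", "/data"]
ALERT_PCT = 90.0

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)          # raises if the mount doesn't exist
    pct = usage.used / usage.total * 100
    flag = "ALERT" if pct >= ALERT_PCT else "ok"
    print(f"{mount}: {pct:5.1f}% used ({flag})")
```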

Splunk Consultant

DeployPartners deliver high-quality Service Assurance Solutions expertise throug...
Location: Australia, North Sydney
Salary: Not provided
Company: DeployPartners
Expiration Date: Until further notice

Requirements:
  • 3+ years of experience in installing and configuring Splunk
  • An understanding of the components of a larger-scale Splunk implementation
  • Extensive knowledge of operating systems, server administration, routers, switches, firewalls, load balancers, fault management, email servers, VM platforms, cloud services, Hadoop, IPS, IDS, and TCP/IP
  • Experience with both Linux and Windows operating systems; comfortable with the command line interface
  • Working knowledge of or recent experience with scripting languages (Bash, Perl), regular expressions, application development (Java, Python, .NET), and SQL (a field-extraction sketch follows this list)
  • Experience in successful startups in the area of system management
  • Ability to quickly explore, examine, and understand complex problems and how these relate to the customer's business
  • Able to quickly understand and interpret customer problems and navigate complex organisations
  • Effective at communicating clearly to technical and business audiences
  • Well-organised with a healthy sense of urgency and the ability to set, communicate, and meet aggressive goals
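
The regular-expression skills above map directly to Splunk-style field extraction. A small sketch against a made-up access-log line:

```python
import re

# A made-up access-log line for illustration.
line = '10.0.0.5 - - [02/Jan/2026:10:15:32 +0000] "GET /api/health HTTP/1.1" 200 512'

# Named groups mirror the fields a Splunk extraction would pull out.
pattern = re.compile(
    r'(?P<src_ip>\d+\.\d+\.\d+\.\d+) \S+ \S+ '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<uri>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

match = pattern.search(line)
if match:
    print(match.groupdict())
    # {'src_ip': '10.0.0.5', 'timestamp': '02/Jan/2026:10:15:32 +0000', ...}
```
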
Job Responsibility:
  • Responsible for system installations, configuration, testing, and design
  • Estimating required project effort and durations
  • Preparing and submitting weekly project reports on work executed
  • Preparing clear, concise, and professional project documentation
  • Assisting in pre-sales activities, including responding to RFPs, RFQs, and SOWs
  • Providing customer support on Splunk projects and assisting with logged tickets and with live and development systems when not on a customer site

Contract Type: Fulltime

Business Intelligence Developer

The CTI Enterprise Analytical Services (EAS) organization is actively recruiting...
Location: India, Pune
Salary: Not provided
Company: Citi
Expiration Date: Until further notice

Requirements:
  • 5+ years of experience as a developer or administrator of Business Intelligence or data federation tools such as Starburst
  • 3+ years of Linux, shell scripting, and Ansible experience
  • 8+ years of overall IT experience
  • Knowledge of the Hadoop ecosystem with experience in Hive, Spark, etc. is a plus
  • Knowledge of Java or any programming language is a plus
  • Good interpersonal skills with excellent communication skills - written and spoken English
  • Able to work on client projects in cross-functional teams
  • Good team player, interested in sharing knowledge, cross-training other team members, and learning new technologies and products
  • 5+ years of hands-on experience in setting up security (authentication and authorization) for Business Intelligence or data federation products
  • Experience with container technologies, Kubernetes, and cloud architectures, including some exposure to public cloud platforms such as AWS and GCP
Job Responsibility:
  • Deliver the tooling and capabilities needed to enable data & analytics services such as Starburst and Tableau on massive, distributed data sets (a connectivity sketch follows below)
  • Understand Engineering needs, including those required to build, maintain, and operate the system through all phases of its life
  • Create and maintain continuous integration and deployment processes, including testing and monitoring, to ensure the solution is reliable and measurable
  • Take full ownership of designing solutions and building blueprints, prototypes, and frameworks to drive enablement of new capabilities
  • Collaborate with cross-functional teams to build a portfolio of capabilities for recommendation and use in new product developments
  • Publish best practices, configuration recommendations, design patterns, tool/technology selection methodologies, and playbooks for Engineering and user communities
  • Collaborate with cross-functional Engineering teams to build a portfolio of capabilities to recommend and use in analytical product development across Citi lines of business
  • Enable hybrid cloud implementation, along with security, for Business Intelligence products
  • Enable Business Intelligence products on external cloud platforms as SaaS or PaaS solutions and integrate with various cloud and on-prem data sources
  • Build a reusable security and deployment framework for Business Intelligence services enabled on cloud and on-prem

Contract Type: Fulltime
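
Enabling Starburst access as described above typically goes through the open-source Trino client library. A minimal connectivity sketch, assuming the `trino` Python package and placeholder cluster coordinates:

```python
import trino

# Placeholder coordinates for a Starburst/Trino cluster.
conn = trino.dbapi.connect(
    host="starburst.example.com",  # hypothetical coordinator
    port=443,
    user="bi_user",                # hypothetical account
    catalog="hive",
    schema="default",
    http_scheme="https",
)

cur = conn.cursor()
cur.execute("SELECT table_name FROM information_schema.tables LIMIT 5")
for (table_name,) in cur.fetchall():
    print(table_name)
```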

Bigdata Support Lead Engineer

The Lead Data Analytics analyst is responsible for managing, maintaining, and op...
Location: India, Bengaluru; Chennai
Salary: Not provided
Company: Citi
Expiration Date: Until further notice

Requirements:
  • 10+ years of total IT experience
  • 5+ years of experience in supporting Hadoop (Cloudera)/big data technologies
  • 5+ years of experience in public cloud infrastructure (AWS or GCP)
  • Experience with Kubernetes and cloud-native technologies
  • Experience with all aspects of DevOps (source control, continuous integration, deployments, etc.)
  • Advanced knowledge of the Hadoop ecosystem and Big Data technologies
  • Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
  • Knowledge of troubleshooting techniques for Hive, Spark, YARN, Kafka, and HDFS
  • Advanced Linux system administration and scripting skills (Shell, Python)
  • Experience in designing and developing data pipelines for data ingestion or transformation using Spark with Java, Scala, or Python
Job Responsibility:
  • Lead day-to-day operation and support for Cloudera Hadoop ecosystem components (HDFS, YARN, Hive, Impala, Spark, HBase, etc.)
  • Troubleshoot issues related to data ingestion, job failures, performance degradation, and service unavailability
  • Monitor cluster health using Cloudera Manager and respond to alerts, logs, and metrics (a health-check sketch follows below)
  • Collaborate with engineering teams to analyze root causes and implement preventive measures
  • Coordinate patching, service restarts, failovers, and rolling restarts for cluster maintenance
  • Assist with user onboarding, access control, and issues in accessing cluster services
  • Contribute documentation to the knowledge base
  • Work on data recovery, replication, and backup support tasks
  • Responsible for moving all legacy workloads to the cloud platform
  • Ability to research and assess open-source technologies and public cloud (AWS/GCP) components to recommend and integrate into the design and implementation

Contract Type: Fulltime
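
Cluster-health monitoring against Cloudera Manager can be scripted through its REST API. A minimal sketch; the endpoint, credentials, and API version are placeholder assumptions (the version path varies by CM release):

```python
import requests

# Hypothetical Cloudera Manager endpoint; API version varies by CM release.
CM = "http://cm.example.com:7180/api/v33"
CLUSTER = "prod-cluster"
AUTH = ("admin", "admin")

resp = requests.get(f"{CM}/clusters/{CLUSTER}/services", auth=AUTH, timeout=10)
resp.raise_for_status()

# Each service object carries a healthSummary (e.g. GOOD, CONCERNING, BAD).
for svc in resp.json()["items"]:
    name = svc["name"]
    health = svc.get("healthSummary", "UNKNOWN")
    print(f"{name}: {health}")
    if health != "GOOD":
        print(f"  -> investigate {name}")
```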

Principal Site Reliability Engineer

Location: United States, Ft. Meade
Salary: Not provided
Company: CipherLogix
Expiration Date: Until further notice

Requirements:
  • Fourteen (14) years of experience in software development/engineering, including requirements analysis, software development, installation, integration, evaluation, enhancement, maintenance, testing, and problem diagnosis/resolution
  • Ten (10) years of experience in system engineering/architecture
  • Ten (10) years of experience working with products that support highly distributed, massively parallel computation needs such as HBase, Hadoop, CloudBase/Accumulo, BigTable, Cassandra, Scality, etc.
  • At least ten (10) years of experience writing software scripts using scripting languages such as Perl, Python, or Ruby for software automation
  • At least four (4) years of experience managing and monitoring large cloud systems (>200 nodes); Cloud Systems Administrator or Developer certification
  • Experience in performing and providing technical direction for the development, engineering, interfacing, integration, and testing of complete hardware/software systems, including monitoring the technical health of a system, improving organizational processes, and implementing postmortem (failure) analysis and incident management
  • Ten (10) years of experience in the cleared environment
  • Ten (10) years of demonstrated experience developing software for one of the following: Windows, UNIX, or Linux OS
  • Knowledge of and experience with developing distributed storage routing and querying algorithms (a consistent-hashing sketch follows below)
  • Experience in developing documentation required to support a program's technical issues and training situations

Contract Type: Fulltime
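
A classic answer to the distributed storage routing requirement above is consistent hashing, which keeps key movement small when nodes join or leave. A toy sketch with made-up node names:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Map a string to a point on the ring via MD5 (fine for illustration)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring: routes keys to nodes with minimal reshuffling."""

    def __init__(self, nodes, vnodes: int = 100):
        # Place `vnodes` virtual points per node to smooth the key distribution.
        self._ring = sorted(
            (_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def route(self, key: str) -> str:
        """Return the node owning `key`: first ring point clockwise of its hash."""
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.route("row:12345"))   # deterministic routing, e.g. 'node-b'
```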