
Hadoop Administrator


Logic Loops

Location:
Atlanta, United States


Contract Type:
Not provided

Salary:
Not provided

Job Responsibility:

  • Deploy new Hadoop infrastructure; perform cluster upgrades, maintenance, troubleshooting, capacity planning, and resource optimization
  • Review, develop, and implement strategies that preserve the availability, stability, security and scalability of large Hadoop clusters
  • Interact with developers, architects and other operation team members to resolve job performance issues
  • Preparation of architecture, design and operational documentation
  • Participation in a weekly on-call rotation to provide operational support

Requirements:

  • Hands-on experience with the Hadoop stack (HDFS, MapReduce, Hbase, Pig, Hive, Oozie)
  • Extensive experience with Oracle 10g/11g databases and PL/SQL
  • Monitor and review Oracle database instances to identify potential maintenance and tuning issues
  • Expertise in systems administration, Linux tools, configuration management in a large-scale environment
  • Troubleshoot and debug Hadoop ecosystem runtime issues
  • Recover from node failures and troubleshoot common Hadoop cluster issues
  • Document all production scenarios, issues, and resolutions
  • Teamwork in providing hardware architectural guidance, planning, estimating cluster capacity, and creating roadmaps for Hadoop cluster deployment
  • Evaluate Hadoop infrastructure requirements and design/deploy solutions (e.g., high-availability big data clusters)
  • Expertise in performance tuning, system dump analysis, and storage capacity management
  • Experience with versioning, change control, problem management
  • Strong communication and technical writing skills
  • Result-oriented engineer with a laser-sharp delivery focus
  • Must-Have Primary Skill: Technical Lead-Big Data (BD)-Apache Hadoop (HDFS)-HBase/Hive/Pig/Mahout/Flume/Sqoop/MapReduce/YARN
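Several of the duties above (troubleshooting runtime issues, recovering from node failures) come down to reading tool output. As an illustrative sketch only, here is a small Python helper that parses the text emitted by `hdfs dfsadmin -report` to flag dead DataNodes; the sample report and hostnames are made up, and the exact report layout varies by Hadoop version, so the markers should be checked against your distribution.

```python
# Sketch: flag dead DataNodes from `hdfs dfsadmin -report` text output.
# The section headers ("Live datanodes", "Dead datanodes") follow common
# Hadoop 2.x/3.x formatting; verify against your cluster's actual output.

def dead_datanodes(report: str) -> list:
    """Return hostnames listed under the 'Dead datanodes' section."""
    dead, in_dead_section = [], False
    for raw in report.splitlines():
        line = raw.strip()
        if line.startswith("Dead datanodes"):
            in_dead_section = True
        elif line.startswith("Live datanodes"):
            in_dead_section = False
        elif in_dead_section and line.startswith("Hostname:"):
            dead.append(line.split(":", 1)[1].strip())
    return dead

# Made-up sample resembling `hdfs dfsadmin -report` output:
SAMPLE_REPORT = """\
Live datanodes (2):
Name: 10.0.0.11:9866 (dn1.example.com)
Hostname: dn1.example.com
Name: 10.0.0.12:9866 (dn2.example.com)
Hostname: dn2.example.com

Dead datanodes (1):
Name: 10.0.0.13:9866 (dn3.example.com)
Hostname: dn3.example.com
"""

print(dead_datanodes(SAMPLE_REPORT))  # ['dn3.example.com']
```

In practice the report text would come from running `hdfs dfsadmin -report` on a cluster node, and a non-empty result would trigger an alert or the usual recovery steps (e.g., `hdfs dfsadmin -refreshNodes` after editing the exclude file).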

Additional Information:

Job Posted:
December 10, 2025


Similar Jobs for Hadoop Administrator

Database Administrator II

We are seeking a highly skilled Cloudera Hadoop Administrator (DBA) with hands-o...
Location:
Not provided
Salary:
100,900.00 - 126,100.00 USD / Year
ACE Hardware
Expiration Date
Until further notice
Requirements
  • 4+ years of hands-on experience administering Cloudera Hadoop clusters
  • 2–3+ years of Databricks experience in production environments
  • 2+ years of Databricks administration experience on Azure (preferred)
  • Strong knowledge of Spark and Delta Lake architecture
  • Experience with IAM, Active Directory, and SSO integration
  • Familiarity with DevOps and CI/CD for data platforms
  • Deep understanding of Hadoop ecosystem: Hive, Impala, Spark, HDFS, YARN
  • Experience integrating data from DB2 to Hadoop/Databricks using tools like Sqoop or custom connectors
  • Scripting skills in Shell and/or Python for automation and system administration
  • Solid foundation in Linux/Unix system administration
Job Responsibility
  • Manage and support Cloudera Hadoop clusters and services (HDFS, YARN, Hive, Impala, Spark, Oozie, etc.)
  • Perform cluster upgrades, patching, performance tuning, capacity planning, and health monitoring
  • Secure the Hadoop platform using Kerberos, Ranger, or Sentry
  • Develop and maintain automation and monitoring scripts
  • Ingest data using tools such as Sqoop, NiFi, Informatica DEI, Qlik
  • Support release and deployment activities, including deployment of new releases across Dev/Test and Production environments
  • Integration of CI/CD pipelines (Git, or custom tooling) for automated code deployment
  • Ensuring minimal downtime, rollback capability, and alignment with change management policies
  • Maintain detailed release documentation, track changes in version control systems, and collaborate with development and operations teams to streamline deployment workflows
  • Administer and maintain Databricks workspaces in cloud environments (Azure, or GCP)
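The DB2-to-Hadoop ingestion named in this role is typically driven by a `sqoop import` invocation. As a hedged sketch (the JDBC URL, table name, and HDFS path are hypothetical placeholders, not values from the posting), a small Python helper that assembles the argument list:

```python
# Sketch: assemble a `sqoop import` command line for a DB2 -> HDFS pull.
# The flags (--connect, --table, --target-dir, --num-mappers) are standard
# Sqoop 1.x options; all concrete values below are illustrative only.

def sqoop_import_cmd(jdbc_url: str, table: str, target_dir: str,
                     num_mappers: int = 4) -> list:
    """Build the argv list for a basic table import."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,       # e.g. a DB2 JDBC URL
        "--table", table,            # source table name
        "--target-dir", target_dir,  # HDFS destination directory
        "--num-mappers", str(num_mappers),
    ]

cmd = sqoop_import_cmd(
    "jdbc:db2://db2.example.com:50000/SALES",  # hypothetical host/database
    "ORDERS",
    "/data/raw/orders",
)
print(" ".join(cmd))
```

A wrapper like this would normally hand the list to `subprocess.run(cmd, check=True)` from a cron- or Oozie-triggered job; keeping the command as a list rather than a shell string sidesteps quoting issues.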
What we offer
  • Incentive opportunities
  • Generous 401(k) retirement savings plan with matching and discretionary contributions
  • Comprehensive health coverage (medical, dental, vision and disability) & life insurance benefits
  • 21 days of vacation
  • Up to 6 paid holidays
  • Annual Ace Cares Week
  • 20 hours off work per year to volunteer
  • Opportunities to help Children’s Miracle Network Hospitals and the Ace Helpful Fund
  • On-site classes, facilitator-led courses, and a generous tuition assistance program
  • Frequent campus events (Employee Appreciation Week, vendor demos, cookouts, merchandise sales)
  • Fulltime

Hadoop Administrator

Location:
Tampa, United States
Salary:
Not provided
OPULENTSOFT
Expiration Date
Until further notice
Requirements
  • Strong Big Data and analytical skills – minimum 3 years' experience
  • Experience in Hadoop cluster administration and configuration
  • Experience in Java and Unix based systems
  • Ability to coordinate with multiple technical teams, business users, and customers
  • Strong communication
  • Strong troubleshooting skills

Hadoop and Bigdata Administrator

You will work in a multi-functional role with a combination of expertise in Syst...
Location:
Indore / Noida, India
Salary:
Not provided
ClearTrail
Expiration Date
Until further notice
Requirements
  • Linux Administration
  • Experience in Python and Shell Scripting
  • Deploying and administering Hortonworks, Cloudera, Apache Hadoop/Spark ecosystem
  • Knowledge of Hadoop core components such as ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, etc.
  • Knowledge of HBase clusters
Job Responsibility
  • Deploying and administering Hortonworks, Cloudera, Apache Hadoop/Spark ecosystem
  • Installing Linux Operating System and Networking
  • Writing Unix SHELL/Ansible Scripting for automation
  • Maintaining core components such as ZooKeeper, Kafka, NiFi, HDFS, YARN, Redis, Spark, HBase, etc.
  • Taking care of the day-to-day running of Hadoop clusters using Ambari, Cloudera Manager, or other monitoring tools, ensuring that the Hadoop cluster is up and running at all times
  • Maintaining HBase clusters and capacity planning
  • Maintaining Solr clusters and capacity planning
  • Work closely with the database team, network team and application teams to make sure that all the big data applications are highly available and performing as expected
  • Manage KVM Virtualization environment

Senior Systems Administrator

The role of the System Administrator includes supporting the implementation, tro...
Location:
Laurel, United States
Salary:
Not provided
Wrench Technology
Expiration Date
Until further notice
Requirements
  • Ten (10) years of professional experience as a Systems Administrator (SA)
  • Bachelor’s degree in Computer Science or related discipline from an accredited college or university is required
  • Five (5) years of additional SA experience may be substituted for a bachelor’s degree
  • Provide expertise in troubleshooting IT systems
  • Provide thorough analysis and feedback to management and internal customers regarding escalated tickets
  • Extend support for dispatch system and hardware issues, remaining actively engaged in the resolution process
  • Handle configuration and management of UNIX and Windows (or other relevant) operating systems, including installation/loading of software, troubleshooting, maintaining integrity, configuring network components, and implementing enhancements to improve reliability and performance
  • NetApp experience required
  • Able to write in the following scripting languages: Python, Ruby, and Perl
Job Responsibility
  • Supporting the implementation, troubleshooting, and upkeep of Information Technology (IT) systems
  • Overseeing the IT system infrastructure and associated processes
  • Providing assistance for day-to-day operations, monitoring, and resolving issues related to client/server/storage/network devices, as well as mobile devices
  • Diagnosing and resolving problems
  • Configuring and managing UNIX and Windows operating systems
  • Installing and maintaining operating system software
  • Ensuring integrity and configuring network components
  • Implementing enhancements to operating systems to improve reliability and performance
  • Provides assistance with the installation, configuration, optimization, and administration of extensive Hadoop (Apache Accumulo) clusters dedicated to data-intensive computing tasks

Bigdata SME

The selected candidate will work directly with the DriveOhio and IT team in rese...
Location:
Columbus, United States
Salary:
Not provided
Ocean Blue Solutions
Expiration Date
Until further notice
Requirements
  • 4-year college degree in computer science or related field with advanced study preferred
  • 5 yrs. experience in Hadoop HDFS Build and administration, certification preferred
  • 5 yrs. experience in working with, installing, and setting up HBase, MongoDB, Maven, DocumentDB, Amazon DynamoDB, BigTable, Cassandra, or Druid
  • 5 yrs. experience working daily within an Agile team
  • 5 yrs. experience working within a paired programming environment
  • 7 yrs. experience in software development as a developer
  • 7 yrs. experience in IT infrastructure deployment
  • 7 yrs. experience in Open Source software deployment
  • 7 yrs. experience in Web systems and application deployment
  • 7 yrs. experience in Requirements Gathering and Use Case development
Job Responsibility
  • Operate effectively as a big data SME with knowledge of the latest tools, applications and cloud infrastructures necessary to implement a powerful open source system
  • Operate as technical expert with intimate knowledge regarding all components that comprise the entire solution set
  • Serve as the SME on the host of applications and environments required to serve the users
  • Operate as SME in the implementation of Open Source Big data solutions from infrastructure to applications
  • Assist with developing the delivery Epics, Stories and Tasks that will comprise the solution set for BI
  • Operate as the cross technology and platform SME
  • Act as Open Source advocate and practitioner
  • Assist in the execution of the process operating as Agile evangelist and master
  • Mentor teams involved in developing highly technical solutions and promote culture that values input from all team members
  • Promote continuous learning and deliver value to the customer

Big Data System Specialist

We are seeking a System Engineering Specialist to manage and optimise Big Data p...
Location:
Bucharest, Romania
Salary:
Not provided
Vodafone
Expiration Date
Until further notice
Requirements
  • University degree with 3–5 years of experience in IT systems administration
  • Proven expertise in Big Data solutions (Hadoop ecosystem: HDFS, YARN, MapReduce), Machine Learning, and Teradata
  • Strong knowledge of Red Hat Enterprise Linux or Microsoft Windows operating systems
  • Proficiency in procedural programming and shell scripting (Python, KSH, Bash, Perl)
  • Familiarity with ITIL practices and advanced cyber security principles
  • Understanding of SOX compliance
  • Excellent problem-solving skills, structured delivery approach, and ability to manage multiple priorities under pressure
  • Strong documentation skills and customer-oriented mindset
  • Fluent in English (written and spoken)
Job Responsibility
  • Participate in all phases of IT projects, including defining technical requirements, evaluating solutions, developing scripts, and implementing platforms
  • Develop automation scripts and programs to extend functionality and streamline recurring activities
  • Design, configure, and implement solutions involving operating systems and Big Data technologies
  • Create and test disaster recovery solutions and maintain system resilience
  • Optimise existing architectures and perform upgrades to meet evolving business needs
  • Provide support for transformation projects and technology refresh programmes to enhance service availability and performance
  • Investigate and resolve performance incidents, ensuring minimal impact on client operations
  • Install and maintain operating systems and perform proactive maintenance using automated tools
  • Evaluate emerging technologies and contribute to proof-of-concept initiatives
  • Be available for exceptional situations requiring extended hours
What we offer
  • Hybrid way of working
  • Medical and dental services
  • Life and hospitalization insurance
  • Dedicated employee phone subscription
  • Take control of your benefits and choose any of the below options: meal tickets / private pension / vacation vouchers / cultural vouchers, within the budget
  • Special discounts for gyms and retailers
  • Annual Company Bonus
  • Loyalty Programme
  • Ongoing Education – we continuously invest in you to ensure you have everything needed to excel on the job and enhance your skills
  • You get to work with tried and trusted web-technology
  • Fulltime

Big Data System Specialist

We are seeking a System Engineering Specialist to manage and support Big Data pl...
Location:
Bucharest, Romania
Salary:
Not provided
Vodafone
Expiration Date
Until further notice
Requirements
  • University graduate with 3–5 years’ experience in IT infrastructure administration
  • Proven expertise in Big Data technologies including Hadoop (HDFS, YARN, MapReduce), Teradata, and Machine Learning
  • Proficient in Red Hat Enterprise Linux or Microsoft Windows operating systems
  • Skilled in procedural programming and shell scripting
  • Knowledgeable in ITIL, cyber security, and SOX principles
  • Strong documentation and problem-solving skills
  • Able to manage multiple priorities under pressure and meet tight deadlines
  • Fluent in English (written and spoken)
  • Customer-focused with the ability to work effectively in distributed teams
Job Responsibility
  • Manage the operational aspects of Big Data platforms
  • Participate in the full lifecycle of IT projects, including planning, analysis, and implementation
  • Define technical requirements and develop scripts/programmes using Bash, KSH, Python, Perl, and SQL
  • Design and implement disaster recovery solutions and conduct periodic testing
  • Optimise existing systems through architectural redesigns and software upgrades
  • Support transformation and technology refresh initiatives to enhance service availability and performance
  • Investigate and resolve performance incidents and client-reported software issues
  • Install and maintain operating systems and perform proactive maintenance using automated tools
  • Evaluate new technologies and conduct proof-of-concept exercises
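The scripting and proactive-maintenance responsibilities above are the kind of task a short automation helper usually covers. As an illustrative sketch (the threshold, mount points, and numbers are invented, and real input would be parsed from a tool such as `df -P`), flagging filesystems above a usage limit:

```python
# Sketch: flag filesystems at or above a usage threshold, the shape of a
# typical proactive-maintenance check. All values below are invented.

def over_threshold(usage: dict, limit_pct: int = 85) -> list:
    """Return mount points whose used percentage meets or exceeds limit_pct."""
    return sorted(m for m, pct in usage.items() if pct >= limit_pct)

snapshot = {"/": 42, "/var/log": 91, "/data": 87}  # e.g. parsed from `df -P`
print(over_threshold(snapshot))  # ['/data', '/var/log']
```

On a real system the snapshot dictionary would be refreshed on a schedule and a non-empty result would feed the monitoring or ticketing pipeline rather than a `print`.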
What we offer
  • Hybrid way of working: 2 days per week / 8 days per month
  • Medical and dental services
  • Life and hospitalization insurance
  • Dedicated employee phone subscription
  • Take control of your benefits and choose any of the below options: meal tickets / private pension / vacation vouchers / cultural vouchers, within the budget
  • Special discounts for gyms and retailers
  • Annual Company Bonus
  • Ongoing Education – we continuously invest in you to ensure you have everything needed to excel on the job and enhance your skills
  • You get to work with tried and trusted web-technology
  • We let you write your own story by planning vacations: go for a trip, experience new things, have fun and enjoy your 23 days off
  • Fulltime

Splunk Consultant

DeployPartners deliver high-quality Service Assurance Solutions expertise throug...
Location:
North Sydney, Australia
Salary:
Not provided
DeployPartners
Expiration Date
Until further notice
Requirements
  • 3+ Years experience in installing and configuring Splunk
  • An understanding of the components of a larger-scale Splunk implementation
  • Extensive knowledge of Operating systems, server administration, routers, switches, firewalls, load balancers, fault management, email servers, VM Platforms, Cloud Services, Hadoop, IPS, IDS and TCP/IP
  • Experience with both Linux and Windows operating systems: comfortable with the command line interface
  • Working knowledge or recent experience with scripting languages (Bash, Perl), regular expressions, application development (Java, Python, .NET), and SQL
  • Experience in successful startups in the area of system management
  • Ability to quickly explore, examine and understand complex problems and how these relate to the customer's business
  • Able to quickly understand and interpret customer problems and navigate complex organisations
  • Effective at communicating clearly to technical and business audiences
  • Well-organised with a healthy sense of urgency and ability to set, communicate and meet aggressive goals
Job Responsibility
  • Responsible for system installations, configuration, testing and design
  • Estimating required project effort and durations
  • Prepare and submit project weekly reports on work executed
  • Prepare and create clear concise and professional project documentation
  • Assisting in pre-sales activities, including responding to RFPs, RFQs, and SOWs
  • Provide customer support on Splunk projects and assist with logged tickets and with live and development systems when not on customer site
  • Fulltime