CrawlJobs

Hadoop Engineer

Realign

Location:
United States, Bellevue

Contract Type:
Not provided

Salary:

180000.00 USD / Year

Job Description:

Role: Hadoop Engineer; Experience: 6+ years

Job Responsibility:

  • Hadoop administration: Cloudera, AWS, Hortonworks
  • Supporting the HDInsight product team with Hadoop administration
  • Helping end customers reach resolutions by resolving ICM incidents
  • Running prototypes to migrate big-data workloads
  • Performance tuning for Spark, Hive, and Hadoop jobs
  • Troubleshoot production issues, identify the root cause, and provide mitigation
  • Build tools and services to improve debuggability/supportability
  • Monitoring cluster health for top customers and providing support
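For illustration only, the cluster-health monitoring duty above could be sketched as a small parser over `hdfs dfsadmin -report`-style output. The sample report text, the threshold, and the function name are assumptions for this sketch, not part of the role description:

```python
# Hypothetical sketch: flag DataNodes whose disk usage exceeds a threshold,
# given text in the shape of `hdfs dfsadmin -report` output. The sample
# report below is invented; the 85% threshold is an assumption.
SAMPLE_REPORT = """\
Live datanodes (2):
Name: 10.0.0.11:9866
DFS Used%: 91.20%
Name: 10.0.0.12:9866
DFS Used%: 43.75%
"""

def flag_full_datanodes(report: str, threshold: float = 85.0) -> list[str]:
    """Return DataNode addresses whose DFS usage exceeds the threshold."""
    flagged, current = [], None
    for line in report.splitlines():
        if line.startswith("Name:"):
            # Keep the full host:port after the first colon.
            current = line.split(":", 1)[1].strip()
        elif line.startswith("DFS Used%:") and current:
            used = float(line.split(":")[1].strip().rstrip("%"))
            if used > threshold:
                flagged.append(current)
    return flagged

print(flag_full_datanodes(SAMPLE_REPORT))  # ['10.0.0.11:9866']
```

In practice a script like this would wrap a `subprocess` call to the real CLI; the parsing logic stays the same.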

Requirements:

  • Must-Have Technical/Functional Skills: Digital: Big Data and Hadoop Ecosystems; Digital: Kafka; Digital: HBase
  • Required Skills: DevOps Engineer, Senior Email Security Engineer

Additional Information:

Job Posted:
March 19, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Hadoop Engineer

Hadoop SRE Engineer - VP

The Engineering Lead Analyst is a senior level position responsible for leading ...
Location:
United States, Tampa, Florida; Irving, Texas
Salary:
113840.00 - 170760.00 USD / Year
Citi
Expiration Date
Until further notice
Requirements:
  • 6-10 years of relevant experience in an Engineering role
  • Experience working in Financial Services or a large complex and/or global environment
  • Project Management experience
  • Comprehensive knowledge of design metrics, analytics tools, benchmarking activities and related reporting to identify best practices
  • Demonstrated analytic/diagnostic skills
  • Ability to work in a matrix environment and partner with virtual teams
  • Ability to work independently, multi-task, and take ownership of various parts of a project or initiative
  • Ability to work under pressure and manage to tight deadlines or unexpected changes in expectations or requirements
  • Proven track record of operational process change and improvement
  • Proven track record of designing highly available platforms and services supporting various types of workloads
Job Responsibility:
  • Serve as a technology subject matter expert for internal and external stakeholders
  • Provide direction for all firm mandated controls and compliance initiatives
  • Create a technology domain roadmap for Cloudera Hadoop Platform and Hadoop on Google Cloud Platform
  • Ensure that all integration of functions meet business goals
  • Define necessary system enhancements to deploy new products and process enhancements
  • Recommend product customization for system integration
  • Identify problem causality, business impact and root causes
  • Exhibit knowledge of how own specialty area contributes to the business and apply knowledge of competitors, products and services
  • Advise or mentor junior team members
  • Impact the engineering function by influencing decisions through advice, counsel or facilitating services
What we offer:
  • medical, dental & vision coverage
  • 401(k)
  • life, accident, and disability insurance
  • wellness programs
  • paid time off packages, including planned time off (vacation), unplanned time off (sick leave), and paid holidays
  • Full-time

Data Engineer

PulsePoint Data Engineering team plays a key role in our technology company that...
Location:
Salary:
Not provided
PulsePoint
Expiration Date
Until further notice
Requirements:
  • 8+ years of data engineering experience
  • Strong skills in and current experience with SQL and Python
  • Strong recent Spark experience (3+ years)
  • Experience working in on-prem environments
  • Hadoop and Hive experience
  • Proficiency in Linux
  • Strong understanding of RDBMS and query optimization
  • Passion for engineering and computer science around data
  • East Coast U.S. hours 9am-6pm EST
  • Notice period needs to be less than 2 months (or 2 months max)
Job Responsibility:
  • Design, build, and maintain reliable and scalable enterprise-level distributed transactional data processing systems for scaling the existing business and supporting new business initiatives
  • Optimize jobs to utilize Kafka, Hadoop, Presto, Spark, and Kubernetes resources in the most efficient way
  • Monitor and provide transparency into data quality across systems (accuracy, consistency, completeness, etc)
  • Increase accessibility and effectiveness of data (work with analysts, data scientists, and developers to build/deploy tools and datasets that fit their use cases)
  • Collaborate within a small team with diverse technology backgrounds
  • Provide mentorship and guidance to junior team members
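As a purely illustrative take on the data-quality monitoring bullet above (accuracy, consistency, completeness), a minimal completeness check might look like the following; the record schema, field names, and values are invented for the example and are not PulsePoint's actual tooling:

```python
# Hypothetical sketch of a completeness metric over a batch of records.
def completeness(rows: list[dict], field: str) -> float:
    """Fraction of rows where `field` is present and non-null."""
    if not rows:
        return 0.0
    filled = sum(1 for r in rows if r.get(field) is not None)
    return filled / len(rows)

# Invented sample batch: one null country, one null user_id.
rows = [
    {"user_id": 1, "country": "US"},
    {"user_id": 2, "country": None},
    {"user_id": 3, "country": "IE"},
    {"user_id": None, "country": "US"},
]

print(completeness(rows, "user_id"))  # 0.75
print(completeness(rows, "country"))  # 0.75
```

At scale the same metric would typically be computed inside Spark or Presto rather than in driver-side Python, but the definition is identical.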

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location:
Ireland, Dublin
Salary:
Not provided
Solas IT Recruitment
Expiration Date
Until further notice
Requirements:
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibility:
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable
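The automated-validation bullets above could be sketched, under assumed batch schemas and rule names (none of which come from the posting), as a set of rule functions applied to each pipeline batch:

```python
# Hedged sketch of an automated validation step: each rule is a predicate
# over a batch of records; validate() reports which rules fail. The batch
# schema ("id", "amount") and the rules are invented for illustration.
from typing import Callable

Rule = Callable[[list[dict]], bool]

def no_duplicate_keys(batch: list[dict]) -> bool:
    keys = [row["id"] for row in batch]
    return len(keys) == len(set(keys))

def amounts_non_negative(batch: list[dict]) -> bool:
    return all(row["amount"] >= 0 for row in batch)

def validate(batch: list[dict], rules: list[Rule]) -> list[str]:
    """Return the names of the rules this batch fails."""
    return [rule.__name__ for rule in rules if not rule(batch)]

batch = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -3.0}]
print(validate(batch, [no_duplicate_keys, amounts_non_negative]))
# ['amounts_non_negative']
```

In a real pipeline the same rules would usually run as dbt tests or Spark jobs; the pattern of named, composable checks carries over.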

Data Engineering Lead

Data Engineering Lead a strategic professional who stays abreast of developments...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java with knowledge of core spark concepts (RDDs, Dataframes, Spark Streaming, etc) and Scala and SQL
  • Data Integration, Migration & Large Scale ETL experience (Common ETL platforms such as PySpark/DataStage/AbInitio etc.) - ETL design & build, handling, reconciliation and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Redhat JBPM, CI/CD build pipelines and toolchain – Git, BitBucket, Jira
Job Responsibility:
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data
  • Full-time

Sr Data Engineer

Resource Informatics Group, Inc. is actively seeking a skilled Senior Data Engin...
Location:
United States, Irving
Salary:
Not provided
Resource Informatics Group
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields
  • Strong expertise in data engineering and cloud-based solutions
  • 6+ years of experience in data engineering, architecture, and implementation of large-scale data solutions
  • Proficiency in designing and implementing data models, data structures, and algorithms
  • Advanced knowledge of SQL and NoSQL databases
  • Demonstrated expertise in optimizing data pipelines and improving data reliability, efficiency, and quality
  • Excellent problem-solving capabilities with a keen attention to detail
  • Strong communication and collaboration skills, with the ability to work effectively across diverse teams
  • Relevant certifications in cloud technologies (Azure, AWS, or GCP) advantageous
  • Master’s in Data Science or Computer Science or foreign equivalent, plus 6+ years of experience, OR Bachelor’s in Computer Science, Information Technology, or Electronics and Communication Engineering or foreign equivalent
Job Responsibility:
  • Develop and execute ETL processes for data extraction, transformation, and loading into warehouses and data lakes
  • Architect data warehousing solutions using Azure Synapse Analytics for efficient querying and reporting
  • Optimize query performance, data processing speed, and resource utilization within Azure environments
  • Construct seamless data pipelines across Azure services utilizing Azure Data Factory, Databricks, and SQL Server Integration Services
  • Collaborate with stakeholders, including data scientists and analysts, to understand data requirements and deliver effective solutions
  • Manage large data volumes leveraging the Hadoop ecosystem for diverse source collection and loading
  • Design, maintain, and optimize data processing jobs using Hadoop MapReduce, Spark, and Hive, with coding in Java or Python for custom applications
  • Monitor job and cluster performance using tools like Ambari and custom monitoring scripts, scaling and maintaining Hadoop clusters and Azure data services
  • Ensure adherence to data security measures and governance standards
  • Integrate cross-cloud data with AWS and GCP services
  • Full-time

Data Engineer

Data Engineer to analyze data engineering problems and develop, build and manage...
Location:
United States, Woonsocket
Salary:
106038.00 - 140000.00 USD / Year
CVS Health
Expiration Date
Until further notice
Requirements:
  • Master’s degree (or foreign equivalent) in Information Technology, Computer Science, Computer Information Systems, Engineering, or a related field
  • 2 years of experience in the job offered or a related occupation
  • 2 years of experience with CI/CD, Jenkins, GIT, or DevOps
  • 2 years of experience programming in Python, R, or SQL
  • 2 years of experience with Spark, Airflow, Kafka, Hbase, Pig, MySQL, or NoSQL
  • 2 years of experience with Data warehouse technologies: Oracle, Teradata, or DB2
  • 2 years of experience with Visualization tools, including Tableau
  • 2 years of experience with Software development for enterprise or web applications
  • 2 years of experience with Unit and automation testing
  • 2 years of experience Analyzing large data sets from multiple data sources
Job Responsibility:
  • Analyze data engineering problems
  • Develop, build and manage large-scale data structures, pipelines and efficient Extract/Load/Transform (ETL) workflows
  • Develop large scale data structures and pipelines to organize, collect and standardize data
  • Write ETL (Extract/Transform/Load) processes
  • Design database systems
  • Develop tools for real-time and offline analytic processing
  • Collaborate with Data Science team to transform data and integrate algorithms and models into automated processes
  • Test and maintain systems and troubleshoot malfunctions
  • Build data marts and data models to support Data Science and other internal customers
  • Integrate data from a variety of sources and ensure adherence to data quality and accessibility standards
What we offer:
  • Medical benefits
  • Dental benefits
  • Vision benefits
  • 401(k) retirement savings plan
  • Employee Stock Purchase Plan
  • Term life insurance plan
  • Short-term disability benefits
  • Long-term disability benefits
  • Well-being programs
  • Education assistance
  • Full-time

Data Engineer

The Data Engineer is accountable for developing high quality data products to su...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date
Until further notice
Requirements:
  • 5-8 years of relevant experience
  • Experience with 'big data' platforms such as Hadoop, Hive or Snowflake for data storage and processing
  • Good exposure to data modeling techniques
  • Design, optimization and maintenance of data models and data structures
  • Proficient in one or more programming languages commonly used in data engineering such as Python, PySpark
  • Understanding of Data Warehousing concepts
  • Demonstrated problem-solving and decision-making skills
  • Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements
  • Bachelor's degree/University degree or equivalent experience
Job Responsibility:
  • Developing high quality data products to support the Bank's regulatory requirements and data driven decision making
  • Serving as an example to other team members
  • Working closely with customers
  • Removing or escalating roadblocks
  • Contributing to business outcomes on an agile team
What we offer:
  • Resources to meet unique needs
  • Empowerment to make healthy decisions
  • Support for managing financial well-being
  • Help planning for the future
  • Full-time
