Batch Processing System Architect

Hewlett Packard Enterprise

Location:
United States, Spring

Contract Type:
Not provided

Salary:

92700.00 - 213500.00 USD / Year

Job Description:

Researches, designs, develops, configures, integrates, tests, and maintains new and existing business applications and information systems solutions, including databases, by integrating technical and business requirements. Applications and infrastructure solutions include both third-party software and internally developed applications and infrastructure. Responsibilities include, but are not limited to, analyzing business requirements; coding modifications or new programs; creating documentation; and testing and maintaining applications, infrastructure, and information systems, including database management systems. Works within the Information Technology function, obtaining resources and supporting its objectives and strategies. Provides required documentation and participates in architecture reviews to ensure that solutions comply with standards and use approved technologies.

Job Responsibility:

  • Manage and configure LSF Scheduler/Resource Manager and RTM monitoring, or similar workload management platforms, to optimize ASIC development workflows
  • Install and update ASIC tools from vendors such as Cadence, Synopsys, and Mentor, as well as RSYNC tools and projects from other HPE ASIC lab environments
  • Develop and maintain Python, Bash, and Perl scripts to manage the LSF environment and streamline system operations
  • Collaborate with the ASIC tools team to update and maintain LSF configurations to support evolving engineering workflows
  • Understand and manage Flexera Licensing and license file configurations to ensure proper operation of licensed EDA tools
  • Occasionally assist ASIC teams in investigating and debugging tool-related issues to ensure optimal productivity
  • Work with business units to coordinate system environment events, such as patching or planned downtime, ensuring minimal disruption to engineering workflows
  • Collaborate with business units to ensure the continuity of business services and compliance with operational standards
  • Manage Linux systems at scale
  • Develop and maintain scripts (Ansible, Bash, Python, Perl) for automated deployments, updates, and system configurations across multiple systems
  • Ensure all deployments and updates are thoroughly tested prior to rollout to minimize impact on production environments
  • Manage NFS servers, user file permissions, and associated storage infrastructure
  • Patch Linux environments using COLO/PC-provided scripts (yum, RPMs, and resolving dependencies)
  • Administer Linux user accounts and authentication through LDAP and local service accounts
  • Install and manage certificates for secure system operations
  • Configure and support secure remote access tools, such as NoMachine (NX) and graphical desktop environments (GNOME/KDE/MATE)
  • Use Linux hardening tools, such as SaltStack or VMware Aria, and work within hardened environments
  • Measure and optimize system performance, including network, storage, and overall resource utilization
  • Create and maintain comprehensive documentation, including operational runbooks and automated processes
  • Collaborate with architecture teams to design best-of-breed hardware solutions for the ASIC environment
  • Use ServiceNow to submit, track, and manage incidents, provisioning requests, and decommissioning tickets
  • Create and manage ServiceNow change requests (CHG) to implement projects such as quarterly patching and infrastructure updates
  • Work with infrastructure and site networking teams to resolve DNS, routing, and firewall issues to ensure seamless system operation
  • Work with infrastructure teams and management to escalate and resolve any environment-related issues impacting performance
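
The scripting responsibilities above can be illustrated with a small, hedged sketch. The function below tallies LSF job states from text in the shape of `bjobs -o 'jobid stat' -noheader` output; the job IDs and states are invented sample data, not output from a real cluster.

```python
def summarize_job_states(bjobs_output: str) -> dict:
    """Tally LSF job states from text shaped like `bjobs -o 'jobid stat' -noheader`."""
    states: dict = {}
    for line in bjobs_output.strip().splitlines():
        parts = line.split()
        if len(parts) == 2:  # expect "<jobid> <state>" pairs
            _, state = parts
            states[state] = states.get(state, 0) + 1
    return states

# Invented sample; in practice the text would come from running bjobs itself.
sample = """\
101 RUN
102 PEND
103 PEND
104 DONE
"""
print(summarize_job_states(sample))  # {'RUN': 1, 'PEND': 2, 'DONE': 1}
```

A summary like this can feed alerting when PEND counts climb, which is one common reason to script around a scheduler rather than watch it by hand.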

Requirements:

  • Hands-on experience with LSF Scheduler/Resource Manager and RTM monitoring, or similar platforms
  • Familiarity with ASIC tools such as Cadence, Synopsys, and Mentor
  • Strong scripting skills (Python, Bash, Perl) to manage EDA and workload management environments
  • Understanding of Flexera Licensing and license file management
  • 10-15 years of experience managing RHEL systems (certifications preferred)
  • Proficiency in automation tools such as Ansible
  • Experience managing NFS servers, user file permissions, and Linux system patching
  • Familiarity with SaltStack, VMware Aria, or similar hardening tools
  • Strong troubleshooting skills related to networks, storage, and Linux performance
  • Experience with incident/change management platforms like ServiceNow
  • Ability to coordinate with architecture, networking, and infrastructure teams to resolve system-level issues
  • Familiarity with provisioning and decommissioning processes for Linux systems
  • Strong communication and collaboration skills to work with cross-functional teams
  • Strong documentation skills for runbooks, operational workflows, and automated processes
  • Ability to handle multiple priorities in a fast-paced environment
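
As a sketch of the Flexera/FLEXlm license-file familiarity listed above: the function below sums per-feature license counts, assuming the common FLEXlm field order `FEATURE <name> <daemon> <version> <expiry> <count> ...`. The feature names, vendor daemons, and counts are invented, and real license files can continue lines with backslashes, which this sketch ignores.

```python
def parse_feature_counts(license_text: str) -> dict:
    """Sum license counts per feature from FLEXlm-style FEATURE/INCREMENT lines.

    Assumes the common field order: FEATURE <name> <daemon> <version> <expiry> <count> ...
    """
    counts: dict = {}
    for line in license_text.splitlines():
        parts = line.split()
        if len(parts) >= 6 and parts[0] in ("FEATURE", "INCREMENT"):
            name, count = parts[1], int(parts[5])
            counts[name] = counts.get(name, 0) + count
    return counts

# Invented example license data -- not from a real vendor file.
sample = """\
SERVER licsrv1 0A1B2C3D4E5F 27000
VENDOR cdslmd
FEATURE Virtuoso cdslmd 23.1 31-dec-2026 10 SIGN=ABCD
INCREMENT Virtuoso cdslmd 23.1 31-dec-2026 5 SIGN=EF01
FEATURE VCS snpslmd 2024.09 permanent 8 SIGN=2345
"""
print(parse_feature_counts(sample))  # {'Virtuoso': 15, 'VCS': 8}
```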

Nice to have:

  • RHEL certifications or equivalent Linux certifications
  • Proven experience managing environments with large-scale deployments of Linux systems and EDA tools
  • Experience with performance tuning for ASIC workflows, storage, and networking
  • Experience with NoMachine (NX) or similar remote desktop applications

What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion

Additional Information:

Job Posted:
March 02, 2026

Employment Type:
Fulltime
Work Type:
Hybrid work

Similar Jobs for Batch Processing System Architect

Technical Architect - Data

We are looking for an experienced Technical Architect (Data) to lead the archite...
Location:
India, Bengaluru
Salary:
Not provided
EvoluteIQ
Expiration Date:
Until further notice
Requirements:
  • 15+ years in software engineering with 4+ years in data architecture roles
  • Hands-on experience with distributed systems and data processing frameworks (Spark, Flink, Kafka, Airflow, Dagster)
  • Strong Python expertise with libraries such as Pandas, PySpark, NumPy, Dask, FastAPI, and SQLAlchemy
  • Experience with Java/Scala/Python, REST/gRPC APIs, microservices, and event-driven architectures
  • Proficient with data storage systems (PostgreSQL, MongoDB, Cassandra) and cloud data warehouses (Snowflake, BigQuery, Redshift)
  • Familiar with Kubernetes, Docker, Terraform, and CI/CD pipelines for data workloads
Job Responsibility:
  • Define and evolve data platform architecture for real-time and batch data processing
  • Design scalable, multi-tenant, and cloud-native systems supporting data pipelines and orchestration
  • Guide technology choices across storage, compute, APIs, and orchestration frameworks
  • Ensure production-grade performance, observability, and resilience
  • Partner with product and engineering teams to translate business requirements into scalable solutions
  • Lead architecture reviews, enforce coding and data quality standards, and mentor engineering teams
What we offer:
  • Opportunity to shape the strategy of a next-gen hyper-automation platform
  • Work with a cross-disciplinary team in a fast-growing, innovation-driven environment
  • Competitive compensation and growth opportunities
  • A culture of innovation, ownership, and continuous learning

Salesforce Architect - GTM Engineering

Coralogix is seeking a hands-on Salesforce Architect to serve as a foundational ...
Location:
India, Gurugram
Salary:
Not provided
Coralogix
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related technical field
  • 8–12+ years of hands-on Salesforce experience across Sales Cloud and complex revenue architectures
  • Experience integrating Salesforce with external customer engagement platforms (e.g., Intercom or similar) is strongly preferred
  • Extensive experience in leading large-scale Salesforce implementations as a Solution Architect
  • Expertise in designing and delivering solutions with intricate integrations, advanced automation, and high-performing workflows
  • Exceptional communication and storytelling skills
  • Salesforce Architect certifications preferred (Application Architect / System Architect / CTA a plus)
  • Experience with Salesforce Data Cloud and complex revenue systems preferred
  • Strong hands-on development experience in Apex, LWC, SOQL/SOSL, asynchronous processing (Batch, Queueable, Future), and platform events
  • Deep understanding of Salesforce order of execution, governor limits, and bulkification best practices
Job Responsibility:
  • Own the end-to-end GTM platform architecture across CRM, CPQ, marketing automation, billing, and integrations
  • Define integration patterns, API standards, and middleware strategy
  • Establish data architecture and canonical data models across revenue systems
  • Define CI/CD framework and structured release governance
  • Design scalable environment strategy (sandbox, staging, production)
  • Define master data ownership and ensure revenue data consistency across systems
  • Establish security, governance, and automation guardrails to ensure scalable and compliant platform evolution
  • Evaluate and approve integration of AI-driven capabilities across revenue systems
  • Establish monitoring, observability, and reliability standards
  • Maintain a multi-year GTM architecture roadmap aligned with revenue growth
Employment Type: Fulltime

Senior Incident Operations and Optimization Specialist

The Senior Incident Operations & Optimization Specialist for Mainframe & Batch i...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Information Technology, Computer Engineering, or a related technical field
  • A minimum of 8 years of hands-on experience in mainframe operations, batch processing, or enterprise workload automation
  • Proven track record in event management, alert tuning, and incident reduction within complex mainframe and batch environments, with quantifiable results
  • Direct, hands-on experience with modern AIOps and event management platforms is required
  • Deep understanding of mainframe architecture, operating systems, and subsystems
  • Expertise in enterprise workload automation, including job design, scheduling, and dependency management
  • Hands-on experience developing robust automation solutions using relevant scripting languages and modern automation frameworks
  • Proficiency in log analysis, pattern recognition, and using query languages for data analysis on log aggregation platforms
  • Excellent analytical abilities with a systematic approach to troubleshooting complex batch dependencies and failure propagation scenarios
  • Exceptional communication skills with the ability to bridge mainframe/legacy and modern technology teams, influence collaboration, and present technical concepts to diverse audiences
Job Responsibility:
  • Conduct in-depth analysis of mainframe and batch processing alerts to identify chronic issues, reduce operational noise, and develop strategies to address high-volume incident generators, including recurring job failures
  • Design and implement domain-specific correlation, de-duplication, and suppression rules on AIOps and event management platforms
  • Develop logic that understands mainframe subsystem relationships and cascading batch job dependencies
  • Architect and develop automation playbooks for incident data enrichment, automated job restarts, and self-healing capabilities for common mainframe and batch processing failures
  • Assess monitoring gaps in mainframe and batch environments, proposing enhancements to ensure critical business processes have appropriate alerting coverage and align with enterprise standards
  • Partner closely with mainframe operations, batch scheduling, and application development teams to validate correlation logic, define automation initiatives, and provide expert guidance on modern event management practices
  • Continuously validate the effectiveness of implemented rules and automation
  • Establish feedback loops with operational teams to conduct post-implementation reviews and iterative improvements
Employment Type: Fulltime

Solution Architect

This Associate is an experienced bid professional developing, authoring, and cos...
Location:
United Kingdom, Guisborough
Salary:
Not provided
Bid Solutions
Expiration Date:
Until further notice
Requirements:
  • experienced bid professional
  • developing, authoring, and costing applications solutions
  • strong track record of producing winning deliverable propositions
  • core bid team member and lead applications Solution Manager on successful must-win UK and global bids
  • success rate greater than one win per year over the last six years
  • responsible for 50% of marks on highest scoring written answers
  • delivered compelling commercial models
  • designed key themes for communicating complex global proposition
  • delivered first UK ISA S88-compliant batch process control system
  • experienced IT professional with over 30 years' industry knowledge

Principal Data Engineer

Atlassian is looking for a Principal Data Engineer to join our Corporate Data En...
Location:
United States, San Francisco; Mountain View
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 12+ years of experience in a Data Engineer role as an individual contributor
  • at least 2 years of experience as a tech lead for a Data Engineering team
  • track record of driving and delivering large and complex efforts
  • great communicator and maintain cross-team and cross-functional relationships
  • experience with building streaming pipelines with a micro-services architecture for low-latency analytics
  • experience working with varied forms of data infrastructure, including relational databases (e.g. SQL), Spark, and column stores (e.g. Redshift)
  • experience building scalable data pipelines using Spark with the Airflow scheduler/executor framework or similar scheduling tools
  • experience working in a technical environment with the latest technologies like AWS data services (Redshift, Athena, EMR) or similar Apache projects (Spark, Flink, Hive, or Kafka)
  • understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team
  • industry experience working with large-scale, high-performance data processing systems (batch and streaming) with a "Streaming First" mindset
Job Responsibility:
  • help our stakeholder teams ingest data faster into our data lake
  • find ways to make our data pipelines more efficient
  • come up with ideas to help instigate self-serve data engineering within the company
  • building micro-services, architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
Employment Type: Fulltime

Senior Principal Data Engineer

Atlassian is looking for a Senior Principal Data Engineer to join the Go-To Mark...
Location:
United States, San Francisco
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 18+ years of experience in a Data Engineer role as an individual contributor
  • at least 7 years of experience as a tech lead for Data Engineering teams, and delivered complex, cross-team initiatives
  • built durable relationships with executives/senior leaders across Sales, Marketing, Finance, Commerce and related organizations, and understand complexities of data in these organizations
  • a track record of driving and delivering large complex, multi-team efforts
  • a great communicator and maintain many of the essential cross-team and cross-functional relationships necessary for the team's success
  • experience with building streaming pipelines with a micro-services architecture for low-latency analytics
  • experience working with varied forms of data infrastructure, including relational databases (e.g. SQL), Spark, dbt, and column stores (e.g. Redshift)
  • experience building scalable data pipelines using Spark with the Airflow scheduler/executor framework or similar scheduling tools
  • experience working in a technical environment with the latest technologies like AWS data services (Redshift, Athena, EMR) or similar Apache projects (Spark, Flink, Hive, or Kafka)
  • understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team
Job Responsibility:
  • Help stakeholder teams ingest data faster into our data lake
  • find ways to make data pipelines more efficient
  • come up with ideas to help instigate self-serve data engineering within the company
  • building micro-services, architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
Employment Type: Fulltime

Principal Data Engineer

Atlassian is looking for a Principal Data Engineer to join our Data Engineering ...
Location:
United States, San Francisco
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • You have 12+ years of experience in a Data Engineer role as an individual contributor
  • You have at least 2 years of experience as a tech lead for a Data Engineering team
  • You are an engineer with a track record of driving and delivering large (multi-person or multi-team) and complex efforts
  • You are a great communicator and maintain many of the essential cross-team and cross-functional relationships necessary for the team's success
  • Experience with building streaming pipelines with a micro-services architecture for low-latency analytics
  • Experience working with varied forms of data infrastructure, including relational databases (e.g. SQL), Spark, and column stores (e.g. Redshift)
  • Experience building scalable data pipelines using Spark with the Airflow scheduler/executor framework or similar scheduling tools
  • Experience working in a technical environment with the latest technologies like AWS data services (Redshift, Athena, EMR) or similar Apache projects (Spark, Flink, Hive, or Kafka)
  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team
  • Industry experience working with large-scale, high-performance data processing systems (batch and streaming) with a 'Streaming First' mindset to drive Atlassian's business growth and improve the product experience
Job Responsibility:
  • Own the technical evolution of the data engineering capabilities and be responsible for ensuring solutions are being delivered incrementally, meeting outcomes, and promptly escalating risks and issues
  • Establish a deep understanding of how things work in data engineering, use this to direct and coordinate the technical aspects of work across data engineering, and systematically improve productivity across the teams
  • Maintain a high bar for operational data quality and proactively address performance, scale, complexity and security considerations
  • Drive complex decisions that can impact the work in data engineering. Set the technical direction and balance customer and business needs with long-term maintainability & scale
  • Understand and define the problem space, and architect solutions. Coordinate a team of engineers towards implementing them, unblocking them along the way if necessary
  • Lead a team of data engineers through mentoring and coaching, work closely with the engineering manager, and provide consistent feedback to help them manage and grow the team
  • Work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
Employment Type: Fulltime

Principal Data Engineer

Atlassian is looking for a Principal Data Engineer to join our Data Engineering ...
Location:
United States, San Francisco; Seattle; Austin
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 12+ years of experience in a Data Engineer role as an individual contributor
  • At least 2 years of experience as a tech lead for a Data Engineering team
  • Engineer with a track record of driving and delivering large (multi-person or multi-team) and complex efforts
  • Great communicator and maintain many of the essential cross-team and cross-functional relationships necessary for the team's success
  • Experience with building streaming pipelines with a micro-services architecture for low-latency analytics
  • Experience working with varied forms of data infrastructure, including relational databases (e.g. SQL), Spark, and column stores (e.g. Redshift)
  • Experience building scalable data pipelines using Spark with the Airflow scheduler/executor framework or similar scheduling tools
  • Experience working in a technical environment with the latest technologies like AWS data services (Redshift, Athena, EMR) or similar Apache projects (Spark, Flink, Hive, or Kafka)
  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team
  • Industry experience working with large-scale, high-performance data processing systems (batch and streaming) with a "Streaming First" mindset to drive Atlassian's business growth and improve the product experience
Job Responsibility:
  • Own the technical evolution of the data engineering capabilities and be responsible for ensuring solutions are being delivered incrementally, meeting outcomes, and promptly escalating risks and issues
  • Establish a deep understanding of how things work in data engineering, use this to direct and coordinate the technical aspects of work across data engineering, and systematically improve productivity across the teams
  • Maintain a high bar for operational data quality and proactively address performance, scale, complexity and security considerations
  • Drive complex decisions that can impact the work in data engineering
  • Set the technical direction and balance customer and business needs with long-term maintainability & scale
  • Understand and define the problem space, and architect solutions
  • Coordinate a team of engineers towards implementing them, unblocking them along the way if necessary
  • Lead a team of data engineers through mentoring and coaching, work closely with the engineering manager, and provide consistent feedback to help them manage and grow the team
  • Work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
Employment Type: Fulltime