Operational Technology Data Engineer

Perdue Farms

Location:
Salisbury, United States

Contract Type:
Not provided

Salary:

89000.00 - 133000.00 USD / Year

Job Description:

Build the digital backbone of our operations as an Operational Technology Data Engineer. In this role, you will modernize how industrial data is captured, secured, and transformed into actionable insights by supporting site and enterprise historian platforms and building scalable pipelines that move data from the plant floor to the cloud. You will partner with enterprise architects and cross-functional teams to convert complex OT data into trusted, governed datasets that power analytics and drive smarter decisions across the business. This opportunity is ideal for a forward-thinking engineer who is passionate about bridging OT and IT, strengthening data foundations, and enabling the next generation of operational performance.

Job Responsibility:

  • Administer, maintain, and optimize OT data historian systems (e.g., AVEVA PI, FactoryTalk Historian, Canary, Ignition)
  • Ensure high-quality, continuous capture of time-series data from PLCs, SCADA, sensors, and edge systems
  • Troubleshoot data flow issues and improve ingestion patterns such as compression, buffering, and contextualization
  • Design and implement secure, scalable OT data pipelines following OT/IT segmentation standards
  • Develop cloud ingestion workflows using historian replication tools, edge gateways, IoT messaging systems, or custom pipelines
  • Ensure reliable, governed, one-way movement of OT data to cloud environments
  • Partner with Enterprise Architecture and BI teams to define OT data models, metadata standards, and governance requirements
  • Transform raw OT datasets into curated, production ready assets for analytics, reporting, and machine learning
  • Implement repeatable data onboarding frameworks to support multi-site expansion
  • Apply enterprise naming conventions, metadata standards, and data validation rules
  • Monitor pipeline health, latency, schema changes, and historian system integrity
  • Follow cybersecurity standards for OT, ensuring read-only access, one-way pathways, and adherence to network segmentation policies
  • Work closely with business leadership, operations, engineering, maintenance, controls, IT infrastructure, and enterprise data teams
  • Participate in design reviews, architecture discussions, and continuous improvement initiatives
  • Translate plant floor needs into scalable enterprise data solutions

Requirements:

  • Bachelor’s Degree in Engineering, Computer Science, IT, Data Engineering, or a related field, or equivalent experience
  • 4+ years of experience with OT systems, industrial data, or data engineering
  • Hands-on experience with data historian platforms (e.g., AVEVA PI, FT Historian, Canary, Ignition)
  • Proficiency in data pipeline tools such as SQL, Python, REST APIs, or cloud ingestion frameworks
  • Familiarity with industrial protocols (EtherNet/IP, Modbus, OPC UA) and SCADA/PLC environments
  • Understanding of cloud data platforms (Azure, AWS, Snowflake, or Databricks) and time-series data modeling
  • Knowledge of OT cybersecurity concepts, segmentation, and iDMZ architectures
  • Knowledge of ISA 95, IEC 62443, or OT risk management frameworks

Nice to have:

  • Experience building or supporting enterprise OT to cloud data architectures
  • Exposure to enterprise data lakes, data governance frameworks, and metadata/catalog platforms
  • Experience with OPC UA, MQTT, and modern IIoT platforms for secure, scalable industrial data integration
  • Hands-on experience developing Power BI dashboards or working with analytics teams to transform OT data into actionable visualizations

What we offer:

  • Medical/Rx
  • 401(k) with employer match after 1 year
  • Critical illness insurance
  • Accident insurance
  • Dental
  • Vision
  • Life insurance
  • Optional group life insurance
  • Short-term and long-term disability protection
  • Flexible spending accounts
  • Paid time off
  • Annual bonus available (variable depending on performance)

Additional Information:

Job Posted:
April 23, 2026

Employment Type:
Full-time

Work Type:
On-site work

Similar Jobs for Operational Technology Data Engineer

Operational Technologies Engineer

The Operational Technologies Engineer will join the Renewables, New Businesses &...

Location: Portugal
Salary: Not provided
Company: Galp
Expiration Date: Until further notice

Requirements:
  • Degree in Electrical Engineering, Telecommunications, Computer Science, or another field that fits the desired profile
  • At least 3 years working in the Renewables sector
  • Previous experience with OEM SCADAs
  • Strong knowledge of industrial communication protocols (IEC 60870-5-101, IEC 60870-5-104, Modbus, OPC UA, DNP, OPC DA & OPC XML) and databases (SQL, others)
  • Valuable experience in OT Cyber Security
  • Solid communication skills with capability to establish technical discussions in different cultural environments
  • Ability to multi-task and work concurrently on multiple projects
  • Strong analytical and problem-solving skills
  • High attention to detail
  • Fluent in Portuguese and English, both verbal and written; Spanish is a plus
Job Responsibility:
  • Act as a focal point for all on-site OT equipment
  • Provide transversal expertise to Galp internal and external stakeholders
  • Be the main keeper of Galp real-time operational data flow
  • Ensure continuous evolution of OT equipment maximizing the plant's productivity
  • Collaborate in the integration of Galp's assets assuring data integrity for real-time and historical analysis purposes
  • Ensure the continuity of the Control Center (CC) activities in terms of applications, communication, and systems
  • Participate in the specification, designing, development and implementation of main systems' functionalities
  • Monitor work plans and schedules to integrate, perform updates or retrofits of OEM SCADAs and other OT equipment
  • Establish best practices for the integration of new project sites
  • Coordinate execution of validation activities during installation and commissioning process of OEM SCADA and other OT equipment
What we offer:
  • Competitive salary and bonus
  • Health insurance for you and your family
  • Meal allowance
  • 25 days of holidays
  • Challenging projects
  • Full-time

IS Data Center Operations Engineer

Bridging Information Technology (IT) and the Mechanical, Electrical, and Plumbin...

Location: New Albany, United States
Salary: 91731.00 - 114948.00 USD / Year
Company: Amgen
Expiration Date: Until further notice

Requirements:
  • Master’s degree, or
  • Bachelor’s degree and 2 years of data center operations experience, or
  • Associate’s degree and 6 years of data center operations experience, or
  • High school diploma / GED and 8 years of data center operations experience
  • Hands-on experience with rack/stack, structured cabling, and IT hardware installation
  • Familiarity with Dell PowerEdge, Nutanix, NetApp, and Cisco platforms
  • Ability to interpret electrical and mechanical drawings (awareness-level competency)
  • Experience using monitoring, alerting, or automation systems (AI-enabled platforms preferred)
  • Solid understanding of IT operations concepts including hardware lifecycle management and disaster recovery
  • Ability to read and update documentation, diagrams, and cable records
Job Responsibility:
  • Serve as the liaison between IT teams and facilities staff, ensuring flawless communication
  • Interpret electrical one-line diagrams, distribution drawings, and cooling schematics to support incident response and planning
  • Install, rack, cable, and support enterprise IT systems including Dell PowerEdge, Nutanix, NetApp, and Cisco technologies
  • Support day-to-day moves, adds, and changes (MACs) in building IDF and VDER environments
  • Perform fiber and copper patch cabling in data centers, IDFs, and VDER closets
  • Trace and troubleshoot cabling issues to restore connectivity
  • Monitor infrastructure, proactively detect issues, and escalate them with urgency to the appropriate teams
  • Apply AI-enabled monitoring and automation platforms to enhance data center operations
  • Maintain documentation of infrastructure layouts, procedures, and operational standards
  • Participate in capacity planning, disaster recovery drills, and continuous improvement initiatives
What we offer:
  • A comprehensive employee benefits package, including a Retirement and Savings Plan with generous company contributions, group medical, dental and vision coverage, life and disability insurance, and flexible spending accounts
  • A discretionary annual bonus program, or for field sales representatives, a sales-based incentive plan
  • Stock-based long-term incentives
  • Award-winning time-off plans
  • Flexible work models, including remote and hybrid work arrangements, where possible
  • Full-time

Software Engineer - Data Engineering

Akuna Capital is a leading proprietary trading firm specializing in options mark...

Location: Chicago, United States
Salary: 130000.00 USD / Year
Company: AKUNA CAPITAL
Expiration Date: Until further notice

Requirements:
  • BS/MS/PhD in Computer Science, Engineering, Physics, Math, or equivalent technical field
  • 5+ years of professional experience developing software applications
  • Java/Scala experience required
  • Highly motivated and willing to take ownership of high-impact projects upon arrival
  • Prior hands-on experience with data platforms and technologies such as Delta Lake, Spark, Kubernetes, Kafka, ClickHouse, and/or Presto/Trino
  • Experience building large-scale batch and streaming pipelines with strict SLA and data quality requirements
  • Must possess excellent communication, analytical, and problem-solving skills
  • Recent hands-on experience with AWS Cloud development, deployment and monitoring necessary
  • Demonstrated experience working on an Agile team employing software engineering best practices, such as GitOps and CI/CD, to deliver complex software projects
  • The ability to react quickly and accurately to rapidly changing market conditions, including the ability to quickly and accurately respond and/or solve math and coding problems are essential functions of the role
Job Responsibility:
  • Work within a growing Data Engineering division supporting the strategic role of data at Akuna
  • Drive the ongoing design and expansion of our data platform across a wide variety of data sources, supporting an array of streaming, operational and research workflows
  • Work closely with Trading, Quant, Technology & Business Operations teams throughout the firm to identify how data is produced and consumed, helping to define and deliver high impact projects
  • Build and deploy batch and streaming pipelines to collect and transform our rapidly growing Big Data set within our hybrid cloud architecture utilizing Kubernetes/EKS, Kafka/MSK and Databricks/Spark
  • Mentor junior engineers in software and data engineering best practices
  • Produce clean, well-tested, and documented code with a clear design to support mission critical applications
  • Build automated data validation test suites that ensure that data is processed and published in accordance with well-defined Service Level Agreements (SLA’s) pertaining to data quality, data availability and data correctness
  • Challenge the status quo and help push our organization forward, as we grow beyond the limits of our current tech stack
What we offer:
  • Discretionary performance bonus
  • Comprehensive benefits package that may encompass employer-paid medical, dental, vision, retirement contributions, paid time off, and other benefits
  • Full-time

Technology Services Engineer – Data Protection & Disaster Recovery

Immediate need for a Data Protection & Disaster Recovery Technical Services Engi...

Location: Alpharetta, Georgia, United States
Salary: Not provided
Company: Tier4 Group
Expiration Date: Until further notice

Requirements:
  • 2+ years in an MSP setting focused on backup/DR and Windows server environments
  • Deep Veeam proficiency
  • Solid grounding in Windows Server/AD, virtualization (preferably Hyper-V, working knowledge of VMware), storage (SMB/NFS, iSCSI), networking basics, and change control
  • PowerShell and basic API/JSON skills to automate deployments, checks, and reports
  • Security & compliance mindset: RBAC/least privilege, MFA, encryption in transit/at rest, audit artifacts for SOC 2/HIPAA
  • Excellent documentation and incident communications
  • Willing to support maintenance windows/on-call
Job Responsibility:
  • Own backup, restore, and resiliency outcomes for all MSP clients
  • Act as the primary technical liaison for backup/DR platforms and service delivery
  • Veeam platform ownership: design, configure, and maintain Veeam Backup & Replication (SOBR, backup copy, replication, Instant Recovery, SureBackup labs)
  • Manage repositories, retention, encryption, and job health
  • Immutable off-site copies: build and operate (bucket policies, retention/immutability, lifecycle/usage controls) as the off-site tier
  • Monitoring & compliance reporting: implement and tune end-to-end success/failure monitoring, alerting/escalation, daily health checks, and compliance evidence packs
  • 3-2-1 architectures: design and run three-copy / two-media / one off-site strategies using NAS appliances (QNAP/Synology) for local copy and off-site
  • Document RPO/RTO per workload
  • Recovery testing & documentation: execute regular restore drills (file/VM/app-item, Instant Recovery, SureBackup verification), record results, and maintain DR runbooks with clear owners and contact trees
  • Incident response & escalation: lead backup/restore and DR events (containment, comms, status cadence, executive updates), perform RCA, and drive corrective and preventive actions
What we offer:
  • Competitive salary
  • Comprehensive benefits (medical, dental, vision, life, disability, 401(k) match)
  • Robust PTO
  • Full-time

Data Engineering & Analytics Lead

Premium Health is seeking a highly skilled, hands-on Data Engineering & Analytic...

Location: Brooklyn, United States
Salary: Not provided
Company: Premium Health
Expiration Date: Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred
  • Proven track record and progressively responsible experience in data engineering, data architecture, or related technical roles
  • Healthcare experience preferred
  • Strong knowledge of data engineering principles, data integration, ETL processes, and semantic mapping techniques and best practices
  • Experience implementing data quality management processes, data governance frameworks, cataloging, and master data management concepts
  • Familiarity with healthcare data standards (e.g., HL7, FHIR), health information management principles, and regulatory requirements (e.g., HIPAA)
  • Understanding of healthcare data, including clinical, operational, and financial data models, preferred
  • Advanced proficiency in SQL, data modeling, database design, optimization, and performance tuning
  • Experience designing and integrating data from disparate systems into harmonized data models or semantic layers
  • Hands-on experience with modern cloud-based data platforms (e.g., Azure, AWS, GCP)
Job Responsibility:
  • Collaborate with the CDIO and Director of Technology to define a clear data vision aligned with the organization's goals and execute the enterprise data roadmap
  • Serve as a thought leader for data engineering and analytics, guiding the evolution of our data ecosystem and championing data-driven decision-making across the organization
  • Build and mentor a small data team, providing technical direction and performance feedback, fostering best practices and continuous learning, while remaining a hands-on implementor
  • Define and implement best practices, standards, and processes for data engineering, analytics, and data management across the organization
  • Design, implement, and maintain a scalable, reliable, and high-performing modern data infrastructure, aligned with the organizational needs and industry best practices
  • Architect and maintain data lake/lakehouse, warehouse, and related platform components to support analytics, reporting, and operational use cases
  • Establish and enforce data architecture standards, governance models, naming conventions, and documentation
  • Develop, optimize, and maintain scalable ETL/ELT pipelines and data workflows to collect, transform, normalize, and integrate data from diverse systems
  • Implement robust data quality processes, validation, monitoring, and error-handling frameworks
  • Ensure data is accurate, timely, secure, and ready for self-service analytics and downstream applications
What we offer:
  • Paid Time Off, Medical, Dental and Vision plans, Retirement plans
  • Public Service Loan Forgiveness (PSLF)
  • Full-time

Principal Data Engineer

PointClickCare is searching for a Principal Data Engineer who will contribute to...

Location: United States
Salary: 183200.00 - 203500.00 USD / Year
Company: PointClickCare
Expiration Date: Until further notice

Requirements:
  • At least 10 years of professional experience in software or data engineering, including a minimum of 4 years focused on streaming and real-time data systems
  • Proven experience driving technical direction and mentoring engineers while delivering complex, high-scale solutions as a hands-on contributor
  • Deep expertise in streaming and real-time data technologies, including frameworks such as Apache Kafka, Flink, and Spark Streaming
  • Strong understanding of event-driven architectures and distributed systems, with hands-on experience implementing resilient, low-latency pipelines
  • Practical experience with cloud platforms (AWS, Azure, or GCP) and containerized deployments for data workloads
  • Fluency in data quality practices and CI/CD integration, including schema management, automated testing, and validation frameworks (e.g., dbt, Great Expectations)
  • Operational excellence in observability, with experience implementing metrics, logging, tracing, and alerting for data pipelines using modern tools
  • Solid foundation in data governance and performance optimization, ensuring reliability and scalability across batch and streaming environments
  • Experience with Lakehouse architectures and related technologies, including Databricks, Azure ADLS Gen2, and Apache Hudi
  • Strong collaboration and communication skills, with the ability to influence stakeholders and evangelize modern data practices within your team and across the organization
Job Responsibility:
  • Lead and guide the design and implementation of scalable streaming data pipelines
  • Engineer and optimize real-time data solutions using frameworks like Apache Kafka, Flink, Spark Streaming
  • Collaborate cross-functionally with product, analytics, and AI teams to ensure data is a strategic asset
  • Advance ongoing modernization efforts, deepening adoption of event-driven architectures and cloud-native technologies
  • Drive adoption of best practices in data governance, observability, and performance tuning for streaming workloads
  • Embed data quality in processing pipelines by defining schema contracts, implementing transformation tests and data assertions, enforcing backward-compatible schema evolution, and automating checks for freshness, completeness, and accuracy across batch and streaming paths before production deployment
  • Establish robust observability for data pipelines by implementing metrics, logging, and distributed tracing for streaming jobs, defining SLAs and SLOs for latency and throughput, and integrating alerting and dashboards to enable proactive monitoring and rapid incident response
  • Foster a culture of quality through peer reviews, providing constructive feedback and seeking input on your own work
What we offer:
  • Benefits starting from Day 1!
  • Retirement Plan Matching
  • Flexible Paid Time Off
  • Wellness Support Programs and Resources
  • Parental & Caregiver Leaves
  • Fertility & Adoption Support
  • Continuous Development Support Program
  • Employee Assistance Program
  • Allyship and Inclusion Communities
  • Employee Recognition … and more!
  • Full-time

Cloud Technical Architect / Data DevOps Engineer

The role involves designing, implementing, and optimizing scalable Big Data and ...

Location: Bristol, United Kingdom
Salary: Not provided
Company: Hewlett Packard Enterprise
Expiration Date: Until further notice

Requirements:
  • An organised and methodical approach
  • Excellent time keeping and task prioritisation skills
  • An ability to provide clear and concise updates
  • An ability to convey technical concepts to all levels of audience
  • Data engineering skills – ETL/ELT
  • Technical implementation skills – application of industry best practices & design patterns
  • Technical advisory skills – experience in researching technological products / services with the intent to provide advice on system improvements
  • Experience working in hybrid environments with both classical and DevOps ways of working
  • Excellent written & spoken English skills
  • Excellent knowledge of Linux operating system administration and implementation
Job Responsibility:
  • Detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling, self-healing systems
  • Participating in the full lifecycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
  • Providing technical thought-leadership and advisory on technologies and processes at the core of the data domain, as well as data domain adjacent technologies
  • Engaging and collaborating with both internal and external teams and be a confident participant as well as a leader
  • Assisting with solution improvement activities driven either by the project or service
  • Support the design and development of new capabilities, preparing solution options, investigating technology, designing and running proof of concepts, providing assessments, advice and solution options, providing high level and low level design documentation
  • Provide cloud engineering capability to leverage public cloud platforms, using automated build processes deployed with Infrastructure as Code
  • Provide technical challenge and assurance throughout development and delivery of work
  • Develop reusable common solutions and patterns to reduce development lead times, improve commonality, and lower Total Cost of Ownership
  • Work independently and/or within a team using a DevOps way of working
What we offer:
  • Extensive social benefits
  • Flexible working hours
  • Competitive salary
  • Shared values
  • Equal opportunities
  • Work-life balance
  • Evolving career opportunities
  • Comprehensive suite of benefits that supports physical, financial and emotional wellbeing
  • Full-time

Cybersecurity Systems & Data Engineer

You will play a pivotal role in maintaining and implementing data architecture i...

Location: West Conshohocken, United States
Salary: Not provided
Company: Robert Half
Expiration Date: Until further notice

Requirements:
  • Proficiency in Cisco Technologies
  • Familiarity with Citrix Technologies
  • Demonstrated experience with Cloud Technologies
  • Knowledge of DELL EMC Technologies
  • Expertise in Dell Technologies
  • Experience in AB Testing
  • Strong understanding of Active Directory
  • Proficiency in Automation
  • Experience with AWS Technologies
  • Demonstrated ability in Backup Technologies
Job Responsibility:
  • Implement robust data encryption and access controls across critical data platforms
  • Analyze vendor services and data requirements
  • Assist in developing secure capabilities for data delivery and management
  • Participate in incident response and troubleshoot complex issues
  • Identify opportunities to enhance network segmentation and protection strategies
  • Perform complex data analysis and suggest new network flows and architectures
  • Support the development of reporting and communication methods
  • Stay updated on trends and developments in security regulatory, technology, and operational requirements
  • Implement platform and service configuration changes to meet information security requirements
  • Provide Tier III capabilities as needed to support Operations and GRC teams
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • Eligibility to enroll in company 401(k) plan
  • Full-time