Senior Data and Application Engineer

KAIROS Inc

Location:
United States, St. Inigoes

Contract Type:
Not provided

Salary:

150000.00 - 225000.00 USD / Year

Job Description:

The Senior Data and Application Engineer will provide engineering SME support for continuous development, hardware engineering, software engineering, data engineering, and AI engineering efforts aligned to the latest KAIROS Enterprise Platforms and Data (EPAD) technology roadmap and DoD customer base. These efforts include the optimization, development, and integration of customer hardware and software technology stacks, as well as the integration of new customer-requested subsystems operating on NIPR, SIPR, and/or JWICS networks. The role focuses on managing, automating, and improving each customer’s technology platform and on continuously delivering the enterprise platform, data, and AI engineering solutions needed to rapidly develop, deliver, test, and scale new enterprise capabilities. It also includes continuously developing, optimizing, and expanding existing data analytics capabilities through automated data pipelines and software applications that enable decision advantage.

Job Responsibility:

  • Data and application engineering using existing enterprise architecture: Provide data engineering, data automation, and automated data mapping capabilities that will be used to power a series of software applications
  • Provide application engineering support necessary to deliver enterprise applications and required data analytics to achieve customer decision advantage
  • Deliver secure, scalable, and modular platform architecture that is optimized for DoD enterprise data automation, AI model deployment, and continuous feature updates across a global network of priority data platforms and/or customer-owned systems
  • Drive process standardization and platform improvements based on data analytics, performance metrics, and industry best practices
  • Technology Leadership: Lead multi-disciplinary hardware, software, AI, and data engineering teams focused on delivering capabilities and features described on the latest KAIROS EPAD technology roadmap
  • Recommend and implement cutting-edge technologies and methodologies to improve KAIROS data automation processes and platform capabilities
  • Process Optimization and Automation: Identify areas for process improvement, focusing on automation powered by optimized software applications and data automation capabilities across all KAIROS technology stack components
  • Continuously innovate new software application, data engineering, and AI capabilities focused on delivering a seamless, secure, scalable, and cost-effective suite of KAIROS software and data automation products
  • Cross-Functional Collaboration: Collaborate with engineering, manufacturing, and product teams to ensure successful design and implementation of enterprise platform and data-automation solutions across various applications
  • Work closely with supply chain and operations teams to ensure material availability, cost efficiency, and process sustainability
  • Lead training programs and knowledge sharing initiatives to build internal expertise
  • Quality and Compliance: Ensure compliance with relevant industry standards and cybersecurity regulations
  • Implement rigorous quality assurance and control protocols to ensure the production of reliable and high-quality data, AI, and platform engineering products
  • Project Management: Lead KAIROS EPAD development projects from concept through execution, including project planning, budgeting, scheduling, and reporting
  • Coordinate with vendors and external partners to source materials, machines, and tools necessary for platform development
  • Ensure timely delivery of projects, adhering to both technical specifications and budgetary constraints

Requirements:

  • Expert level experience in DoD NIPR, SIPR, and/or JWICS platform engineering processes
  • Proficiency with Databricks, Foundry, Qlik, Tableau, Python, SQL, PySpark, and other data, software, and application development capabilities
  • Excellent project management skills, with the ability to manage cross-functional teams
  • Strong communication and interpersonal skills, capable of leading technical discussions and driving alignment across teams
  • Strong analytical and problem-solving skills, with the ability to diagnose and resolve complex technical issues in a fast-paced environment
  • Strong customer relations, analytics, and documentation skills
  • Self-starter, highly motivated, strong work ethic with a commitment to quality
  • Microsoft Office suite proficiency, e.g., Word, Excel, PowerPoint
  • Ability to work within a challenging, fast-paced, team-oriented environment
  • Ability to work independently
  • Ability to multi-task and meet competing deliverable deadlines
  • Detail oriented
  • Excellent interpersonal and customer service skills
  • Excellent verbal and written communication skills to provide clear status and/or communicate issues
  • Ability to adapt to evolving technology
  • Bachelor’s or Master’s degree in mathematics, data engineering, data analytics, data science, artificial intelligence, operations research, software engineering, and/or other technical/engineering/mathematical degree fields
  • Experienced platform engineer familiar with DoD NIPR, SIPR, and JWICS architecture
  • Knowledge of cybersecurity and data management for DoD and Commercial systems
  • 3-5+ years of DoD experience supporting large-scale data engineering, data automation, and application development efforts using Advana and NIPR, SIPR, and/or JWICS environments
  • Proven experience in developing scalable, secure, and reliable data and application products that can support 24/7 continuous platform operations and recurring software and data engineering updates within the DoD networks
  • Proven experience leading multi-disciplinary engineering teams within DoD operating environments
  • Must be a U.S. Citizen
  • Must be able to complete the full background investigation
  • Active Top Secret Security Clearance

What we offer:
  • Medical Coverage
  • Employer Paid Dental, Vision, Basic Life/AD&D, Short-Term/Long-Term Insurance
  • Health Savings Account with Contribution by Employer
  • 401K Plan with Employer Matching
  • Annual Discretionary Bonuses
  • Paid Time Off
  • Eleven (11) Paid Holidays
  • Certification reimbursement program
  • Tuition Reimbursement Program
  • Paid Parental Leave
  • Employee Assistance Program (EAP)
  • Rewards and recognition programs
  • Community outreach events through our KAIROS Kares group

Additional Information:

Job Posted:
February 01, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Senior Data and Application Engineer

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering Tea...
Location:
United States, San Francisco
Salary:
135600.00 - 217800.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • BS in Computer Science or equivalent experience with 5+ years as Data Engineer or similar role
  • Programming skills in Python & Java (good to have)
  • Design data models for storage and retrieval to meet product requirements
  • Build scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka)
  • Familiar with modern software development practices (Agile, TDD, CI/CD) applied to data engineering
  • Enhance data quality through internal tools/frameworks detecting DQ issues
  • Working knowledge of relational databases and SQL query authoring
Job Responsibility:
  • Collaborating with partners, you will design data models, acquisition processes, and applications to address needs
  • Lead business growth and enhance product experiences
  • Collaborate with Technology Teams, Global Analytical Teams, and Data Scientists across programs
  • Extract and clean data, and understand the systems that generate it
  • Improve data quality by adding sources, coding rules, and producing metrics as requirements evolve
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources

Senior Data Engineer

A typical day may involve collaborating with partners, you will design data mode...
Location:
Australia, Sydney
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • BS in Computer Science or equivalent experience with 5+ years as Data Engineer or similar role
  • Programming skills in Python & Java (good to have)
  • Design data models for storage and retrieval to meet product requirements
  • Build scalable data pipelines using Spark, Airflow, AWS data services (Redshift, Athena, EMR), Apache projects (Spark, Flink, Hive, and Kafka)
  • Familiar with modern software development practices (Agile, TDD, CI/CD) applied to data engineering
  • Enhance data quality through internal tools/frameworks detecting DQ issues
  • Working knowledge of relational databases and SQL query authoring
Job Responsibility:
  • Design data models, acquisition processes, and applications to address needs
  • Lead business growth and enhance product experiences
  • Collaborate with Product, Engineering, Research and Data Scientists across programs
  • Take ownership of problems from end-to-end: extracting/cleaning data, and understanding source systems
  • Improve the quality of data by adding sources, coding rules, and producing metrics
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Develop ETL/ELT processes efficiently using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Communicate proactively with stakeholders and mentor/guide junior team members

Big Data Platform Senior Engineer

Lead Java Data Engineer to guide and mentor a talented team of engineers in buil...
Location:
Bahrain, Seef, Manama
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Significant hands-on experience developing high-performance Java applications (Java 11+ preferred) with strong foundation in core Java concepts, OOP, and OOAD
  • Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink
  • Familiarity with event-driven architectures and experience in developing real-time, low-latency applications
  • Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake
  • Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS
  • Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams
  • Excellent problem-solving skills and data-driven approach to decision-making
Job Responsibility:
  • Provide technical leadership and mentorship to a team of data engineers
  • Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components
  • Stay abreast of emerging open-source data technologies and evaluate their suitability for integration
  • Continuously identify and implement performance optimizations across the data platform
  • Partner closely with stakeholders across engineering, data science, and business teams to understand requirements
  • Drive the timely and high-quality delivery of data platform projects

Senior Data Engineer

As a Senior Software Engineer, you will play a key role in designing and buildin...
Location:
United States
Salary:
156000.00 - 195000.00 USD / Year
Apollo.io
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in platform engineering, data engineering, or a data-facing role
  • Experience in building data applications
  • Deep knowledge of the data ecosystem, with an ability to collaborate cross-functionally
  • Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
  • Excellent communication skills
  • Self-motivated and self-directed
  • Inquisitive, able to ask questions and dig deeper
  • Organized, diligent, and great attention to detail
  • Acts with the utmost integrity
  • Genuinely curious and open
Job Responsibility:
  • Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
  • Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows
  • Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
  • Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
  • Monitor and ensure the health, quality, and reliability of data flows and platform services
  • Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
  • Continuously evaluate and integrate new tools and technologies to improve platform capabilities
  • Contribute to architectural discussions, code reviews, and cross-functional projects
  • Document your work, champion best practices, and help level up the team through knowledge sharing
What we offer:
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year
  • Flex PTO
  • Parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits

Senior Data Engineer

Within a dynamic, high-level team, you will contribute to both R&D and client pr...
Location:
France, Paris
Salary:
Not provided
Artelys
Expiration Date:
Until further notice
Requirements:
  • Degree from a top engineering school or a high-level university program
  • At least 3 years of experience in designing and developing data-driven solutions with high business impact, particularly in industrial or large-scale environments
  • Excellent command of Python for both application development and data processing, with strong expertise in libraries such as Pandas, Polars, NumPy, and the broader Python Data ecosystem
  • Experience implementing data processing pipelines using tools like Apache Airflow, Databricks, Dask, or flow orchestrators integrated into production environments
  • Contributed to large-scale projects combining data analysis, workflow orchestration, back-end development (REST APIs and/or Messaging), and industrialisation, within a DevOps/DevSecOps-oriented framework
  • Proficient in using Docker for processing encapsulation and deployment
  • Experience with Kubernetes for orchestrating workloads in cloud-native architectures
  • Motivated by practical applications of data in socially valuable sectors such as energy, mobility, or health, and thrives in environments where autonomy, rigour, curiosity, and teamwork are valued
  • Fluency in English and French is required
Job Responsibility:
  • Design and develop innovative and high-performance software solutions addressing industrial challenges, primarily using the Python language and a microservices architecture
  • Gather user and business needs to design data collection and storage solutions best suited to the presented use cases
  • Develop technical solutions for data collection, cleaning, and processing, then industrialise and automate them
  • Contribute to setting up technical architectures based on Data or even Big Data environments
  • Carry out development work aimed at industrialising and orchestrating computations (statistical and optimisation models) and participate in software testing and qualification
What we offer:
  • Up to 2 days of remote work per week possible
  • Flexible working hours
  • Offices located in the city center of each city where we are located

Senior Data Engineer

We're seeking an experienced Senior Data Engineer to help shape the future of he...
Location:
Germany, Berlin
Salary:
Not provided
Audibene GmbH
Expiration Date:
Until further notice
Requirements:
  • 5+ years of hands-on experience with complex ETL processes, data modeling, and large-scale data systems
  • Production experience with modern cloud data warehouses (Snowflake, BigQuery, Redshift) on AWS, GCP, or Azure
  • Proficiency in building and optimizing data transformations and pipelines in Python
  • Experience with columnar storage, MPP databases, and distributed data processing architectures
  • Ability to translate complex technical concepts for diverse audiences, from engineers to business stakeholders
  • Experience with semantic layers, data catalogs, or metadata management systems
  • Familiarity with modern analytical databases like Snowflake, BigQuery, ClickHouse, DuckDB, or similar systems
  • Experience with streaming technologies like Kafka, Pulsar, Redpanda, or Kinesis
Job Responsibility:
  • Design and build robust, high performance data pipelines using our modern stack (Airflow, Snowflake, Pulsar, Kubernetes) that feed directly into our semantic layer and data catalog
  • Create data products optimized for consumption by AI agents and LLMs where data quality, context, and semantic richness are crucial
  • Structure and transform data to be inherently machine readable, with rich metadata and clear lineage that powers intelligent applications
  • Take responsibility from raw data ingestion through to semantic modeling, ensuring data is not just accurate but contextually rich and agent ready
  • Champion best practices in building LLM consumable data products, optimize for both human and machine consumers, and help evolve our dbt transformation layer
  • Build data products for AI/LLM consumption, not just analytics dashboards
What we offer:
  • Work 4 days a week from our office (Berlin/Mainz) with a passionate team, and 1 day a week from home
  • Regularly join on- and offline team events, company off-sites, and the annual audibene Wandertag
  • Cost of the Deutschland-Ticket covered
  • Access to over 50,000 gyms and wellness facilities through Urban Sports Club
  • Support for personal development with a wide range of programs, trainings, and coaching opportunities
  • Dog-friendly office

Senior Data Engineer

The Data Engineer is responsible for designing, building, and maintaining robust...
Location:
Germany, Berlin
Salary:
Not provided
ib vogt GmbH
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Experience in renewable energy, engineering, or asset-heavy industries is a plus
  • Strong experience with modern data stack (e.g., PowerPlatform, Azure Data Factory, Databricks, Airflow, dbt, Synapse, Snowflake, BigQuery, etc.)
  • Proficiency in Python and SQL for data transformation and automation
  • Experience with APIs, message queues (Kafka, Event Hub), data streaming and knowledge of data lakehouse and data warehouse architectures
  • Familiarity with CI/CD pipelines, DevOps practices, and containerization (Docker, Kubernetes)
  • Understanding of cloud environments (preferably Microsoft Azure, PowerPlatform)
  • Strong analytical mindset and problem-solving attitude paired with a structured, detail-oriented, and documentation-driven work style
  • Team-oriented approach and excellent communication skills in English
Job Responsibility:
  • Design, implement, and maintain efficient ETL/ELT data pipelines connecting internal systems (M365, Sharepoint, ERP, CRM, SCADA, O&M, etc.) and external data sources
  • Integrate structured and unstructured data from multiple sources into the central data lake / warehouse / Dataverse
  • Build data models and transformation workflows to support analytics, reporting, and AI/ML use cases
  • Implement data quality checks, validation rules, and metadata management according to the company’s data governance framework
  • Automate workflows, optimize performance, and ensure scalability of data pipelines and processing infrastructure
  • Work closely with Data Scientists, Software Engineers, and Domain Experts to deliver reliable datasets for Digital Twin and AI applications
  • Maintain clear documentation of data flows, schemas, and operational processes
What we offer:
  • Competitive remuneration and motivating benefits
  • Opportunity to shape the data foundation of ib vogt’s digital transformation journey
  • Work on cutting-edge data platforms supporting real-world renewable energy assets
  • A truly international working environment with colleagues from all over the world
  • An open-minded, collaborative, dynamic, and highly motivated team