
Command and Data Handling Engineer


NewOrbit Space

Location:
United Kingdom, Reading (London)

Contract Type:
Not provided

Salary:
Not provided

Job Description:

At NewOrbit Space, our mission is to engineer the lowest-orbiting satellites around Earth to rapidly advance global connectivity and insight. We are building satellites that can operate at an altitude of just 200 km, one-third that of conventional satellites. Thanks to our propulsion system AURA, we can compensate for atmospheric drag at ultra-low altitudes.

Your role: You'll architect and implement the flight-software stack and push code from bench tests to on-orbit updates. You'll ensure each satellite runs autonomously and safely, turning in-orbit data into actionable ground insights so every subsystem meets its mission marks. You'll have a huge influence on the direction of our satellite's software and system development.

Job Responsibility:

  • Own the on-board flight software stack — design, implement, and test real-time Rust/C/C++ on RTOS or Embedded Linux, from BSP/bring-up to application logic
  • Build subsystem software interfaces — define and implement ICDs/APIs and drivers/middleware for subsystems (a driver-interface sketch follows this list):
      • Integrate over common buses (CAN, UART, SPI, I2C, SpaceWire as applicable)
      • Handle timing, concurrency, and fault containment at boundaries
  • Ship code from review to orbit — push through CI/CD, support launch, and deliver over-the-air updates during operations
  • Build prototypes, simulations, and telemetry analysis tooling — develop SIL/HIL rigs and mission sims:
      • Instrument systems
      • Analyze telemetry to validate designs and quantify CPU/memory/bandwidth/power constraints
  • Build autonomous FDIR logic — detect, isolate, and recover from SEUs, sensor drop-outs, and thermal excursions without ground intervention (see the FDIR sketch below)
  • Design the command & telemetry pipeline — implement end-to-end commanding, telemetry, and event logging, transforming CCSDS frames in orbit into MQTT topics and cloud dashboards on the ground (see the CCSDS header sketch below)
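To ground the subsystem-interface bullet, here is a minimal sketch of a bus-agnostic driver API that keeps deadlines and typed errors at the subsystem boundary. It is written in Rust since the role names Rust/C/C++; the SubsystemLink trait, the BusError variants, and the loopback stub are hypothetical illustrations, not NewOrbit's actual ICDs.

    // Hypothetical bus-agnostic driver interface, sketching how an
    // ICD-driven API can keep timing and fault containment at the boundary.
    use std::time::Duration;

    /// Faults are contained at the boundary: a misbehaving subsystem
    /// returns a typed error instead of propagating garbage upward.
    #[allow(dead_code)]
    #[derive(Debug)]
    enum BusError {
        Timeout,
        Nack,
        CrcMismatch,
    }

    /// One interface per ICD; CAN, UART, SPI, or I2C backends implement it.
    trait SubsystemLink {
        /// Write a command frame, bounded by a deadline to protect the caller.
        fn send(&mut self, frame: &[u8], deadline: Duration) -> Result<(), BusError>;
        /// Read a response frame into `buf`, returning the byte count.
        fn recv(&mut self, buf: &mut [u8], deadline: Duration) -> Result<usize, BusError>;
    }

    /// A loopback stub standing in for a real CAN/UART backend.
    struct Loopback { last: Vec<u8> }

    impl SubsystemLink for Loopback {
        fn send(&mut self, frame: &[u8], _deadline: Duration) -> Result<(), BusError> {
            self.last = frame.to_vec();
            Ok(())
        }
        fn recv(&mut self, buf: &mut [u8], _deadline: Duration) -> Result<usize, BusError> {
            let n = self.last.len().min(buf.len());
            buf[..n].copy_from_slice(&self.last[..n]);
            Ok(n)
        }
    }

    fn main() {
        let mut link = Loopback { last: Vec::new() };
        let deadline = Duration::from_millis(50);
        link.send(&[0xA5, 0x01], deadline).expect("send failed");
        let mut buf = [0u8; 8];
        let n = link.recv(&mut buf, deadline).expect("recv failed");
        assert_eq!(&buf[..n], &[0xA5, 0x01]);
    }

A trait shape like this also serves the SIL/HIL bullet: test rigs can substitute recorded or simulated backends without touching application logic.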
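The FDIR bullet describes a detect/isolate/recover loop. Below is a minimal sketch of that pattern under stated assumptions: the Channel type, the three-strike debounce threshold, and the recovery action are all invented for illustration; real flight software would run this from an RTOS task with watchdogs and persistent fault logging.

    // Minimal FDIR (Fault Detection, Isolation, Recovery) sketch.
    // Names and thresholds are hypothetical.
    #[derive(Debug, Clone, Copy, PartialEq)]
    enum FaultState {
        Nominal,
        Isolated,   // faulty unit taken off the bus
        Recovering, // redundant unit being brought up
    }

    struct Channel {
        name: &'static str,
        state: FaultState,
        consecutive_errors: u8,
    }

    impl Channel {
        const ERROR_LIMIT: u8 = 3; // debounce: 3 bad readings before isolating

        fn new(name: &'static str) -> Self {
            Self { name, state: FaultState::Nominal, consecutive_errors: 0 }
        }

        /// Detect: feed each telemetry sample through a plausibility check.
        fn ingest(&mut self, reading: Option<f32>, valid_range: (f32, f32)) {
            let ok = matches!(reading, Some(v) if v >= valid_range.0 && v <= valid_range.1);
            if ok {
                self.consecutive_errors = 0;
                return;
            }
            self.consecutive_errors += 1;
            // Isolate: after repeated failures, stop trusting the unit so a
            // single-event upset or drop-out cannot reach the control loops.
            if self.consecutive_errors >= Self::ERROR_LIMIT && self.state == FaultState::Nominal {
                self.state = FaultState::Isolated;
                println!("FDIR: isolating {}", self.name);
                self.recover();
            }
        }

        /// Recover: switch to a redundant unit, with no ground in the loop.
        fn recover(&mut self) {
            self.state = FaultState::Recovering;
            println!("FDIR: switching {} to redundant unit", self.name);
            // Once the redundant unit passes self-test, return to Nominal.
            self.state = FaultState::Nominal;
            self.consecutive_errors = 0;
        }
    }

    fn main() {
        let mut temp_sensor = Channel::new("thermal-sensor-A");
        // A sensor drop-out (None) followed by an out-of-range reading:
        for sample in [None, Some(1e6), None, Some(21.5)] {
            temp_sensor.ingest(sample, (-80.0, 120.0));
        }
        assert_eq!(temp_sensor.state, FaultState::Nominal);
    }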
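For the command & telemetry pipeline, the on-orbit side starts with CCSDS Space Packets. The sketch below unpacks the standard 6-byte primary header (field layout per CCSDS 133.0-B) and maps it to an MQTT topic; the sat/{tm|tc}/apid/{n} topic scheme is a made-up example, not a real ground-segment convention.

    // Unpack the 6-byte CCSDS Space Packet primary header (CCSDS 133.0-B).
    // The MQTT topic mapping at the end is a hypothetical ground-side choice.
    #[derive(Debug)]
    struct PrimaryHeader {
        version: u8,        // 3 bits, 0 for Space Packets
        packet_type: u8,    // 1 bit: 0 = telemetry, 1 = telecommand
        sec_hdr_flag: bool, // 1 bit: secondary header present
        apid: u16,          // 11 bits: Application Process ID
        seq_flags: u8,      // 2 bits: segmentation flags
        seq_count: u16,     // 14 bits: packet sequence count
        data_len: u16,      // 16 bits: (length of data field) - 1
    }

    fn parse_primary_header(raw: &[u8]) -> Option<PrimaryHeader> {
        if raw.len() < 6 {
            return None;
        }
        let w0 = u16::from_be_bytes([raw[0], raw[1]]);
        let w1 = u16::from_be_bytes([raw[2], raw[3]]);
        let w2 = u16::from_be_bytes([raw[4], raw[5]]);
        Some(PrimaryHeader {
            version: (w0 >> 13) as u8,
            packet_type: ((w0 >> 12) & 0x1) as u8,
            sec_hdr_flag: (w0 >> 11) & 0x1 == 1,
            apid: w0 & 0x07FF,
            seq_flags: (w1 >> 14) as u8,
            seq_count: w1 & 0x3FFF,
            data_len: w2,
        })
    }

    /// Hypothetical ground-side mapping from APID to an MQTT topic.
    fn mqtt_topic(h: &PrimaryHeader) -> String {
        let kind = if h.packet_type == 0 { "tm" } else { "tc" };
        format!("sat/{}/apid/{}", kind, h.apid)
    }

    fn main() {
        // 0x08 0x65 => version 0, type 0 (TM), secondary header, APID 0x065
        let frame = [0x08, 0x65, 0xC0, 0x01, 0x00, 0x0F];
        let hdr = parse_primary_header(&frame).expect("frame too short");
        assert_eq!(hdr.apid, 0x065);
        assert_eq!(mqtt_topic(&hdr), "sat/tm/apid/101");
        println!("{hdr:?}");
    }

The ground half, publishing the packet body to that topic, is omitted here because it depends on the chosen MQTT client stack.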

Requirements:

  • Proven experience writing software for previously flown spacecraft
  • Experience building production embedded or real-time systems in C/C++ or Rust on RTOS or Embedded Linux
  • Strong understanding of standardized space communication protocols such as CCSDS and on-board buses such as CAN, I2C, UART, SPI and SpaceWire
  • Familiarity with ECSS and NASA flight software development tools
  • Experience taking code through unit, integration & HIL tests and shipping via CI/CD (exposure to MISRA-C or ECSS a plus)

What we offer:
  • Equity and Competitive Salary
  • Comprehensive Benefits Package – Including private health insurance with dental and optical coverage, annual healthcare check-ups, and more
  • Hybrid Work – A hybrid setup with one dedicated remote day per week
  • Relocation and Visa Support – We provide a relocation package and sponsor your visa if you’re joining us from abroad

Additional Information:

Job Posted:
February 21, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Command and Data Handling Engineer

Senior Data Engineer

We are looking for a Data Engineer to join our team and support with designing, ...
Location:
Not provided
Salary:
Not provided
Foundever
Expiration Date:
Until further notice
Requirements
  • 7+ years of experience in data engineering
  • Track record of deploying and maintaining complex data systems at an enterprise level within regulated environments
  • Expertise in implementing robust data security measures, access controls, and monitoring systems
  • Proficiency in data modeling and database management
  • Strong programming skills in Python and SQL
  • Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases
  • Deep experience with ETL processes and data pipeline development
  • Strong understanding of data warehousing concepts and best practices
  • Experience with cloud platforms such as AWS and Azure
  • Excellent problem-solving skills and attention to detail
Job Responsibility
  • Design and optimize complex data storage solutions, including data warehouses and data lakes
  • Develop, automate, and maintain data pipelines for efficient and scalable ETL processes
  • Ensure data quality and integrity through data validation, cleansing, and error handling
  • Collaborate with data analysts, machine learning engineers, and software engineers to deliver relevant datasets or data APIs for downstream applications
  • Implement data security measures and access controls to protect sensitive information
  • Monitor data infrastructure for performance and reliability, addressing issues promptly
  • Stay abreast of industry trends and emerging technologies in data engineering
  • Document data pipelines, processes, and best practices for knowledge sharing
  • Lead data governance and compliance efforts to meet regulatory requirements
  • Collaborate with cross-functional teams to drive data-driven decision-making within the organization
What we offer
  • Impactful work
  • Professional growth
  • Competitive compensation
  • Collaborative environment
  • Attractive salary and benefits package
  • Continuous learning and development opportunities
  • A supportive team culture with opportunities for occasional travel for training and industry events

Backend Software Engineer - Reference Data Services

The role is for an experienced Software Engineer on the FACT Team at Clear Stree...
Location:
United States, New York
Salary:
200000.00 - 250000.00 USD / Year
Clear Street
Expiration Date:
Until further notice
Requirements
  • At least eight (8) years of professional experience implementing highly scalable services (we implement our code in Golang)
  • Confidence in designing and building flexible APIs which enable a microservice architecture to reliably deliver consistent data
  • Contributed to systems that deliver solutions to complex business problems that handle massive amounts of data
  • Drawn towards scale, distributed systems, and associated technologies
  • Strong command over object-oriented design patterns, data structures, and algorithms
  • Communicate technical ideas with ease and always look to collaborate to deliver high-quality products
  • Experience that will help you mentor team members, define our engineering standards, and drive a system design approach to building new services
Job Responsibility
  • Work with a team of passionate and highly collaborative engineers to build out our core Platform
  • Own the design and implementation of new features and services
  • Turn the complexity of processing financial transactions across various asset classes into highly scalable services
  • Tackle non-trivial problems that will challenge you to flex your system design muscles, balance trade-offs, and implement clean, efficient code
  • As a voice of experience on the team, you will help mentor teammates, evolve our technical standards and best practices, and further our culture of system design
What we offer
  • Competitive compensation packages
  • Company equity
  • 401k matching
  • Gender neutral parental leave
  • Full medical, dental and vision insurance
  • Lunch stipends
  • Fully stocked kitchens
  • Happy hours
  • A great location
  • Amazing views

Pyspark Data Engineer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality

Python Data Engineer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality

Data Engineer

We are seeking a Data Engineer to spearhead the architecture and optimization of...
Location:
Kenya, Nairobi
Salary:
Not provided
Talent Safari
Expiration Date:
Until further notice
Requirements
  • Bachelor’s or Master’s degree in Engineering, Computer Science, Data Science, or a relevant discipline
  • A minimum of 3 years of professional experience in Data Engineering or a similar technical role
  • Expert-level command of SQL and management systems like PostgreSQL or MySQL
  • Hands-on proficiency with pipeline tools such as Luigi, DBT, or Apache Airflow
  • Practical experience with heavy-lifting technologies like Hadoop, Spark, or Kafka
  • Proven skills with cloud data stacks, specifically Google BigQuery, AWS Redshift, or Azure Data Factory
  • Strong programming logic in Java, Scala, or Python for data processing tasks
  • Familiarity with data integration frameworks and API utilization
  • Understanding of security best practices and compliance frameworks
  • Exceptional problem-solving capabilities with a rigorous eye for detail
Job Responsibility
  • Architect and sustain scalable ETL workflows, guaranteeing consistency and accuracy across diverse data origins
  • Refine and optimize data models and database structures specifically tailored for reporting and analytics
  • Enforce industry best practices regarding data warehousing and storage methodologies
  • Fine-tune data systems to handle the demands of both real-time streams and batch processing
  • Oversee and manage the cloud data environment, utilizing platforms such as AWS, Azure, or GCP
  • Coordinate with software engineers to embed data solutions directly into our product suite
  • Design robust processes for ingesting both structured and unstructured datasets
  • Script automated quality checks and deploy monitoring instrumentation to instantly detect data anomalies
  • Build APIs and services that ensure seamless data interoperability between systems
  • Continuously monitor pipeline health, troubleshooting bottlenecks to maintain an uninterrupted data flow

Data Operations Engineer

We're seeking an early-career data professional who can support development and ...
Location:
United States
Salary:
65000.00 - 90000.00 USD / Year
Personify Health
Expiration Date:
Until further notice
Requirements
  • 2-3 years' experience in data engineering, analytics engineering, or a related technical role
  • AWS Certification (or willingness to obtain within 6-12 months), such as AWS Cloud Practitioner or AWS Developer – Associate
  • Experience handling support tickets or operational data issues strongly preferred (~50% of role)
  • Proficiency in Python (required) and SQL, including writing queries, joins, basic transformations, and troubleshooting
  • Hands-on experience with relational databases (PostgreSQL, Oracle, AWS RDS) and familiarity with basic data warehouse concepts
  • Understanding of ETL/ELT pipelines, data validation, and data quality monitoring
  • Basic knowledge of Linux command line for navigating servers and running scripts
  • Some exposure to cloud environments (AWS, Azure) preferred but not required
  • Familiarity with JIRA and Git/Bitbucket for version control and task management
  • Effective written and verbal communication skills with ability to document findings and processes
Job Responsibility
  • Support data pipelines: Assist in maintaining and troubleshooting ETL/ELT data pipelines used for healthcare and TPA claims processing across on-prem and cloud environments
  • Handle operational support: Manage support tickets (~50% of time), responding to user requests, researching data questions, and helping resolve operational data problems efficiently
  • Work with core technologies: Use Python and SQL to support data extraction, transformation, validation, and loading while monitoring pipeline performance and resolving data issues
  • Monitor and troubleshoot: Review logs, investigate failed jobs, and correct data discrepancies while supporting daily process monitoring including production processes and application performance
  • Maintain data quality: Execute routine data quality checks, maintain documentation, and follow up on accuracy concerns to ensure reliable data across systems
  • Support database operations: Work with data management tasks in systems such as PostgreSQL, Oracle, and cloud-based databases while learning healthcare data formats
  • Collaborate cross-functionally: Partner with Data Analysts, Developers, and business users to understand data needs and support ongoing reporting and data operations
  • Continue learning: Participate in team meetings, sprint activities, and knowledge-sharing sessions while working with senior team members to develop data engineering skills
What we offer
  • Comprehensive medical and dental coverage through our own health solutions
  • Mental health support and wellness programs designed by experts who get it
  • Flexible work arrangements that fit your life
  • Retirement planning support to help you build real wealth for the future
  • Basic Life and AD&D Insurance plus Short-Term and Long-Term Disability protection
  • Employee savings programs and voluntary benefits like Critical Illness and Hospital Indemnity coverage
  • Professional development opportunities and clear career progression paths
  • Mentorship from industry leaders who want to see you succeed
  • Learning budget to invest in skills that matter to your future
  • Unlimited PTO policy

Celonis Technical Lead

Sopra Steria, a leading tech company in Europe, is hiring a Celonis Technical Le...
Location:
India, Noida
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements
  • Sound knowledge and experience of Process Mining using Celonis
  • Strong experience in programming, preferably Vertica SQL/PQL and Python
  • Experience handling large datasets
  • Working knowledge of data models and data structures
  • Technical expertise with data mining
  • Experience with Time Series Data
  • Ability to codify processes into step-by-step linear commands
  • Experience with data visualization tools such as Power BI and Tableau
  • Professional experience writing performant SQL queries and improving existing code
  • Experience working with relational and non-relational databases
Job Responsibility
  • Deliver implementation projects for clients from various industries, processing data at various levels of complexity
  • Translate complex functional and technical requirements into data models
  • Setup of process data extractions including table and field mappings
  • Estimating and modeling memory requirements for data processing
  • Prepare and connect to On-premise/Cloud source system, extract and transform customer data, and develop process- and customer-specific studies
  • Solicit requirements for Business Process Mining models, including what data they will utilise and how the organisation will use them when they are built
  • Build accurate, reliable, and informative business process mining models to enable the company to expand even more quickly
  • Build the infrastructure required for optimal extraction, transformation and loading of data from disparate data sources
  • Apply analytics and modelling to own and actively drive process improvement projects and initiatives within the relevant function
  • Maintain familiarity with the Celonis platform and write documentation on its technical procedures and processes
What we offer
  • Inclusive and respectful work environment
  • Open to people with disabilities

Python Developer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality