CrawlJobs

Command and Data Handling Engineer

NewSpace Technical (newspacetechnical.com)

Location:
Reading, United Kingdom


Contract Type:
Employment contract


Salary:

Not provided

Job Description:

A high-impact opportunity to join a fast-scaling space start-up building ultra-low Earth orbit satellites, enabled by an innovative propulsion system designed to overcome atmospheric drag. You’ll take ownership of Command & Data Handling (C&DH) – the spacecraft’s central nervous system – bridging on-board computing, avionics, comms, and flight software to deliver reliable telemetry, commanding, and autonomy in a harsh on-orbit environment.

Job Responsibility:

  • Own the end-to-end C&DH architecture (commanding, telemetry, data routing, on-board compute)
  • Define and manage spacecraft interfaces (ICDs) across avionics, payload, comms, and power subsystems
  • Design fault-tolerant command sequencing, mode management, and safe state behaviours
  • Lead integration of on-board data buses and protocols (e.g. CAN, I2C, SPI, UART, SpaceWire)
  • Support test & verification through SIL/HIL, functional testing, and operational readiness
  • Work closely with flight software, AIT, and systems engineering to deliver flight-ready capability
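The fault-tolerant command sequencing and safe-state bullets above can be sketched as a toy state machine (illustrative only, not from the posting; the mode names and transition rules are invented): a commanded mode change is honoured only if explicitly allowed, while a detected fault unconditionally drops the spacecraft to the safe state.

```python
# Toy spacecraft mode manager: invented modes and rules, for illustration only.
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    SAFE = auto()
    RECOVERY = auto()

# Allowed commanded transitions; a fault forces SAFE regardless of current mode.
ALLOWED = {
    Mode.NOMINAL: {Mode.SAFE},
    Mode.SAFE: {Mode.RECOVERY},
    Mode.RECOVERY: {Mode.NOMINAL, Mode.SAFE},
}

class ModeManager:
    def __init__(self):
        self.mode = Mode.NOMINAL

    def report_fault(self):
        # Fault-tolerant behaviour: unconditionally enter the safe state.
        self.mode = Mode.SAFE

    def request(self, target: Mode) -> bool:
        # A commanded transition is accepted only if the table allows it.
        if target in ALLOWED[self.mode]:
            self.mode = target
            return True
        return False
```

The key design point this illustrates is that safing is not a commanded transition at all: it bypasses the transition table so no sequencing bug can block it.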

Requirements:

  • Strong experience in spacecraft avionics / C&DH / embedded flight systems (or equivalent safety-critical systems)
  • Confidence owning system interfaces and managing integration across multiple subsystems
  • Solid understanding of spacecraft telemetry/telecommand concepts and operational workflows
  • Experience with embedded protocols/buses (CAN, I2C, SPI, UART, etc.)
  • Engineering discipline around verification, documentation, and configuration control
  • Right to work in the UK

Nice to have:

  • Experience with spacecraft OBCs, radios, EPS, payload interfaces
  • Knowledge of autonomy/FDIR concepts and designing for failure
  • Familiarity with AIT workflows, EGSE, acceptance testing and commissioning
  • Prior work in NewSpace or fast-iteration engineering environments

What we offer:
  • Equity
  • Benefits

Additional Information:

Job Posted:
January 22, 2026

Employment Type:
Full-time

Work Type:
On-site work



Similar Jobs for Command and Data Handling Engineer

Senior Data Engineer

We are looking for a Data Engineer to join our team and support with designing, ...
Location:

Salary:
Not provided
Foundever (foundever.com)
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience in data engineering
  • Track record of deploying and maintaining complex data systems at an enterprise level within regulated environments
  • Expertise in implementing robust data security measures, access controls, and monitoring systems
  • Proficiency in data modeling and database management
  • Strong programming skills in Python and SQL
  • Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases
  • Deep experience with ETL processes and data pipeline development
  • Strong understanding of data warehousing concepts and best practices
  • Experience with cloud platforms such as AWS and Azure
  • Excellent problem-solving skills and attention to detail
Job Responsibility:
  • Design and optimize complex data storage solutions, including data warehouses and data lakes
  • Develop, automate, and maintain data pipelines for efficient and scalable ETL processes
  • Ensure data quality and integrity through data validation, cleansing, and error handling
  • Collaborate with data analysts, machine learning engineers, and software engineers to deliver relevant datasets or data APIs for downstream applications
  • Implement data security measures and access controls to protect sensitive information
  • Monitor data infrastructure for performance and reliability, addressing issues promptly
  • Stay abreast of industry trends and emerging technologies in data engineering
  • Document data pipelines, processes, and best practices for knowledge sharing
  • Lead data governance and compliance efforts to meet regulatory requirements
  • Collaborate with cross-functional teams to drive data-driven decision-making within the organization
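The data validation, cleansing, and error handling mentioned in the responsibilities above are often implemented as a partition step: each record is checked against rules, and failures are routed aside with their reasons rather than dropped silently. This sketch is illustrative only; the field names and rules are invented, not from the posting.

```python
# Illustrative row-level validation; field names and rules are invented.

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one record (empty = valid)."""
    errors = []
    if not row.get("id"):
        errors.append("missing id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

def partition(rows):
    """Split records into (valid, rejected-with-reasons) for error handling."""
    valid, rejected = [], []
    for row in rows:
        errs = validate_row(row)
        if errs:
            rejected.append((row, errs))
        else:
            valid.append(row)
    return valid, rejected
```

Keeping the rejected rows with their reasons, rather than discarding them, is what makes downstream error handling and data-quality reporting possible.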
What we offer:
  • Impactful work
  • Professional growth
  • Competitive compensation
  • Collaborative environment
  • Attractive salary and benefits package
  • Continuous learning and development opportunities
  • A supportive team culture with opportunities for occasional travel for training and industry events

Backend Software Engineer - Reference Data Services

The role is for an experienced Software Engineer on the FACT Team at Clear Stree...
Location:
United States , New York
Salary:
200000.00 - 250000.00 USD / Year
Clear Street (clearstreet.io)
Expiration Date:
Until further notice
Requirements:
  • At least eight (8) years of professional experience implementing highly scalable services (we implement our code in Golang)
  • Confidence in designing and building flexible APIs which enable a microservice architecture to reliably deliver consistent data
  • Contributed to systems that deliver solutions to complex business problems that handle massive amounts of data
  • Drawn towards scale, distributed systems, and associated technologies
  • Strong command over object-oriented design patterns, data structures, and algorithms
  • Communicate technical ideas with ease and always look to collaborate to deliver high-quality products
  • Your experience will help you mentor team members, define our engineering standards, and drive a system-design approach to building new services
Job Responsibility:
  • Work with a team of passionate and highly collaborative engineers to build out our core Platform
  • Own the design and implementation of new features and services
  • Turn the complexity of processing financial transactions across various asset classes into highly scalable services
  • Tackle non-trivial problems that will challenge you to flex your system-design muscles, balance trade-offs, and implement clean, efficient code
  • As a voice of experience in the team, you will help mentor teammates, evolve our technical standards and best practices, and further our culture of system designs
What we offer:
  • Competitive compensation packages
  • Company equity
  • 401k matching
  • Gender neutral parental leave
  • Full medical, dental and vision insurance
  • Lunch stipends
  • Fully stocked kitchens
  • Happy hours
  • A great location
  • Amazing views

Employment Type:
Full-time

Pyspark Data Engineer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India , Chennai
Salary:
Not provided
Citi (citi.com)
Expiration Date:
Until further notice
Requirements:
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility:
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality

Employment Type:
Full-time
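The SQL extract-transform-load responsibilities in the listing above can be shown in miniature (illustrative only; stdlib sqlite3 stands in for the Oracle/JDBC stack the posting names, and the table and column names are invented):

```python
# Minimal ETL round trip: extract raw rows, aggregate, load a reporting table.
# sqlite3 is a stand-in for the posting's Oracle/JDBC stack; names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades_raw (sym TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades_raw VALUES (?, ?)",
    [("AAPL", 10), ("AAPL", -3), ("MSFT", 7)],
)

# Transform + load: aggregate net quantity per symbol into a reporting table.
conn.execute(
    "CREATE TABLE positions AS "
    "SELECT sym, SUM(qty) AS net_qty FROM trades_raw GROUP BY sym"
)
rows = conn.execute("SELECT sym, net_qty FROM positions ORDER BY sym").fetchall()
```

Pushing the aggregation into SQL rather than Python is the usual optimization the listing alludes to: the database does the heavy lifting and only the reduced result crosses the connection.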

Python Data Engineer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India , Chennai
Salary:
Not provided
Citi (citi.com)
Expiration Date:
Until further notice
Requirements:
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility:
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality

Employment Type:
Full-time

Data Engineer

We are seeking a Data Engineer to spearhead the architecture and optimization of...
Location:
Kenya , Nairobi
Salary:
Not provided
Talent Safari (talentsafari.io)
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Engineering, Computer Science, Data Science, or a relevant discipline
  • A minimum of 3 years of professional experience in Data Engineering or a similar technical role
  • Expert-level command of SQL and database management systems such as PostgreSQL or MySQL
  • Hands-on proficiency with pipeline tools such as Luigi, DBT, or Apache Airflow
  • Practical experience with big data technologies such as Hadoop, Spark, or Kafka
  • Proven skills with cloud data stacks, specifically Google BigQuery, AWS Redshift, or Azure Data Factory
  • Strong programming logic in Java, Scala, or Python for data processing tasks
  • Familiarity with data integration frameworks and API utilization
  • Understanding of security best practices and compliance frameworks
  • Exceptional problem-solving capabilities with a rigorous eye for detail
Job Responsibility:
  • Architect and sustain scalable ETL workflows, guaranteeing consistency and accuracy across diverse data origins
  • Refine and optimize data models and database structures specifically tailored for reporting and analytics
  • Enforce industry best practices regarding data warehousing and storage methodologies
  • Fine-tune data systems to handle the demands of both real-time streams and batch processing
  • Oversee and manage the cloud data environment, utilizing platforms such as AWS, Azure, or GCP
  • Coordinate with software engineers to embed data solutions directly into our product suite
  • Design robust processes for ingesting both structured and unstructured datasets
  • Script automated quality checks and deploy monitoring instrumentation to instantly detect data anomalies
  • Build APIs and services that ensure seamless data interoperability between systems
  • Continuously monitor pipeline health, troubleshooting bottlenecks to maintain an uninterrupted data flow

Employment Type:
Full-time

Data Operations Engineer

We're seeking an early-career data professional who can support development and ...
Location:
United States
Salary:
65000.00 - 90000.00 USD / Year
Personify Health (personifyhealth.com)
Expiration Date:
Until further notice
Requirements:
  • 2-3 years of experience in data engineering, analytics engineering, or a related technical role
  • AWS Certification (or willingness to obtain within 6-12 months), such as AWS Cloud Practitioner or AWS Developer – Associate
  • Experience handling support tickets or operational data issues strongly preferred (~50% of role)
  • Proficiency in Python (required) and SQL, including writing queries, joins, basic transformations, and troubleshooting
  • Hands-on experience with relational databases (PostgreSQL, Oracle, AWS RDS) and familiarity with basic data warehouse concepts
  • Understanding of ETL/ELT pipelines, data validation, and data quality monitoring
  • Basic knowledge of Linux command line for navigating servers and running scripts
  • Some exposure to cloud environments (AWS, Azure) preferred but not required
  • Familiarity with JIRA and Git/Bitbucket for version control and task management
  • Effective written and verbal communication skills with ability to document findings and processes
Job Responsibility:
  • Support data pipelines: Assist in maintaining and troubleshooting ETL/ELT data pipelines used for healthcare and TPA claims processing across on-prem and cloud environments
  • Handle operational support: Manage support tickets (~50% of time), responding to user requests, researching data questions, and helping resolve operational data problems efficiently
  • Work with core technologies: Use Python and SQL to support data extraction, transformation, validation, and loading while monitoring pipeline performance and resolving data issues
  • Monitor and troubleshoot: Review logs, investigate failed jobs, and correct data discrepancies while supporting daily process monitoring including production processes and application performance
  • Maintain data quality: Execute routine data quality checks, maintain documentation, and follow up on accuracy concerns to ensure reliable data across systems
  • Support database operations: Work with data management tasks in systems such as PostgreSQL, Oracle, and cloud-based databases while learning healthcare data formats
  • Collaborate cross-functionally: Partner with Data Analysts, Developers, and business users to understand data needs and support ongoing reporting and data operations
  • Continue learning: Participate in team meetings, sprint activities, and knowledge-sharing sessions while working with senior team members to develop data engineering skills
What we offer:
  • Comprehensive medical and dental coverage through our own health solutions
  • Mental health support and wellness programs designed by experts who get it
  • Flexible work arrangements that fit your life
  • Retirement planning support to help you build real wealth for the future
  • Basic Life and AD&D Insurance plus Short-Term and Long-Term Disability protection
  • Employee savings programs and voluntary benefits like Critical Illness and Hospital Indemnity coverage
  • Professional development opportunities and clear career progression paths
  • Mentorship from industry leaders who want to see you succeed
  • Learning budget to invest in skills that matter to your future
  • Unlimited PTO policy

Employment Type:
Full-time

Celonis Technical Lead

Sopra Steria, a leading tech company in Europe, is hiring a Celonis Technical Le...
Location:
India , Noida
Salary:
Not provided
Sopra Steria (soprasteria.com)
Expiration Date:
Until further notice
Requirements:
  • Sound knowledge and experience of Process Mining using Celonis
  • Strong experience in programming, preferably Vertica SQL/PQL and Python
  • Experience handling large datasets
  • Working knowledge of data models and data structures
  • Technical expertise with data mining
  • Experience with Time Series Data
  • Ability to codify processes into step-by-step linear commands
  • Experience with data visualization tools such as Power BI and Tableau
  • Professional experience writing performant SQL queries and improving existing code
  • Experience working with relational and non-relational databases
Job Responsibility:
  • Implementation projects for clients from various industries that process data at various levels of complexity
  • Translate complex functional and technical requirements into data models
  • Setup of process data extractions including table and field mappings
  • Estimating and modeling memory requirements for data processing
  • Prepare and connect to On-premise/Cloud source system, extract and transform customer data, and develop process- and customer-specific studies
  • Solicit requirements for Business Process Mining models, including what data they will utilise and how the organisation will use them when they are built
  • Build accurate, reliable, and informative business process mining models that enable the company to expand even more quickly
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from disparate data sources
  • Apply analytics and modelling to own and actively drive process improvement projects and initiatives within the relevant function
  • Maintain familiarity with the Celonis platform and write documentation on its technical procedures and processes
What we offer:
  • Inclusive and respectful work environment
  • Open to people with disabilities

Employment Type:
Full-time

Staff Data Ops Engineer - Platform

We are looking for a Staff Data Ops Engineer - Platform to join the Data & AI Pl...
Location:
France , Paris
Salary:
Not provided
Doctolib (doctolib.fr)
Expiration Date:
Until further notice
Requirements:
  • 7+ years of post-graduation experience as a Staff Data Platform Engineer, Staff Data Ops Engineer, Staff Site Reliability Engineer, or in a similar role, with a history of architecting and scaling robust data platforms
  • Extensive experience with Google Cloud Platform and a command of Kubernetes & Terraform for automated deployments
  • Authority on implementing network and IAM security best practices
  • Deep technical proficiency in orchestrating data pipelines using Airflow or Dagster, deploying applications to the cloud, and leveraging modern data warehouses such as BigQuery
  • Highly skilled in programming with Python, and have a solid understanding of software development principles
  • Excellent troubleshooter who excels at diagnosing and fixing data infrastructure and identifying performance bottlenecks
  • Strong communicator who can articulate complex technical concepts to both technical and non-technical audiences
Job Responsibility:
  • Design and implement enterprise-scale data infrastructure strategies, conducting thorough impact and cost analysis for major technical decisions, and establishing architectural standards across the organization
  • Build and optimize complex, multi-region data pipelines handling petabyte-scale datasets, ensuring 99.9% reliability and implementing advanced monitoring and alerting systems
  • Lead cost analysis initiatives, identify optimization opportunities across our data stack, and implement solutions that reduce infrastructure spend while improving performance and reliability
  • Provide technical guidance to data engineers and cross-functional teams, conduct architecture reviews, and drive adoption of best practices in DataOps, security, and governance
  • Evaluate emerging technologies, conduct proof-of-concepts for new data tools and platforms, and lead the technical roadmap for data infrastructure modernization
What we offer:
  • Free comprehensive health insurance for you and your children
  • Parent Care Program: receive one additional month of leave on top of the legal parental leave
  • Free mental health and coaching services through our partner Moka.care
  • For caregivers and workers with disabilities, a package including an adaptation of the remote policy, extra days off for medical reasons, and psychological support
  • Work from EU countries and the UK for up to 10 days per year, thanks to our flexibility days policy
  • Work Council subsidy to refund part of sport club membership or creative class
  • Up to 14 days of RTT
  • Lunch voucher with Swile card

Employment Type:
Full-time