Command and Data Handling Engineer

Boeing

Location:
United States, El Segundo

Contract Type:
Employment contract

Salary:
124,100.00 - 247,250.00 USD / Year

Job Description:

At Boeing, we innovate and collaborate to make the world a better place. We're committed to fostering an environment for every teammate that's welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.

Boeing Defense, Space & Security (BDS), Space Mission Systems seeks an Experienced, Lead, or Senior Command and Data Handling (C&DH) Engineer (Level 3, 4, or 5) to develop and manage C&DH and Telemetry and Command (T&C) Digital Subsystems for a Boeing secured satellite network program in El Segundo, CA. C&DH subsystem engineering assignments include requirements definition and flow-down, systems design and analysis, systems integration, test, and verification. You will resolve technical issues and ensure hardware and software are flight-worthy prior to delivery.

The ability to work collaboratively in multidisciplinary teams is crucial. You will work with management and experts across multiple disciplines, including subsystem engineering, software architecture, software validation, Communications Security (COMSEC) design, Transmission Security (TRANSEC) design, electronic hardware testing, spacecraft-level testing, and end-to-end hardware and software integration systems engineering. Familiarity with system modeling and simulation tools, including Model-Based Systems Engineering (MBSE) tools (e.g., SysML, Cameo), is a plus.

Job Responsibility:

  • Gather, define, and document system level requirements to support flight control and mission or trajectory requirements definition
  • Provide technical, cost, and schedule status to program and function
  • Participate and support with cost and schedule reduction initiatives
  • Develop and maintain comprehensive subsystem specifications and unit specs with REA support
  • Work on subsystem-to-subsystem interface requirements and C&DH subsystem hardware Subcontract/Contract Data Requirements List (SDRLs/CDRLs)
  • Create and release subsystem-level analyses such as traffic analysis, telemetry (TM) accuracy, sampling, and command pacing
  • Develop the C&DH subsystem architecture (including block diagrams) and technology insertion plans (including the COMSEC roadmap)
  • Assure COMSEC implementation and CNSSP-12 compliance including cybersecurity and information assurance
  • Prepare and present C&DH subsystem design reviews (PDR, CDR, etc.) for customer and Program SEIT interface
  • Provide inputs to the Spacecraft Database, Flight Software (FSW), and Flight ROPs
  • Lead (Level 5) or participate in (Level 4) C&DH subsystem in-orbit anomaly investigations and support the Customer Operations Service Center
  • Define and maintain satellite C&DH process documentation (DMS guides)

Requirements:

  • Bachelor of Science degree in Engineering, Engineering Technology (including Manufacturing Technology), Computer Science, Data Science, Mathematics, Physics, Chemistry, or non-US equivalent qualifications directly related to the work statement
  • 5+ years of experience working in aerospace, defense, or similar high-tech industry
  • Foundational knowledge of networking principles including TCP/IP, routing, switching, and network protocols relevant to aerospace systems
  • Willingness to travel 10% of the time, domestically and internationally

Nice to have:

  • Experience developing solutions to complex technical problems
  • Experience working with CCSDS and IP (Internet Protocol)
  • Experience working with MIL-STD-1553, SpaceWire, CAN, LVDS or Ethernet
  • Experience in MATLAB, Python, and VBA
  • Strong understanding of network engineering concepts including network architecture design, network security, and network troubleshooting
  • Experience with network simulation and analysis tools
  • Familiarity with satellite communication network and protocols
  • Knowledge of cybersecurity best practices for networked embedded systems
  • Basic understanding of the Model-Based Systems Engineering (MBSE) principles and tools to support system design, analysis, and verification is a plus
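
Several of the desired skills above (CCSDS, telemetry handling, scripting in Python) come together in day-to-day C&DH work. As a hedged sketch, not any Boeing- or program-specific format, the 6-byte primary header of a CCSDS Space Packet (CCSDS 133.0-B) can be decoded in Python using only the standard library:

```python
import struct

def parse_ccsds_primary_header(packet: bytes) -> dict:
    """Decode the 6-byte CCSDS Space Packet primary header.

    Illustrative sketch only; the field layout follows the public
    CCSDS 133.0-B Space Packet Protocol, not any program format.
    """
    if len(packet) < 6:
        raise ValueError("need at least 6 bytes for the primary header")
    word1, word2, length = struct.unpack(">HHH", packet[:6])
    return {
        "version": (word1 >> 13) & 0x7,       # 3-bit packet version number
        "type": (word1 >> 12) & 0x1,          # 0 = telemetry, 1 = telecommand
        "sec_hdr_flag": (word1 >> 11) & 0x1,  # secondary header present?
        "apid": word1 & 0x7FF,                # 11-bit application process ID
        "seq_flags": (word2 >> 14) & 0x3,     # 3 = unsegmented user data
        "seq_count": word2 & 0x3FFF,          # 14-bit sequence counter
        "data_length": length,                # (packet data field length) - 1
    }

# Example: a telemetry packet on APID 0x123, unsegmented, sequence count 7,
# carrying one byte of user data (so the length field is 0).
hdr = parse_ccsds_primary_header(bytes.fromhex("0123C0070000") + b"\x00")
print(hdr["apid"], hdr["seq_count"], hdr["type"])  # prints: 291 7 0
```

Fields like the APID, sequence count, and packet length are exactly the inputs to the traffic, sampling, and command-pacing analyses listed under the responsibilities above.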

What we offer:

  • Generous company match to your 401(k)
  • Industry-leading tuition assistance program pays your institution directly
  • Fertility, adoption, and surrogacy benefits
  • Up to $10,000 gift match when you support your favorite nonprofit organizations
  • Health insurance
  • Flexible spending accounts
  • Health savings accounts
  • Retirement savings plans
  • Life and disability insurance programs
  • Paid and unpaid time away from work

Additional Information:

Job Posted:
May 05, 2026

Expiration:
May 16, 2026

Employment Type:
Full-time

Work Type:
On-site work

Similar Jobs for Command and Data Handling Engineer

Senior Data Engineer

We are looking for a Data Engineer to join our team and support with designing, ...
Location:
Not provided
Salary:
Not provided
Foundever
Expiration Date:
Until further notice
Requirements:
  • Minimum of 7+ years of experience in data engineering
  • Track record of deploying and maintaining complex data systems at an enterprise level within regulated environments
  • Expertise in implementing robust data security measures, access controls, and monitoring systems
  • Proficiency in data modeling and database management
  • Strong programming skills in Python and SQL
  • Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases
  • Deep experience with ETL processes and data pipeline development
  • Strong understanding of data warehousing concepts and best practices
  • Experience with cloud platforms such as AWS and Azure
  • Excellent problem-solving skills and attention to detail
Job Responsibility:
  • Design and optimize complex data storage solutions, including data warehouses and data lakes
  • Develop, automate, and maintain data pipelines for efficient and scalable ETL processes
  • Ensure data quality and integrity through data validation, cleansing, and error handling
  • Collaborate with data analysts, machine learning engineers, and software engineers to deliver relevant datasets or data APIs for downstream applications
  • Implement data security measures and access controls to protect sensitive information
  • Monitor data infrastructure for performance and reliability, addressing issues promptly
  • Stay abreast of industry trends and emerging technologies in data engineering
  • Document data pipelines, processes, and best practices for knowledge sharing
  • Lead data governance and compliance efforts to meet regulatory requirements
  • Collaborate with cross-functional teams to drive data-driven decision-making within the organization
What we offer:
  • Impactful work
  • Professional growth
  • Competitive compensation
  • Collaborative environment
  • Attractive salary and benefits package
  • Continuous learning and development opportunities
  • A supportive team culture with opportunities for occasional travel for training and industry events

Backend Software Engineer - Reference Data Services

The role is for an experienced Software Engineer on the FACT Team at Clear Stree...
Location:
United States, New York
Salary:
200,000.00 - 250,000.00 USD / Year
Clear Street
Expiration Date:
Until further notice
Requirements:
  • At least eight (8) years of professional experience implementing highly scalable services (we implement our code in Golang)
  • Confidence in designing and building flexible APIs which enable a microservice architecture to reliably deliver consistent data
  • Contributed to systems that deliver solutions to complex business problems that handle massive amounts of data
  • Drawn towards scale, distributed systems, and associated technologies
  • Strong command over object-oriented design patterns, data structures, and algorithms
  • Communicate technical ideas with ease and always look to collaborate to deliver high-quality products
  • Your experience will help you mentor team members, define our engineering standards, and drive a system-design approach to building new services
Job Responsibility:
  • Work with a team of passionate and highly collaborative engineers to build out our core Platform
  • Own the design and implementation of new features and services
  • Turn the complexity of processing financial transactions across various asset classes into highly scalable services
  • Tackle non-trivial problems that will challenge you to flex your system-design muscles, balance trade-offs, and implement clean, efficient code
  • As a voice of experience on the team, you will help mentor teammates, evolve our technical standards and best practices, and further our culture of system design
What we offer:
  • Competitive compensation packages
  • Company equity
  • 401k matching
  • Gender neutral parental leave
  • Full medical, dental and vision insurance
  • Lunch stipends
  • Fully stocked kitchens
  • Happy hours
  • A great location
  • Amazing views

Pyspark Data Engineer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility:
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality
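
The SQL-centric responsibilities above (extract, transform, and load data; develop and optimize queries) can be sketched in miniature. This example uses Python's built-in sqlite3 in place of the Oracle/JDBC/PySpark stack named in the posting, and the trades table and its columns are hypothetical:

```python
import sqlite3

# Minimal extract-transform-load sketch. The "trades" schema is invented;
# sqlite3 stands in for Oracle so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [(1, "AAPL", 100, 190.5), (2, "MSFT", 50, 410.0), (3, "AAPL", -20, 191.0)],
)

# Transform: aggregate net position and notional per symbol in SQL.
rows = conn.execute(
    """
    SELECT symbol,
           SUM(qty)         AS net_qty,
           SUM(qty * price) AS notional
    FROM trades
    GROUP BY symbol
    ORDER BY symbol
    """
).fetchall()

# Load: materialize the aggregated result into a summary table.
conn.execute("CREATE TABLE position_summary (symbol TEXT, net_qty INTEGER, notional REAL)")
conn.executemany("INSERT INTO position_summary VALUES (?, ?, ?)", rows)
print(rows)  # [('AAPL', 80, 15230.0), ('MSFT', 50, 20500.0)]
```

Pushing the aggregation into SQL rather than looping in Python is the same optimization instinct the role asks for at Oracle/PySpark scale.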

Python Data Engineer

We are seeking a highly motivated and intuitive Python Developer to join our dyn...
Location:
India, Chennai
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 4-7 years of relevant experience in the Financial Service industry
  • Strong Proficiency in Python: Excellent command of Python programming, including object-oriented principles, data structures, and algorithms
  • PySpark Experience: Demonstrated experience with PySpark for big data processing and analysis
  • Database Expertise: Proven experience working with relational databases, specifically Oracle, and connecting applications using JDBC
  • SQL Mastery: Advanced SQL querying skills for complex data extraction, manipulation, and optimization
  • Big Data Handling: Experience in working with and processing large datasets efficiently
  • Data Streaming: Familiarity with data streaming concepts and technologies (e.g., Kafka, Spark Streaming) for processing continuous data flows
  • Data Analysis Libraries: Proficient in using data analysis libraries such as Pandas for data manipulation and exploration
  • Software Engineering Principles: Solid understanding of software engineering best practices, including version control (Git), testing, and code review
  • Problem-Solving: Intuitive problem-solver with a self-starter mindset and the ability to work independently and as part of a team
Job Responsibility:
  • Develop, test, and deploy high-quality Python code for data migration, data profiling, and data processing
  • Design and implement scalable solutions for working with large and complex datasets, ensuring data integrity and performance
  • Utilize PySpark for distributed data processing and analytics on large-scale data platforms
  • Develop and optimize SQL queries for various database systems, including Oracle, to extract, transform, and load data efficiently
  • Integrate Python applications with JDBC-compliant databases (e.g., Oracle) for seamless data interaction
  • Implement data streaming solutions to process real-time or near real-time data efficiently
  • Perform in-depth data analysis using Python libraries, especially Pandas, to understand data characteristics, identify anomalies, and support profiling efforts
  • Collaborate with data architects, data engineers, and business stakeholders to understand requirements and translate them into technical specifications
  • Contribute to the design and architecture of data solutions, ensuring best practices in data management and engineering
  • Troubleshoot and resolve technical issues related to data pipelines, performance, and data quality

Data Engineer

We are seeking a Data Engineer to spearhead the architecture and optimization of...
Location:
Kenya, Nairobi
Salary:
Not provided
Talent Safari
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Engineering, Computer Science, Data Science, or a relevant discipline
  • A minimum of 3 years of professional experience in Data Engineering or a similar technical role
  • Expert-level command of SQL and management systems like PostgreSQL or MySQL
  • Hands-on proficiency with pipeline tools such as Luigi, DBT, or Apache Airflow
  • Practical experience with heavy-lifting technologies like Hadoop, Spark, or Kafka
  • Proven skills with cloud data stacks, specifically Google BigQuery, AWS Redshift, or Azure Data Factory
  • Strong programming logic in Java, Scala, or Python for data processing tasks
  • Familiarity with data integration frameworks and API utilization
  • Understanding of security best practices and compliance frameworks
  • Exceptional problem-solving capabilities with a rigorous eye for detail
Job Responsibility:
  • Architect and sustain scalable ETL workflows, guaranteeing consistency and accuracy across diverse data origins
  • Refine and optimize data models and database structures specifically tailored for reporting and analytics
  • Enforce industry best practices regarding data warehousing and storage methodologies
  • Fine-tune data systems to handle the demands of both real-time streams and batch processing
  • Oversee and manage the cloud data environment, utilizing platforms such as AWS, Azure, or GCP
  • Coordinate with software engineers to embed data solutions directly into our product suite
  • Design robust processes for ingesting both structured and unstructured datasets
  • Script automated quality checks and deploy monitoring instrumentation to instantly detect data anomalies
  • Build APIs and services that ensure seamless data interoperability between systems
  • Continuously monitor pipeline health, troubleshooting bottlenecks to maintain an uninterrupted data flow

Data Operations Engineer

We're seeking an early-career data professional who can support development and ...
Location:
United States
Salary:
65,000.00 - 90,000.00 USD / Year
Personify Health
Expiration Date:
Until further notice
Requirements:
  • 2-3 years of experience in data engineering, analytics engineering, or a related technical role
  • AWS Certification (or willingness to obtain within 6-12 months), such as AWS Cloud Practitioner or AWS Developer – Associate
  • Experience handling support tickets or operational data issues strongly preferred (~50% of role)
  • Proficiency in Python (required) and SQL, including writing queries, joins, basic transformations, and troubleshooting
  • Hands-on experience with relational databases (PostgreSQL, Oracle, AWS RDS) and familiarity with basic data warehouse concepts
  • Understanding of ETL/ELT pipelines, data validation, and data quality monitoring
  • Basic knowledge of Linux command line for navigating servers and running scripts
  • Some exposure to cloud environments (AWS, Azure) preferred but not required
  • Familiarity with JIRA and Git/Bitbucket for version control and task management
  • Effective written and verbal communication skills with ability to document findings and processes
Job Responsibility:
  • Support data pipelines: Assist in maintaining and troubleshooting ETL/ELT data pipelines used for healthcare and TPA claims processing across on-prem and cloud environments
  • Handle operational support: Manage support tickets (~50% of time), responding to user requests, researching data questions, and helping resolve operational data problems efficiently
  • Work with core technologies: Use Python and SQL to support data extraction, transformation, validation, and loading while monitoring pipeline performance and resolving data issues
  • Monitor and troubleshoot: Review logs, investigate failed jobs, and correct data discrepancies while supporting daily process monitoring including production processes and application performance
  • Maintain data quality: Execute routine data quality checks, maintain documentation, and follow up on accuracy concerns to ensure reliable data across systems
  • Support database operations: Work with data management tasks in systems such as PostgreSQL, Oracle, and cloud-based databases while learning healthcare data formats
  • Collaborate cross-functionally: Partner with Data Analysts, Developers, and business users to understand data needs and support ongoing reporting and data operations
  • Continue learning: Participate in team meetings, sprint activities, and knowledge-sharing sessions while working with senior team members to develop data engineering skills
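
The data-quality responsibilities above (routine checks, validation, accuracy follow-up) can be sketched as a small pure-Python validator. The claim-record fields here are hypothetical, not Personify Health's actual schema:

```python
from datetime import date

# Hypothetical claim-record layout; real field names would come from the
# healthcare/TPA pipelines this role supports.
REQUIRED = ("claim_id", "member_id", "amount", "service_date")

def validate_claim(rec: dict) -> list:
    """Return a list of data-quality issues found in one claim record."""
    issues = []
    for field in REQUIRED:
        if rec.get(field) in (None, ""):
            issues.append(f"missing {field}")
    amount = rec.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative amount")
    svc = rec.get("service_date")
    if isinstance(svc, date) and svc > date.today():
        issues.append("service_date in the future")
    return issues

batch = [
    {"claim_id": "C1", "member_id": "M9", "amount": 120.0, "service_date": date(2024, 1, 5)},
    {"claim_id": "C2", "member_id": "", "amount": -5.0, "service_date": date(2024, 2, 1)},
]
report = {rec["claim_id"]: validate_claim(rec) for rec in batch}
print(report)  # C1 passes; C2 flags the missing member_id and negative amount
```

In practice a check like this would run inside the ETL/ELT pipeline and feed the support-ticket queue the role spends roughly half its time on.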
What we offer:
  • Comprehensive medical and dental coverage through our own health solutions
  • Mental health support and wellness programs designed by experts who get it
  • Flexible work arrangements that fit your life
  • Retirement planning support to help you build real wealth for the future
  • Basic Life and AD&D Insurance plus Short-Term and Long-Term Disability protection
  • Employee savings programs and voluntary benefits like Critical Illness and Hospital Indemnity coverage
  • Professional development opportunities and clear career progression paths
  • Mentorship from industry leaders who want to see you succeed
  • Learning budget to invest in skills that matter to your future
  • Unlimited PTO policy

Celonis Technical Lead

Sopra Steria, a leading tech company in Europe, is hiring a Celonis Technical Le...
Location:
India, Noida
Salary:
Not provided
Sopra Steria
Expiration Date:
Until further notice
Requirements:
  • Sound knowledge and experience of Process Mining using Celonis
  • Strong experience in programming, preferably Vertica SQL/PQL and Python
  • Experience handling large datasets
  • Working knowledge of data models and data structures
  • Technical expertise with data mining
  • Experience with Time Series Data
  • Ability to codify processes into step-by-step linear commands
  • Experience with data visualization tools such as Power BI and Tableau
  • Professional experience writing performant SQL queries and improving existing code
  • Experience working with relational and non-relational databases
Job Responsibility:
  • Implementation projects for clients from various industries that process data at various levels of complexity
  • Translate complex functional and technical requirements into data models
  • Setup of process data extractions including table and field mappings
  • Estimating and modeling memory requirements for data processing
  • Prepare and connect to On-premise/Cloud source system, extract and transform customer data, and develop process- and customer-specific studies
  • Solicit requirements for business process mining models, including what data they will use and how the organisation will use them once built
  • Build accurate, reliable, and informative business process mining models that enable the company to scale more quickly
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from disparate data sources
  • Apply analytics and modelling to own and actively drive process improvement projects and initiatives within the relevant function
  • Maintain familiarity with the Celonis platform and write documentation on its technical procedures and processes
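
The modelling responsibilities above all revolve around event logs. As a rough illustration of the kind of result Celonis computes, here is per-case variant and cycle-time extraction over a hypothetical purchase-to-pay event log, written in plain Python rather than PQL:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical purchase-to-pay event log: (case_id, activity, timestamp).
# Celonis derives variants and throughput times from logs like this; the
# same idea is sketched here in plain Python instead of PQL.
log = [
    ("PO-1", "Create PO", datetime(2024, 1, 1, 9)),
    ("PO-1", "Approve PO", datetime(2024, 1, 2, 9)),
    ("PO-1", "Pay Invoice", datetime(2024, 1, 5, 9)),
    ("PO-2", "Create PO", datetime(2024, 1, 3, 9)),
    ("PO-2", "Pay Invoice", datetime(2024, 1, 4, 9)),
]

# Group events by case, in timestamp order.
cases = defaultdict(list)
for case_id, activity, ts in sorted(log, key=lambda e: e[2]):
    cases[case_id].append((activity, ts))

# Variant = ordered sequence of activities; cycle time = last minus first event.
variants = {cid: tuple(a for a, _ in events) for cid, events in cases.items()}
cycle_days = {cid: (events[-1][1] - events[0][1]).days for cid, events in cases.items()}
print(variants["PO-2"], cycle_days["PO-1"])  # ('Create PO', 'Pay Invoice') 4
```

Note that PO-2 skips the approval step: surfacing such deviating variants, and the cycle-time cost attached to each, is the core of a process-mining study.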
What we offer:
  • Inclusive and respectful work environment
  • Open to people with disabilities

Analytics Engineer

We are seeking an experienced and versatile Analytics Engineer to join our dynam...
Location:
Canada
Salary:
81,990.00 - 91,100.00 CAD / Year
Tucows
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience in data analytics or a related field, with significant exposure to AI and Machine Learning applications in analytics
  • Advanced SQL skills with experience in writing and optimizing complex queries on large-scale datasets
  • Hands-on experience with dbt (Data Build Tool) and its features for building, testing, and documenting data models
  • Expert-level knowledge of data modeling and data warehouse concepts (e.g., star schema, normalization, slowly changing dimensions)
  • Experience with Snowflake's Data Cloud platform and familiarity with its advanced AI capabilities (Snowflake Intelligence – Cortex Analyst, Cortex Agents, Cortex Search, AISQL, etc.) is highly preferred
  • Strong skills in Looker data visualization and LookML (including familiarity with Looker's conversational AI and data agent capabilities) or similar BI tools
  • Experience with AI agents or generative AI tools to optimize workflows and service delivery (such as creating chatbots or automated analytic assistants) is a plus
  • Experience with real-time data processing and streaming technologies (e.g., Kafka, Kinesis, Spark Streaming) for handling continuous data flows
  • Proficient in Python for data analysis and manipulation (pandas, NumPy, etc.), with the ability to write clean, efficient code. Experienced with shell scripting and command-line tools for automating workflows and data processing tasks
  • Familiarity with ETL processes and workflow orchestration tools like Apache Airflow (or similar scheduling tools) for automating data pipelines alongside Docker for local development and testing
Job Responsibility:
  • Design, develop, and maintain complex data models in our Snowflake data warehouse. Utilize dbt (Data Build Tool) to create efficient data pipelines and transformations for our data platform
  • Leverage Snowflake Intelligence features (e.g., Cortex Analyst, Cortex Agents, Cortex Search, AISQL) to implement conversational data queries and AI-driven insights directly within our data environment. Develop AI solutions that harness these capabilities to extract valuable business insights
  • Design and build advanced SQL queries to retrieve and manipulate complex data sets. Dive deep into large datasets to uncover patterns, trends, and opportunities that inform strategic decision-making
  • Develop, maintain, and optimize Looker dashboards and LookML to effectively communicate data insights. Leverage Looker's conversational analytics and data agent features to enable stakeholders to interact with data using natural language queries
  • Communicate effectively with stakeholders to understand business requirements and deliver data-driven solutions. Identify opportunities for implementing AI/ML/NLP technologies in collaboration with product, engineering, and business teams
  • Write efficient Python code for data analysis, data processing, and automation of recurring tasks. Skilled in shell scripting and command-line tools to support data workflows and system tasks. Ensure code is well-tested and integrated into automated workflows (e.g., via Airflow job scheduling)
  • Create compelling visualizations and presentations to deliver analytical insights and actionable recommendations to senior management and cross-functional teams. Tailor communication of complex analyses to diverse audiences
  • Stay up-to-date with industry trends, emerging tools, and best practices in data engineering and analytics (with a focus on dbt features, Snowflake's latest offerings and BI innovations). Develop and implement innovative ideas to continuously improve our analytics stack and practices
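
The star-schema knowledge asked for above can be illustrated with a toy fact/dimension join. This sketch uses Python's built-in sqlite3 rather than Snowflake, and the table and column names are invented:

```python
import sqlite3

# Toy star schema: one fact table keyed to one dimension. In production
# this would live in Snowflake with models managed in dbt, but the join
# pattern is the same plain SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_sales (customer_key INTEGER, amount REAL);
    INSERT INTO dim_customer VALUES (1, 'NA'), (2, 'EU');
    INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")

# Roll revenue up from the fact grain to a dimension attribute.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_customer d USING (customer_key)
    GROUP BY d.region
    ORDER BY d.region
""").fetchall()
print(rows)  # [('EU', 75.0), ('NA', 150.0)]
```

Keeping measures in the fact table and descriptive attributes in dimensions is what lets BI tools like Looker aggregate along any attribute without rewriting the query.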
What we offer:
  • Fair compensation
  • Generous benefits