Data Security Engineer, Mid

Booz Allen Hamilton

Location:
United States, Fort Meade

Contract Type:
Not provided

Salary:
69400.00 - 158000.00 USD / Year

Job Description:

Architect, deploy, and configure data security solutions for DoD, IC, and civilian federal clients. Create new architectures that meet client requirements while adhering to Zero Trust best practices and IC data header guidelines. Interface with key stakeholders, including agency personnel and internal delivery and engineering teams. Assist in building custom policy to ensure positive control of data across hybrid cloud environments.

Job Responsibility:

  • Architect, deploy, and configure data security solutions for DoD, IC, and civilian federal clients
  • Create new architectures that meet client requirements while adhering to Zero Trust best practices and IC data header guidelines
  • Interface with key stakeholders, including agency personnel and internal delivery and engineering teams
  • Assist in building custom policy to ensure positive control of data across hybrid cloud environments

Requirements:

  • Experience in a professional work environment
  • Knowledge of Data Loss Prevention concepts and capabilities
  • Knowledge of Data Protection and Data Security concepts
  • Knowledge of policy creation, deployment, and troubleshooting
  • Knowledge of container services such as Kubernetes
  • Knowledge of documenting and diagramming technical architectures
  • Demonstrated desire to learn new capabilities across the data protection market
  • Secret clearance
  • HS diploma or GED

Nice to have:

  • Knowledge of Trusted Data Format (TDF) and Attribute-Based Access Control (ABAC) concepts
  • Knowledge of Virtru Data Security Platform (DSP) or identity solutions such as Keycloak
  • Knowledge of federal, DoD, or IC environments or compliance frameworks
  • Bachelor’s degree in IT, Cybersecurity, Engineering, or a related field

What we offer:
  • Health, life, disability, financial, and retirement benefits
  • Paid leave
  • Professional development
  • Tuition assistance
  • Work-life programs
  • Dependent care
  • Recognition awards program

Additional Information:

Job Posted:
March 21, 2026

Work Type:
Hybrid work

Similar Jobs for Data Security Engineer, Mid

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location:
India, Hyderabad
Salary:
Not provided
Appen
Expiration Date:
Until further notice
Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python, and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
Work Type: Fulltime

Data Engineer - II

The Data Engineer will design, develop, and maintain scalable data pipelines and...
Location:
India, Pune
Salary:
Not provided
Atica Global
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience
  • 3-5 years of experience in data engineering or a similar mid-level role
  • Proficiency in Python and SQL
  • Experience with Java is a plus
  • Hands-on experience with AWS, Airbyte, DBT, PostgreSQL, MongoDB, Airflow, and Spark
  • Familiarity with data storage solutions such as PostgreSQL, MongoDB
  • Experience with BigQuery (setup, management and scaling)
  • Strong understanding of data modeling, ETL/ELT processes, and database systems
  • Experience with data extraction, batch processing and data warehousing
  • Excellent problem-solving skills and a keen attention to detail
Job Responsibility:
  • Design, develop, and maintain scalable data pipelines and ETL/ELT processes using tools like Airflow, Airbyte and PySpark
  • Collaborate with software engineers and analysts to ensure data availability and integrity for various applications
  • Design and implement robust data pipelines to extract, transform, and load (ETL) data from various sources
  • Utilize Airflow for orchestrating complex workflows and managing data pipelines
  • Implement batch processing techniques using Airflow/PySpark to handle large volumes of data efficiently
  • Develop ELT processes to optimize data extraction and transformation within the target data warehouse
  • Leverage AWS services (e.g., S3, RDS, Lambda) for data storage, processing, and orchestration
  • Ensure data security, reliability, and performance when utilizing AWS resources
  • Work closely with developers, analysts, and other stakeholders to understand data requirements and provide the necessary data infrastructure
  • Assist in troubleshooting and optimizing existing data workflows and queries
What we offer:
  • Competitive salary and benefits package
  • Comprehensive Health Care benefits (best in the country, includes IPD+OPD, covers Employee, Spouse and two children)
  • Growth and advancement opportunities within a rapidly expanding company
Work Type: Fulltime

Network Engineer Mid Level

Location:
United States, Annapolis Junction
Salary:
93000.00 - 150000.00 USD / Year
ELEVI Associates
Expiration Date:
Until further notice
Requirements:
  • Willing to work in the Annapolis Junction, MD area
  • Current or active security clearance with a polygraph
  • 19+ years of demonstrated experience in planning and leading Systems Engineering efforts
  • Bachelor's degree in System Engineering, Computer Science, Information Systems, Engineering Science, Engineering Management, or related discipline from an accredited college or university (Five (5) years of additional SE experience may be substituted for a Bachelor's degree)
  • IAT Level 2
  • CompTIA Security+ CE
  • Excellent communication and interpersonal skills, with a customer-focused approach to service delivery
  • Experience with data replication, backup, and disaster recovery strategies in both cloud and on-premises environments
  • Expertise in migrating data to Microsoft 365 (M365) and OneDrive, with a solid understanding of cloud migration methodologies
  • In-depth knowledge of NetApp storage systems, including configuration, administration, and performance tuning
Job Responsibility:
  • Participate in daily network and operational support
  • Lead the migration of data to Microsoft 365 (M365) and OneDrive, ensuring seamless integration and minimal disruption to business operations
  • Design and implement strategies for migrating data from on-premises NetApp storage to M365 and OneDrive, adhering to best practices and security standards
  • Manage and maintain global NetApp storage solutions, including configuration, optimization, and troubleshooting
  • Monitor storage performance and capacity utilization, implementing proactive measures to ensure scalability and reliability
  • Collaborate with cross-functional teams to gather requirements, define project scopes, and deliver storage solutions aligned with business objectives
  • Provide technical expertise and support for storage-related issues, including incident resolution and root cause analysis
  • Develop and maintain documentation, including system configurations, procedures, and operational guidelines
  • Implement and enforce data management policies, ensuring compliance with regulatory requirements and internal standards
  • Stay current with industry trends and emerging technologies related to storage and cloud computing, evaluating and recommending enhancements as appropriate
What we offer:
  • Healthcare
  • Wellness
  • Financial
  • Retirement
  • Family support
  • Continuing education
  • Time off benefits
  • Competitive compensation
  • Learning and development opportunities
  • Work/life benefits
Work Type: Fulltime

Principal Data Engineer

The Principal Data Engineer will lead the design, development, and optimization ...
Location:
United States, Washington DC; Philadelphia PA; Wilmington DE
Salary:
113200.00 - 146664.00 USD / Year
AMTRAK
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s Degree or equivalent combination of education, training, and/or relevant experience
  • 7+ years of relevant work experience
  • Proficiency and hands-on experience designing and implementing end-to-end data solutions in AWS (e.g., S3, EMR, Glue, Redshift, Kinesis) and/or Azure (e.g., Azure Data Factory, Synapse Analytics, Azure Data Lake Storage)
  • Experience with some of the following technologies: Databricks, Python, Apache Spark, IDMC, Talend, CI/CD pipelines, Jenkins, GitLab, Power BI, SQL, and NoSQL
Job Responsibility:
  • Architect and Design: Define and implement the architectural strategy for our enterprise data platform, focusing on scalability, security, and performance. Design and build robust, high-volume, and performant data pipelines using cloud-native services like AWS Data Pipelines (Glue, EMR, S3, Redshift, etc.) or Azure Data Factory/Synapse Analytics
  • Technical Leadership: Act as a subject matter expert and mentor for junior and mid-level data engineers, setting best practices for code quality, testing, and deployment
  • Data Governance & Management: Utilize tools like IDMC (Informatica Data Management Cloud) for data integration, quality, governance, and cataloging across the enterprise
  • Data Processing and Analysis: Develop, optimize, and manage large-scale data processing jobs using Databricks (Spark/Delta Lake) for ETL/ELT workflows and advanced analytics
  • Coding and Scripting: Write high-quality, efficient, and well-documented code primarily in Python for data manipulation, automation, and pipeline orchestration
  • Deployment and Automation: Implement and maintain robust CI/CD pipelines and infrastructure-as-code (e.g., Terraform/CloudFormation) for automated deployment and management of data solutions
  • Business Intelligence: Ensure data readiness for reporting and analytics and possess working knowledge/experience with BI tools like Tableau and PowerBI to facilitate data consumption
  • Performance and Optimization: Monitor, troubleshoot, and tune existing data infrastructure and pipelines to ensure optimal performance and cost efficiency
What we offer:
  • Health, dental, and vision plans
  • Health savings accounts
  • Wellness programs
  • Flexible spending accounts
  • 401K retirement plan with employer match
  • Life insurance
  • Short- and long-term disability insurance
  • Paid time off
  • Back-up care
  • Adoption assistance
Work Type: Fulltime

Principal Software Engineer, Trusted Data Platform

As a Principal Software Engineer, you will be a technical leader and hands-on co...
Location:
India, Bangalore
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical field
  • 10+ years of experience in backend software development, focusing on distributed systems and storage solutions
  • 5+ years of experience working with AWS storage services (S3, DynamoDB, EBS, EFS, FSx, Glacier)
  • Strong expertise in system design, architecture, and scalability for large-scale storage solutions
  • Proficiency in at least one major backend programming language (Kotlin, Java, Go, Rust, or Python)
  • Experience designing and implementing highly available, fault-tolerant, and cost-efficient storage architectures
  • Deep understanding of distributed systems, replication strategies, sharding, and caching
  • Knowledge of data security, encryption best practices, and compliance requirements (SOC2, GDPR, HIPAA)
  • Experience leading engineering teams, mentoring senior engineers, and driving technical roadmaps
  • Proficiency with observability tools, performance monitoring, and troubleshooting at scale
Job Responsibility:
  • Designing and optimizing high-scale, distributed storage systems built on AWS storage technologies
  • Shaping the architecture, performance, and reliability of backend storage solutions that power critical applications at scale
  • Designing, implementing, and optimizing backend storage services that support high throughput, low latency, and fault tolerance
  • Working closely with senior engineers, architects, and cross-functional teams to drive scalability, availability, and efficiency improvements in large-scale storage solutions
  • Leading technical deep dives, architecture reviews, and root cause analyses to resolve complex production issues related to storage performance, consistency, and durability
  • Driving best practices in distributed system design, security, and cloud cost optimization
  • Mentoring senior engineers, contributing to technical roadmaps, and helping shape the long-term storage strategy
  • Collaborating with Site Reliability Engineers (SREs) to implement observability, monitoring, and disaster recovery strategies, ensuring high availability and compliance with industry standards
  • Advocating for automation, Infrastructure-as-Code (IaC), and DevOps best practices, leveraging tools like Terraform, AWS CloudFormation, Kubernetes (EKS), and CI/CD pipelines to enable scalable deployments and operational excellence
What we offer:
  • Atlassians can choose where they work – whether in an office, from home, or a combination of the two
  • Atlassians have more control over supporting their family, personal goals, and other priorities
  • We can hire people in any country where we have a legal entity
  • Interviews and onboarding are conducted virtually
  • Whatever your preference - working from home, an office, or in between - you can choose the place that's best for your work and your lifestyle

Senior Data Engineer

Inetum Polska is part of the global Inetum Group and plays a key role in driving...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Expert-level proficiency in Databricks, Apache Spark, SQL, and Python/Scala
  • Extensive experience with cloud data platforms (AWS, Azure, or GCP) and big data technologies
  • Strong understanding of data architecture, data warehousing, and lakehouse concepts
  • Experience with real-time data processing and streaming technologies (Kafka, Delta Live Tables, Event Hubs)
  • Proficiency in automation, CI/CD, and Infrastructure as Code (Terraform, Bicep)
  • Leadership skills with the ability to drive strategic technical decisions
  • 6+ years of experience in data engineering, with a track record of designing and implementing complex data solutions
Job Responsibility:
  • Architect and implement enterprise-grade data solutions using Databricks, Apache Spark, and cloud services
  • Lead data engineering initiatives, setting best practices and guiding technical decisions
  • Design, optimize, and scale data pipelines for performance, reliability, and cost efficiency
  • Define data governance policies and implement security best practices
  • Evaluate emerging technologies and recommend improvements to existing data infrastructure
  • Mentor junior and mid-level engineers, fostering a culture of continuous learning
  • Collaborate with cross-functional teams to align data strategy with business objectives
What we offer:
  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses
  • Additional revenue sharing opportunities
  • Ongoing guidance from a dedicated Team Manager
  • Tailored technical mentoring
  • Dedicated team-building budget
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture
Work Type: Fulltime

Mid Software Engineer

Our Mid Software Engineers don’t shy away from tackling the greatest of obstacle...
Location:
Philippines, Makati City
Salary:
Not provided
LawAdvisor Ventures Ltd.
Expiration Date:
Until further notice
Requirements:
  • 2+ years of experience
  • Extensive professional development, documentation, and maintenance experience across the full software development lifecycle
  • Strong written, verbal and interpersonal skills
  • The ability to steer the development of our team members and continuously bring forward new ideas
  • The ability to learn and become proficient in other programming languages in a short span of time
  • Experience working with HTML, CSS, and JavaScript
  • Experience working with relational databases and efficient data design and access
  • Experience with version control systems (Git)
Job Responsibility:
  • Work alongside a team of talented engineers, designers, product managers and quality assurance specialists to build and deploy exciting features and beautiful products
  • Know your fundamentals: you’re able to not just understand all the phases of the software development lifecycle but contribute to it to keep moving it forward
  • Uphold and refine standards relating to the quality, design, performance, and security of code across the engineering team
  • Design and create: you write well-thought, efficient, production-ready code
  • Become a leader in our product development team by partaking in solution design meetings, hiring interviews, and code reviews
  • Propose and contribute to new approaches and solutions to ensure we future-proof LawAdvisor’s infrastructure as we continue to scale globally
What we offer:
  • A highly skilled, driven, and dedicated team
  • Remote work opportunities
  • Continuous learning and development
  • Company gatherings
  • A direct line with our key users, and influential high-level stakeholders

Mid Software Engineer

Flanks is shaking up the wealth management industry by making it simpler and way...
Location:
Spain, Barcelona
Salary:
36000.00 - 50000.00 EUR / Year
Flanks
Expiration Date:
Until further notice
Requirements:
  • You are a nice person
  • Autonomous coder
  • You know your way around collaborating with others using standard tooling (Git, GitHub PRs, etc.)
  • You are a good communicator who knows how to express problems, solutions and trade-offs
  • You know how to read and understand job offers
  • Focused
  • You can work with both legacy and greenfield code
  • You take ownership of problems
  • You live in Barcelona City and/or close enough to come to the office a few times a month
  • Living in Spain is a mandatory and non-negotiable requirement
Job Responsibility:
  • Build seamless user interfaces for secure credential storage
  • Handle sensitive financial data with performance, compliance, and traceability in mind
  • Scale our ingestion system to fetch more data, faster
  • Mentor and grow the team, ensuring alignment and consistency as we expand
  • Coding, collaborating, and delivering impactful solutions, and owning them beyond deployment
What we offer:
  • A cool office between Sants Estació and Plaça Espanya with stunning views of Barcelona
  • Flexible working hours and hybrid work options
  • Paid day off on your birthday
  • Weekly fresh fruit, coffee, and tea on tap
  • Friday happy hours after our all-hands meetings
  • Team-building events to bond and have fun
  • Health insurance and flexible compensation with Alan
  • A digital canteen, thanks to Nora Real Food, subsidised at 50%
  • A yearly training budget to keep growing
Work Type: Fulltime