
Senior Security Engineer – Cloud & Data Security


Sigma Computing

Location:
United States, New York


Contract Type:
Not provided


Salary:

210,000 - 240,000 USD / Year

Job Description:

We are hiring a senior, hands-on Cloud Security Engineer to secure a large-scale, cloud-native SaaS platform. This is an engineering-first role for someone who builds security solutions, not just manages tools. You will be the subject-matter expert (SME) for cloud security architecture across platform, IAM, network, workload, data, and AI enablement, and partner with Engineering, Security, and Product to implement scalable controls that support business growth. You’ll design secure architectures, embed controls into infrastructure-as-code, and build automated guardrails so teams can move fast without waiting on manual security approvals.

Job Responsibility:

  • Architectural Leadership: Partner deeply with infrastructure and engineering teams to embed security into development workflows, leading high-level technical discussions to guide security efforts and strategic priorities
  • Multi-Cloud Engineering: Design, implement, and continuously improve Sigma Cloud Security across AWS, GCP, and Azure environments with architect-level technical depth
  • Threat Modeling & IR: Conduct cloud threat modeling and demonstrate hands-on experience in Cloud Incident Response, including investigating and remediating malicious activity within cloud environments
  • Identity & Access: Build IAM and privileged access strategy (RBAC/ABAC, federation, least privilege, cross-account access), eliminating standing privilege and long-lived credentials. Develop and enforce IAM best practices, including zero-trust models and privileged access controls across IaaS and SaaS
  • Drive cloud data security controls including classification, encryption/KMS, masking/tokenization, access governance, retention/deletion, and exfiltration risk reduction across APIs and data pipelines
  • Develop automated remediation workflows for recurring cloud misconfigurations, drift, and policy violations to reduce manual effort and response time
  • Security Stack Management: Deploy and manage cloud-native services (CSPM, CNAPP, DSPM, SIEM, DLP, WAF, Kubernetes, and container security)
  • Network Defense: Review and apply zero-trust principles through strict network segmentation, authentication, and authorization
  • Automation: Develop sophisticated signatures/rules for cloud security and automate detection and response workflows
  • AI: Use AI securely and effectively to scale security practices and improve team efficiency
  • Continuous Evolution: Stay ahead of threats by leveraging intelligence, attack simulation, and red/blue team learnings
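As a rough illustration of the automated-guardrail responsibilities above (catching recurring misconfigurations before they need manual review), the core pattern can be sketched in a few lines of Python. The resource shapes, rule names, and findings below are hypothetical stand-ins, not Sigma's actual stack:

```python
# Hypothetical guardrail sketch: scan declarative resource descriptions for
# common cloud misconfigurations. In practice this logic would run against
# CSPM findings or parsed infrastructure-as-code, not hand-built dicts.

def check_resource(resource: dict) -> list[str]:
    """Return a list of policy violations for one resource description."""
    findings = []
    if resource.get("type") == "storage_bucket":
        if resource.get("public_access", False):
            findings.append("bucket allows public access")
        if not resource.get("encrypted", False):
            findings.append("bucket is not encrypted at rest")
    if resource.get("type") == "iam_policy":
        for stmt in resource.get("statements", []):
            if stmt.get("action") == "*" and stmt.get("effect") == "Allow":
                findings.append("IAM statement allows wildcard actions")
    return findings

resources = [
    {"name": "logs", "type": "storage_bucket", "public_access": True, "encrypted": True},
    {"name": "admin", "type": "iam_policy",
     "statements": [{"effect": "Allow", "action": "*", "resource": "*"}]},
]

# Map each resource name to its violations; an empty list means compliant.
report = {r["name"]: check_resource(r) for r in resources}
```

A real remediation workflow would then act on each finding (e.g., apply a block-public-access setting) rather than only report it.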

Requirements:

  • 7+ years in security roles, with at least 5 years focused on cloud security engineering, IAM, and data security
  • Bachelor’s or Master’s degree in Computer Science, Cyber Security, or a related field
  • Deep technical expertise in cloud architectures (AWS/Azure/GCP), including IAM, networking (VPCs, security groups, PrivateLink), and native security services, is strongly desired
  • Strong infrastructure-as-code skills—you write Terraform professionally, not just read it
  • Advanced understanding and experience with container security, Kubernetes, and secure CI/CD pipeline design
  • Proven incident response experience specifically related to cloud-based malicious activity and breach remediation
  • Advanced Cloud IAM expertise: federation, SSO, PAM/JIT access, service identities, and least privilege design
  • Strong background in cloud network security (segmentation, private connectivity, egress controls, WAF)
  • Strong proficiency in scripting languages (e.g., Python, Go, PowerShell) for automation, data analysis, and security tooling development
  • Strong knowledge of security platforms such as CNAPP (Wiz), WAF (Cloudflare), SASE (Netskope)
  • Demonstrated ability to lead cloud/SaaS architecture reviews and influence senior engineering stakeholders

Nice to have:

  • Experience securing data platforms (Snowflake, Databricks, BigQuery, etc.)
  • Experience in high-growth SaaS or data platform organizations
  • Prior experience in platform engineering, DevSecOps, or similar
  • Certifications (preferred): professional-level cloud certifications, such as AWS Certified Security – Specialty or AWS Solutions Architect – Professional
  • GCP: Professional Cloud Security Engineer or Professional Cloud Architect
  • Azure: AZ-500 (Security Technologies) or AZ-305 (Solutions Architect)

What we offer:
  • Equity
  • Generous health benefits
  • Flexible time off policy
  • Paid bonding time for all new parents
  • Traditional and Roth 401k
  • Commuter and FSA benefits
  • Lunch Program
  • Dog friendly office

Additional Information:

Job Posted:
February 20, 2026

Employment Type:
Full-time

Work Type:
On-site work

Similar Jobs for Senior Security Engineer – Cloud & Data Security

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timeline
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
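To illustrate the windowed anomaly-detection idea behind the real-time tracking bullets above, here is a minimal pure-Python sketch. The window size, event shapes, and threshold are assumptions for illustration; a production pipeline would implement the same logic in Kafka plus Flink or Spark, as the listing names:

```python
# Hypothetical sketch: group shipment-tracking events into tumbling time
# windows and flag windows with unusually few distinct shipments reporting.
from collections import defaultdict

WINDOW_SECONDS = 60  # tumbling window width (assumed)

def bucket_events(events, window=WINDOW_SECONDS):
    """Group (timestamp, shipment_id) events into tumbling windows."""
    windows = defaultdict(set)
    for ts, shipment_id in events:
        windows[ts // window].add(shipment_id)
    return windows

def quiet_windows(windows, expected_min=2):
    """Flag windows where fewer distinct shipments reported than expected."""
    return [w for w, ids in sorted(windows.items()) if len(ids) < expected_min]

events = [(5, "A"), (12, "B"), (70, "A"), (130, "A"), (140, "B"), (150, "C")]
w = bucket_events(events)
alerts = quiet_windows(w)  # windows where tracking traffic dropped off
```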
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Full-time

Senior Security GRC Engineer

The Senior Security GRC Engineer at Atlassian will be instrumental in implementi...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date
Until further notice
Requirements:
  • 5-7+ years experience in a similar role, preferably in a large-scale SaaS/Product environment
  • Expertise and experience working in security-focused roles
  • Experience with application security, especially web applications
  • Experience in cloud security architecture and infrastructure
  • Experience providing SME knowledge and guidance to stakeholders and engineering functions
  • Experience working with internal/external audit and leadership teams
  • Solid knowledge of cybersecurity principles, risk management strategies, and IT governance frameworks
  • Strong communication and interpersonal skills, with the ability to interact with stakeholders at all levels and explain complex security concepts in an understandable way
  • Relevant certifications such as CISSP, CISM, or CRISC would be beneficial
  • Scripting experience to automate recurring tasks (JQL, SQL, Python, Go)
Job Responsibility:
  • Deliver technical expertise and innovation, providing security guidance to teams and promoting the adoption of industry-leading methodologies to build secure products by default
  • Drive technical solutions in security and risk management
  • Leverage data analytics and visualization, deriving actionable insights from security governance, risk, and compliance data
  • Promote automation and tooling, encouraging the use of the latest security tools to enhance product security processes
  • Proactively identify and mitigate risks, recognizing potential security threats or compliance concerns specific to product security
  • Collaborate with product security teams, implementing security controls and best practices
  • Regularly evaluate and report, assessing the effectiveness of security controls
  • Influence and align stakeholders, working with security engineers and stakeholders to drive alignment on security initiatives
  • Stay informed on regulatory awareness and compliance, keeping up with the latest developments in legislative, regulatory, and industry security requirements
What we offer:
  • Health coverage
  • Paid volunteer days
  • Wellness resources
  • Full-time

Principal Software Engineer – Cloud Security

Principal Software Engineer – Cloud Security role at Hewlett Packard Enterprise,...
Location:
Israel, Tel Aviv
Salary:
Not provided
Hewlett Packard Enterprise
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a closely related quantitative discipline
  • Typically 10-15 years’ experience
  • Deep expertise in software systems design, development methodologies, and integration across diverse platforms and technologies
  • Strong business acumen, focusing on aligning technological initiatives with business goals and driving sustainable growth and profitability
  • Exceptional analytical and problem-solving skills, with the ability to navigate complex technical challenges and drive impactful solutions
  • Track record of driving technological innovation, with a portfolio of patents and successful product deployments
  • Exceptional communication and stakeholder management skills, with the ability to effectively convey complex technical concepts to non-technical audiences and influence decision-making at the executive level
Job Responsibility:
  • Leads the identification, evaluation, and adoption of cutting-edge technologies, innovations, and strategic partnerships to drive growth and competitiveness
  • Drives the development and implementation of robust methodologies, standards, and best practices for software systems design, development, and integration
  • Leverages recognized domain expertise and experience to influence decisions
  • Collaborates with executive leadership to align technology initiatives with business objectives, ensuring technology investments deliver measurable value and impact
  • Champions a culture of continuous innovation, thought leadership, and excellence in software systems design, and helps build the technical community
  • Provides strategic guidance and mentorship to senior technical teams, fostering a culture of collaboration, creativity, and high-performance outcomes
  • Analyzes science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion
  • Full-time

Senior Cloud Data Architect

As a Senior Cloud Architect, your role will focus on supporting users, collabora...
Location:
Spain, Barcelona
Salary:
Not provided
Allianz
Expiration Date
Until further notice
Requirements:
  • Strong expertise in Azure cloud infrastructure, Data & AI technologies, and data platform management, with proficiency in Azure Synapse Analytics, Azure Machine Learning, Azure Data Lake, and Informatica Intelligent Data Management Cloud (IDMC)
  • Proven experience in modern Data Warehouse architectures (e.g., Lakehouse) and integrating machine learning models and AI capabilities using Azure services like Cognitive Services and Azure Bot Service for predictive analytics and automation
  • In-depth knowledge of data security and compliance practices using Azure AD, Azure Key Vault, and Informatica’s data governance tools, focusing on data privacy and regulatory standards
  • Expertise in optimizing resource usage, performance, and costs across Azure services and IDMC, leveraging tools like Azure Cost Management and Azure Monitor, and skilled in ETL/ELT tools and advanced SQL
  • Proficiency in data integration, machine learning, and generative AI from an architectural perspective, with hands-on experience in Python, SQL, Spark/Scala/PySpark, and container solutions like Docker and Kubernetes
  • Experience with CI/CD pipelines (e.g., GitHub Actions, Jenkins), microservices architectures, and APIs, with knowledge of architecture frameworks like TOGAF or Zachman, adept at managing multiple priorities in fast-paced environments, and excellent communication and presentation skills
  • Over 5 years of experience in cloud architecture focusing on Data & AI infrastructure, particularly in Azure, with expertise in building scalable, secure, and cost-effective solutions for data analytics and AI/ML environments.
Job Responsibility:
  • Define and prioritize new functional and non-functional capabilities for the cloud-based data platform, ensuring alignment with business needs and Allianz's security, compliance, privacy, and architecture standards
  • Act as the platform SME for both potential and existing users, guiding them in the architecture of scalable, high-performance Data & AI solutions
  • Provide leadership and product guidance to engineering teams during the design, development, and implementation of new platform capabilities
  • Ensure all solutions meet defined quality standards and acceptance criteria
  • Work with stakeholders to co-create data solutions, optimizing business models and identifying opportunities for improved data usage
  • Lead the evaluation and selection of technologies and partners to implement data analytics use cases, focusing on proofs of concept and prototypes
  • Stay up to date with emerging trends in Data, Analytics, AI/ML, and cloud technologies
  • Leverage open-source technologies and cloud tools to drive innovation and cost-efficiency
  • Prepare materials for management briefings and public events
  • Represent the team in technical discussions, particularly regarding architecture and platform capabilities.
What we offer:
  • Hybrid work model which recognizes the value of striking a balance between in-person collaboration and remote working incl. up to 25 days per year working from abroad
  • Rewarding performance through company bonus scheme, pension, employee shares program, and multiple employee discounts
  • Career development and digital learning programs to international career mobility
  • Flexible working, health and wellbeing offers (including healthcare and parental leave benefits)
  • Support for balancing family and career and helping employees return from career breaks with experience that nothing else can teach.
  • Full-time

Senior Data Engineer

We are looking for a highly skilled Senior Data Engineer to join our team on a l...
Location:
United States, Dallas
Salary:
Not provided
Robert Half
Expiration Date
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related discipline
  • At least 7 years of experience in data engineering
  • Strong background in designing and managing data pipelines
  • Proficiency in tools such as Apache Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and Amazon S3
  • Expertise in programming languages like Python, Scala, or Java for data processing and automation
  • Strong knowledge of both relational and NoSQL databases
  • Experience with Kubernetes-based data engineering and hybrid cloud environments
  • Familiarity with data modeling principles, governance frameworks, and quality assurance processes
  • Excellent problem-solving, analytical, and communication skills
Job Responsibility:
  • Design and implement robust data pipelines and architectures to support data-driven decision-making
  • Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks
  • Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink
  • Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3
  • Establish and enforce data governance, quality, and security protocols
  • Manage complex workflows and processes across hybrid and multi-cloud environments
  • Work with diverse data formats, including Parquet and Avro
  • Troubleshoot and fine-tune distributed data systems
  • Mentor and guide engineers at the beginning of their careers
What we offer:
  • Medical, vision, dental, and life and disability insurance
  • 401(k) plan
  • Free online training
  • Full-time

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum
Expiration Date
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Implement the Lakehouse architecture efficiently on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
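The Medallion responsibilities above can be illustrated with a toy sketch in which plain Python lists stand in for Delta tables. Column names and cleansing rules are hypothetical; a real pipeline would use PySpark and Delta Lake, as the listing specifies:

```python
# Hypothetical Medallion flow: Bronze (raw) -> Silver (cleaned) -> Gold (business).

bronze = [  # raw ingested records, duplicates and bad rows included
    {"order_id": 1, "amount": "100.0", "country": "RO"},
    {"order_id": 1, "amount": "100.0", "country": "RO"},   # duplicate
    {"order_id": 2, "amount": None, "country": "RO"},      # invalid record
    {"order_id": 3, "amount": "50.5", "country": "FR"},
]

# Silver layer: deduplicate on order_id, drop invalid rows, cast types.
seen = set()
silver = []
for r in bronze:
    if r["amount"] is None or r["order_id"] in seen:
        continue
    seen.add(r["order_id"])
    silver.append({**r, "amount": float(r["amount"])})

# Gold layer: business-level aggregate (revenue per country).
gold = {}
for r in silver:
    gold[r["country"]] = gold.get(r["country"], 0.0) + r["amount"]
```

Each layer only reads from the one below it, which is what makes the quality guarantees of Silver and Gold tractable to enforce.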
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings
  • Full-time

Senior Data Engineer

We’re hiring a Senior Data Engineer with strong experience in AWS and Databricks...
Location:
India, Hyderabad
Salary:
Not provided
Appen
Expiration Date
Until further notice
Requirements:
  • 5-7 years of hands-on experience with AWS data engineering technologies, such as Amazon Redshift, AWS Glue, AWS Data Pipeline, Amazon Kinesis, Amazon RDS, and Apache Airflow
  • Hands-on experience working with Databricks, including Delta Lake, Apache Spark (Python or Scala), and Unity Catalog
  • Demonstrated proficiency in SQL and NoSQL databases, ETL tools, and data pipeline workflows
  • Experience with Python and/or Java
  • Deep understanding of data structures, data modeling, and software architecture
  • Strong problem-solving skills and attention to detail
  • Self-motivated and able to work independently, with excellent organizational and multitasking skills
  • Exceptional communication skills, with the ability to explain complex data concepts to non-technical stakeholders
  • Bachelor's Degree in Computer Science, Information Systems, or a related field. A Master's Degree is preferred.
Job Responsibility:
  • Design, build, and manage large-scale data infrastructures using a variety of AWS technologies such as Amazon Redshift, AWS Glue, Amazon Athena, AWS Data Pipeline, Amazon Kinesis, Amazon EMR, and Amazon RDS
  • Design, develop, and maintain scalable data pipelines and architectures on Databricks using tools such as Delta Lake, Unity Catalog, and Apache Spark (Python or Scala), or similar technologies
  • Integrate Databricks with cloud platforms like AWS to ensure smooth and secure data flow across systems
  • Build and automate CI/CD pipelines for deploying, testing, and monitoring Databricks workflows and data jobs
  • Continuously optimize data workflows for performance, reliability, and security, applying Databricks best practices around data governance and quality
  • Ensure the performance, availability, and security of datasets across the organization, utilizing AWS’s robust suite of tools for data management
  • Collaborate with data scientists, software engineers, product managers, and other key stakeholders to develop data-driven solutions and models
  • Translate complex functional and technical requirements into detailed design proposals and implement them
  • Mentor junior and mid-level data engineers, fostering a culture of continuous learning and improvement within the team
  • Identify, troubleshoot, and resolve complex data-related issues
  • Full-time

Senior Data Engineer

At Blue Margin, we are on a mission to build the go-to data platform for PE-back...
Location:
United States, Fort Collins
Salary:
110,000 - 140,000 USD / Year
Blue Margin
Expiration Date
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of professional experience in data engineering, with emphasis on Python & PySpark/Apache Spark
  • Proven ability to manage large datasets and optimize for speed, scalability, and reliability
  • Strong SQL skills and understanding of relational and distributed data systems
  • Experience with Azure Data Factory, Synapse Pipelines, Fivetran, Delta Lake, Microsoft Fabric, or Snowflake
  • Knowledge of data modeling, orchestration, and Delta/Parquet file management best practices
  • Familiarity with CI/CD, version control, and DevOps practices for data pipelines
  • Experience leveraging AI-assisted tools to accelerate engineering workflows
  • Strong communication skills: ability to convey complex technical details to both engineers and business stakeholders
Job Responsibility:
  • Architect, design, and optimize large-scale data pipelines using tools like PySpark, SparkSQL, Delta Lake, and cloud-native tools
  • Drive efficiency in incremental/delta data loading, partitioning, and performance tuning
  • Lead implementations across Azure Synapse, Microsoft Fabric, and/or Snowflake environments
  • Collaborate with stakeholders and analysts to translate business needs into scalable data solutions
  • Evaluate and incorporate AI/automation to improve development speed, testing, and data quality
  • Oversee and mentor junior data engineers, establishing coding standards and best practices
  • Ensure high standards for data quality, security, and governance
  • Participate in solution design for client engagements, balancing technical depth with practical outcomes
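The incremental/delta-loading bullet above follows a common watermark pattern: each run pulls only rows changed since the last recorded high-water mark. The table and column names below are illustrative stand-ins for what would run against Synapse, Fabric, or Snowflake:

```python
# Hypothetical watermark-based incremental load: fetch only rows whose
# updated_at is newer than the stored watermark, then advance the watermark.

source = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 150},
    {"id": 3, "updated_at": 200},
]

def incremental_load(rows, watermark):
    """Return rows changed since `watermark` and the new watermark."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    new_mark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_mark

batch1, mark = incremental_load(source, watermark=120)  # picks up ids 2 and 3
source.append({"id": 4, "updated_at": 250})             # new row arrives
batch2, mark = incremental_load(source, mark)           # picks up only id 4
```

Persisting the watermark between runs (rather than recomputing from the target) is what keeps each load bounded by the volume of new data instead of the full table size.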
What we offer:
  • Competitive pay
  • Strong benefits
  • Flexible hybrid work setup
  • Full-time