
Delta Engineer


Cognition Labs


Location:
United States, San Francisco Bay Area



Contract Type:
Not provided


Salary:
Not provided

Job Description:

Delta Engineers bring technical depth to Cognition’s most complex enterprise engagements. You'll own the technical direction of a focused set of high-impact accounts, tackling the hardest problems that shape long-term adoption, architecture, and how the Cognition platform evolves. This means architecting deployments, building integrations, and debugging what no one else can figure out. A core part of this role is shaping the product. You'll identify patterns from the field, build on the product edge to unblock customers, and translate those learnings into requirements that inform what core engineering builds next. When customers hit friction that the product doesn’t solve, you’re the person who figures out what should exist—and makes it happen.

Job Responsibility:

  • Operate across strategic accounts, providing technical depth on complex engagements
  • Own the technical direction and success of customer accounts and deployments
  • Translate customer pain points into product direction: scope requirements, define solutions, and drive implementation
  • Build product extensions and integrations using Cognition’s APIs and platform primitives
  • Architect and maintain complex enterprise deployments
  • Handle escalated technical issues requiring deep debugging or architectural judgment
  • Be a credible technical voice to engineering on what the field needs

Requirements:

  • Strong engineering foundation
  • Have operated cross-functionally—whether at a startup wearing multiple hats or by driving product/commercial thinking at a larger company
  • Have a track record of pursuing excellence at high levels, in any domain
  • Proficiency in Python, TypeScript, or similar
  • Comfort navigating unfamiliar codebases
  • Experience scoping and shipping features in collaboration with product and engineering teams

Nice to have:

  • Have built software at companies known for engineering rigor and want broader impact
  • Track record in forward-deployed engineering, solutions architecture, or similar technical roles
  • Experience with enterprise infrastructure (cloud deployments, networking, security)
  • Want to stay close to customers while shaping how the product evolves
  • Enjoy hard problems without obvious solutions and the autonomy to figure them out
  • Have a high tolerance for ambiguity, intensity, and sustained effort when the problem demands it

Additional Information:

Job Posted:
January 10, 2026

Employment Type:
Full-time
Work Type:
On-site work



Similar Jobs for Delta Engineer

Databricks Engineer

We are seeking a Databricks Engineer to design, build, and operate a Data & AI p...
Location:
United States, Leesburg
Salary:
Not provided
WINTrio
Expiration Date:
Until further notice
Requirements:
  • Hands-on experience with Databricks, Delta Lake, and Apache Spark
  • Deep understanding of ELT pipeline development, orchestration, and monitoring in cloud-native environments
  • Experience implementing Medallion Architecture (Bronze/Silver/Gold) and working with data versioning and schema enforcement in enterprise-grade environments
  • Strong proficiency in SQL, Python, or Scala for data transformations and workflow logic
  • Proven experience integrating enterprise platforms (e.g., PeopleSoft, Salesforce, D2L) into centralized data platforms
  • Familiarity with data governance, lineage tracking, and metadata management tools
Job Responsibility:
  • Data & AI Platform Engineering (Databricks-Centric): Design, implement, and optimize end-to-end data pipelines on Databricks, following the Medallion Architecture principles
  • Build robust and scalable ETL/ELT pipelines using Apache Spark and Delta Lake to transform raw (bronze) data into trusted curated (silver) and analytics-ready (gold) data layers
  • Operationalize Databricks Workflows for orchestration, dependency management, and pipeline automation
  • Apply schema evolution and data versioning to support agile data development
  • Platform Integration & Data Ingestion: Connect and ingest data from enterprise systems such as PeopleSoft, D2L, and Salesforce using APIs, JDBC, or other integration frameworks
  • Implement connectors and ingestion frameworks that accommodate structured, semi-structured, and unstructured data
  • Design standardized data ingestion processes with automated error handling, retries, and alerting
  • Data Quality, Monitoring, and Governance: Develop data quality checks, validation rules, and anomaly detection mechanisms to ensure data integrity across all layers
  • Integrate monitoring and observability tools (e.g., Databricks metrics, Grafana) to track ETL performance, latency, and failures
  • Implement Unity Catalog or equivalent tools for centralized metadata management, data lineage, and governance policy enforcement
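
For illustration only (not part of the posting), a minimal PySpark sketch of the Bronze-to-Silver promotion described in the responsibilities above might look like the following; the catalog, table names, and column names are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical Bronze -> Silver promotion in a Medallion layout.
# The catalog, table names, and column names are illustrative assumptions.
spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Read the raw (Bronze) Delta table as ingested from source systems.
bronze = spark.read.table("lakehouse.bronze.orders_raw")

# Cleanse and conform: drop duplicates, enforce types, filter bad records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

# Publish the curated (Silver) layer as a managed Delta table.
silver.write.format("delta").mode("overwrite").saveAsTable("lakehouse.silver.orders")
```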

Backend Data Engineer

The mission of the Data & Analytics (D&A) team is to enable data users to easily...
Location:
United States, Cincinnati
Salary:
Not provided
HonorVet Technologies
Expiration Date:
Until further notice
Requirements:
  • Strong proficiency in Databricks (SQL, PySpark, Delta Lake, Jobs/Workflows)
  • Deep knowledge of Unity Catalog administration and APIs
  • Expertise in Python for automation scripts, API integrations, and data quality checks
  • Experience with governance frameworks (access control, tagging enforcement, lineage, compliance)
  • Solid foundation in security & compliance best practices (IAM, encryption, PII)
  • Experience with CI/CD and deployment pipelines (GitHub Actions, Azure DevOps, Jenkins)
  • Familiarity with monitoring/observability tools and building custom logging & alerting pipelines
  • Experience integrating with external systems (ServiceNow, monitoring platforms)
  • Experience with modern data quality frameworks (Great Expectations, Deequ, or equivalent)
  • Strong problem-solving and debugging skills in distributed systems
Job Responsibility:
  • Databricks & Unity Catalog Engineering: Build and maintain backend services leveraging Databricks (SQL, PySpark, Delta Lake, Jobs/Workflows)
  • Administer Unity Catalog including metadata, permissions, lineage, and tags
  • Integrate Unity Catalog APIs to surface data into the Metadata Catalog UI
  • Governance Automation: Develop automation scripts and pipelines to enforce access controls, tagging, and role-based policies
  • Implement governance workflows integrating with tools such as ServiceNow for request and approval processes
  • Automate compliance checks for regulatory and security requirements (IAM, PII handling, encryption)
  • Data Quality & Observability: Implement data quality frameworks (Great Expectations, Deequ, or equivalent) to validate datasets
  • Build monitoring and observability pipelines for logging, usage metrics, audit trails, and alerts
  • Ensure high system reliability and proactive issue detection
  • API Development & Integration: Design and implement APIs to integrate Databricks services with external platforms (ServiceNow, monitoring tools)

Senior Data Engineer

Location:
Salary:
Not provided
Kloud9
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience in developing scalable Big Data applications or solutions on distributed platforms
  • 4+ years of experience working with distributed technology tools, including Spark, Python, Scala
  • Working knowledge of Data warehousing, Data modelling, Governance and Data Architecture
  • Proficient in working on Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience working in Agile and Scrum development process
  • 3+ years of experience in Amazon Web Services (AWS), mainly S3, Managed Airflow, EMR/EC2, IAM, etc.
  • Experience architecting data product in Streaming, Serverless and Microservices Architecture and platform
  • 3+ years of experience working with Data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta)
  • Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
  • Working knowledge of reporting and analytical tools such as Tableau, QuickSight, etc.
Job Responsibility:
  • Design and develop scalable Big Data applications on distributed platforms to support large-scale data processing and analytics needs
  • Partner with others in solving complex problems by taking a broad perspective to identify innovative solutions
  • Build positive relationships across Product and Engineering
  • Influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
  • Quickly pick up new programming languages, technologies, and frameworks
  • Collaborate effectively in a high-speed, results-driven work environment to meet project deadlines and business goals
  • Utilize Data Warehousing tools such as SQL databases, Presto, and Snowflake for efficient data storage, querying, and analysis
  • Demonstrate experience in learning new technologies and skills.
What we offer:
  • Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields.

Senior Analyst - Data Engineer

Collaborate with data scientists and business stakeholders to design, develop, a...
Location:
India, Mumbai
Salary:
Not provided
Puma Energy
Expiration Date:
Until further notice
Requirements:
  • 5 years of overall experience & at least 3 years of relevant experience
  • 3 years of experience working with Azure or any cloud platform & Databricks
  • Proficiency in Spark, Delta Lake, Structured Streaming, and other Azure Databricks functionalities for sophisticated data pipeline construction
  • Strong capability in diagnosing and optimizing Spark applications and Databricks workloads, including strategic cluster sizing and configuration
  • Expertise in sharing data solutions that leverage Azure Databricks ecosystem technologies for enhanced data management and processing efficiency
  • Profound knowledge of data governance, data security, coupled with an understanding of large-scale distributed systems and cloud architecture design
  • Experience with a variety of data sources and BI tools
  • Experience with CI/CD and DevOps practices specifically tailored for the Databricks environment
Job Responsibility:
  • Contribute to the development of scalable and performant data pipelines on Databricks, leveraging Delta Lake, Delta Live Tables (DLT), and other core Databricks components
  • Develop data lakes/warehouses designed for optimized storage, querying, and real-time updates using Delta Lake
  • Implement effective data ingestion strategies from various sources (streaming, batch, API-based), ensuring seamless integration with Databricks
  • Ensure the integrity, security, quality, and governance of data across our Databricks-centric platforms
  • Collaborate with stakeholders (data scientists, analysts, product teams) to translate business requirements into Databricks-native data solutions
  • Build and maintain ETL/ELT processes, heavily utilizing Databricks, Spark (Scala or Python), SQL, and Delta Lake for transformations
  • Monitor and optimize the cost-efficiency of data operations on Databricks, ensuring optimal resource utilization
  • Utilize a range of Databricks tools, including the Databricks CLI and REST API, alongside Apache Spark™, to develop, manage, and optimize data engineering solutions
Employment Type: Full-time
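
To make the streaming-ingestion responsibilities above concrete, here is a minimal, hypothetical sketch using Databricks Auto Loader (cloudFiles) to land files in a Bronze Delta table; the source path, checkpoint location, and target table are placeholders.

```python
from pyspark.sql import SparkSession

# Hypothetical streaming ingestion with Databricks Auto Loader into Delta.
# Source path, checkpoint location, and target table are placeholders.
spark = SparkSession.builder.getOrCreate()

raw_stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders/_schema")
    .load("/mnt/landing/orders/")
)

# Append newly arrived records to the Bronze Delta table, then stop.
query = (
    raw_stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .trigger(availableNow=True)
    .toTable("lakehouse.bronze.orders_raw")
)
query.awaitTermination()
```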

Data Engineer

Skill Set: Data engineering; Total Experience: 6.00 to 8.00 Years; No of Openi...
Location:
United States, Wilmington
Salary:
110000.00 - 120000.00 USD / Year
Tech Mahindra
Expiration Date:
January 31, 2026
Requirements:
  • Bachelor’s or Higher Degree
  • 6.00 to 8.00 Years of total experience
  • Data engineering skill set
  • Write PySpark code with large datasets
  • Tune Spark jobs and ensure performance and reliability in distributed environments
  • Good analytical skills
  • Know how to build, test, and troubleshoot data pipelines and deploy them into higher environments
Job Responsibility:
  • ETL/ELT pipelines using Databricks, PySpark, Delta Lake, and AWS-native services
What we offer:
  • medical
  • vision
  • dental
  • life
  • disability insurance
  • paid time off (including holidays, parental leave, and sick leave)
Employment Type: Full-time

Lead Data Engineer

As a Lead Data Engineer at Rearc, you'll play a pivotal role in establishing and...
Location:
United States
Salary:
Not provided
Rearc
Expiration Date:
Until further notice
Requirements:
  • 10+ years of experience in data engineering, data architecture, or related technical fields
  • Proven ability to design, build, and optimize large-scale data ecosystems
  • Strong track record of leading complex data engineering initiatives
  • Deep hands-on expertise in ETL/ELT design, data warehousing, and data modeling
  • Extensive experience with data integration frameworks and best practices
  • Advanced knowledge of cloud-based data services and architectures (AWS Redshift, Azure Synapse Analytics, Google BigQuery, or equivalent)
  • Strong strategic and analytical thinking
  • Proficiency with modern data engineering frameworks (Databricks, Spark, lakehouse technologies like Delta Lake)
  • Exceptional communication and interpersonal skills
Job Responsibility:
  • Engage deeply with stakeholders to understand data needs, business challenges, and technical constraints
  • Translate stakeholder needs into scalable, high-quality data solutions
  • Implement with a DataOps mindset using tools like Apache Airflow, Databricks/Spark, Kafka
  • Build reliable, automated, and efficient data pipelines and architectures
  • Lead and execute complex projects
  • Provide technical direction and set engineering standards
  • Ensure alignment with customer goals and company principles
  • Mentor and develop data engineers
  • Promote knowledge sharing and thought leadership
  • Contribute to internal and external content
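
As an illustration of the Airflow-based DataOps orchestration mentioned in the responsibilities above (a sketch under assumed names and a recent Airflow 2.x release, not Rearc's actual pipelines), a minimal DAG wiring extract, transform, and load steps could look like this.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Minimal DAG illustrating a daily extract -> transform -> load flow.
# The DAG id, schedule, and task bodies are hypothetical placeholders.

def extract(**_):
    print("pull raw data from source systems")

def transform(**_):
    print("run the Spark/Databricks transformation job")

def load(**_):
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```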
What we offer:
  • Comprehensive health benefits
  • Generous time away and flexible PTO
  • Maternity and paternity leave
  • Access to educational resources with reimbursement for continued learning
  • 401(k) plan with company contribution

Senior Data Engineer

As a Senior Data Engineer at Rearc, you'll play a pivotal role in establishing a...
Location:
United States, New York
Salary:
160000.00 - 200000.00 USD / Year
Rearc
Expiration Date:
Until further notice
Requirements:
  • 8+ years of professional experience in data engineering across modern cloud architectures and diverse data systems
  • Expertise in designing and implementing data warehouses and data lakes across modern cloud environments (e.g., AWS, Azure, or GCP), with experience in technologies such as Redshift, BigQuery, Snowflake, Delta Lake, or Iceberg
  • Strong Python experience for data engineering, including libraries like Pandas, PySpark, NumPy, or Dask
  • Hands-on experience with Spark and Databricks (highly desirable)
  • Experience building and orchestrating data pipelines using Airflow, Databricks, DBT, or AWS Glue
  • Strong SQL skills and experience with both SQL and NoSQL databases (PostgreSQL, DynamoDB, Redshift, Delta Lake, Iceberg)
  • Solid understanding of data architecture principles, data modeling, and best practices for scalable data systems
  • Experience with cloud provider services (AWS, Azure, or GCP) and comfort using command-line interfaces or SDKs as part of development workflows
  • Familiarity with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, ARM/Bicep, or AWS CDK
  • Excellent communication skills, able to explain technical concepts to technical and non-technical stakeholders
Job Responsibility:
  • Provide strategic data engineering leadership by shaping the vision, roadmap, and technical direction of data initiatives to align with business goals
  • Architect and build scalable, reliable data solutions, including complex data pipelines and distributed systems, using modern frameworks and technologies (e.g., Spark, Kafka, Kubernetes, Databricks, DBT)
  • Drive innovation by evaluating, proposing, and adopting new tools, patterns, and methodologies that improve data quality, performance, and efficiency
  • Apply deep technical expertise in ETL/ELT design, data modeling, data warehousing, and workflow optimization to ensure robust, high-quality data systems
  • Collaborate across teams—partner with engineering, product, analytics, and customer stakeholders to understand requirements and deliver impactful, scalable solutions
  • Mentor and coach junior engineers, fostering growth, knowledge-sharing, and best practices within the data engineering team
  • Contribute to thought leadership through knowledge-sharing, writing technical articles, speaking at meetups or conferences, or representing the team in industry conversations
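
To ground the Spark and Kafka stack named in the responsibilities above, the following is a hedged sketch of a Kafka-to-Delta Structured Streaming job; the broker address, topic, event schema, and output table are assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical Kafka -> Delta streaming pipeline; broker, topic, schema,
# and the output table are assumptions used only to show the pattern.
spark = SparkSession.builder.getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payments")
    .load()
    # Kafka delivers the payload as bytes; decode the value and parse the JSON.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Continuously append parsed events to a curated Delta table.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/payments")
    .toTable("lakehouse.silver.payments")
)
query.awaitTermination()
```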
What we offer:
  • Health Benefits
  • Generous time away
  • Maternity and Paternity leave
  • Educational resources and reimbursements
  • 401(k) plan with a company contribution
Employment Type: Full-time

Principal Data Engineer

Our Platform Engineering Team is working to solve the Multiplicity Problem. We a...
Location:
United States, Reston
Salary:
Not provided
Intellibus
Expiration Date:
Until further notice
Requirements:
  • ETL – Experience with ETL processes for data integration
  • SQL – Strong SQL skills for querying and data manipulation
  • Python – Strong command of Python, especially in AWS Boto3, JSON handling, and dictionary operations
  • Unix – Competent in Unix for file operations, searches, and regular expressions
  • AWS – Proficient with AWS services including EC2, Glue, S3, Step Functions, and Lambda for scalable cloud solutions
  • Database Modeling – Solid grasp of database design principles, including logical and physical data models, and change data capture (CDC) mechanisms
  • Snowflake – Experienced in Snowflake for efficient data integration, utilizing features like Snowpipe, Streams, Tasks, and Stored Procedures
  • Airflow – Fundamental knowledge of Airflow for orchestrating complex data workflows and setting up automated pipelines
  • Bachelor's degree in Computer Science, or a related field is preferred. Relevant work experience may be considered in lieu of a degree
  • Excellent communication and interpersonal skills, with the ability to effectively collaborate with cross-functional teams and stakeholders
Job Responsibility:
  • Design, develop, and maintain data pipelines to ingest, transform, and load data from various sources into Snowflake
  • Implement ETL (Extract, Transform, Load) processes using Snowflake's features such as Snowpipe, Streams, and Tasks
  • Design and implement efficient data models and schemas within Snowflake to support reporting, analytics, and business intelligence needs
  • Optimize data warehouse performance and scalability using Snowflake features like clustering, partitioning, and materialized views
  • Integrate Snowflake with external systems and data sources, including on-premises databases, cloud storage, and third-party APIs
  • Implement data synchronization processes to ensure consistency and accuracy of data across different systems
  • Monitor and optimize query performance and resource utilization within Snowflake using query profiling, query optimization techniques, and workload management features
  • Identify and resolve performance bottlenecks and optimize data warehouse configurations for maximum efficiency
  • Work on Snowflake modeling – roles, databases, schemas, ETL tools with cloud-driven skills
  • Work on SQL performance measuring, query tuning, and database tuning
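
Since the responsibilities above center on Snowflake's Snowpipe/Streams/Tasks pattern, here is a hedged Python sketch using the snowflake-connector-python library to create a stream and a scheduled task; the connection parameters, object names, columns, and schedule are placeholders.

```python
import snowflake.connector

# Hypothetical setup of the Snowflake Stream + Task CDC pattern named above.
# Connection parameters, object names, and the schedule are placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# A stream captures change data (CDC) on the landing table.
cur.execute("CREATE STREAM IF NOT EXISTS raw.orders_stream ON TABLE raw.orders")

# A task periodically loads the captured changes into the curated table.
cur.execute("""
    CREATE TASK IF NOT EXISTS raw.load_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO curated.orders
      SELECT order_id, customer_id, amount, order_ts FROM raw.orders_stream
""")

# Tasks are created in a suspended state; resume to start the schedule.
cur.execute("ALTER TASK raw.load_orders RESUME")

cur.close()
conn.close()
```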