Senior Analyst - Data Engineer

Puma Energy

Location:
India, Mumbai

Contract Type:
Not provided

Salary:
Not provided

Job Description:

Main Purpose: Partner with data scientists and business stakeholders to design, develop, and maintain efficient data pipelines feeding the organization's data lake, and safeguard the lake's integrity and quality so it supports accurate, actionable insights and informed decision-making. Apply deep data engineering and cloud expertise to strengthen the data infrastructure, optimize pipelines for efficient and accessible data storage, and own the development of an extensive data catalog that underpins robust data governance and effective data access across the organization.

Job Responsibility:

  • Collaborate with data scientists and business stakeholders to design, develop, and maintain efficient data pipelines feeding into the organization's data lake
  • Maintain the integrity and quality of the data lake, enabling accurate and actionable insights for data scientists and informed decision-making for business stakeholders
  • Utilize extensive knowledge of data engineering and cloud technologies to enhance the organization’s data infrastructure, promoting a culture of data-driven decision-making
  • Apply data engineering expertise to define and optimize data pipelines using advanced concepts to improve the efficiency and accessibility of data storage
  • Own the development of an extensive data catalog, ensuring robust data governance and facilitating effective data access and utilization across the organization
  • Collaborate with stakeholders (data scientists, analysts, product teams) to translate business requirements into Databricks-native data solutions

Requirements:

  • Contribute to the development of scalable and performant data pipelines on Databricks, leveraging Delta Lake, Delta Live Tables (DLT), and other core Databricks components (see the sketch after this list)
  • Develop data lakes/warehouses designed for optimized storage, querying, and real-time updates using Delta Lake
  • Implement effective data ingestion strategies from various sources (streaming, batch, API-based), ensuring seamless integration with Databricks
  • Ensure the integrity, security, quality, and governance of data across our Databricks-centric platforms
  • Build and maintain ETL/ELT processes, heavily utilizing Databricks, Spark (Scala or Python), SQL, and Delta Lake for transformations
  • Experience with CI/CD and DevOps practices specifically tailored for the Databricks environment
  • Monitor and optimize the cost-efficiency of data operations on Databricks, ensuring optimal resource utilization
  • Utilize a range of Databricks tools, including the Databricks CLI and REST API, alongside Apache Spark™, to develop, manage, and optimize data engineering solutions
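
A minimal sketch of the Delta Live Tables pattern called out above, in Python. This is not Puma Energy's pipeline: the landing path, table names, and the data-quality expectation are illustrative assumptions, and `dlt` and `spark` are supplied by the Databricks DLT runtime.

```python
import dlt
from pyspark.sql.functions import col, current_timestamp

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader.")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")           # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")                   # hypothetical landing path
    )

@dlt.table(comment="Cleansed orders, ready for analytics.")
@dlt.expect_or_drop("valid_amount", "amount > 0")       # rows failing this rule are dropped
def orders_clean():
    return (
        dlt.read_stream("orders_raw")                   # stream from the table above
        .filter(col("order_id").isNotNull())
        .withColumn("ingested_at", current_timestamp())
    )
```

Delta Lake then gives both tables ACID transactions and time travel, which is what makes the "optimized storage, querying, and real-time updates" requirement above practical.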

Additional Information:

Job Posted:
January 09, 2026

Employment Type:
Full-time

Similar Jobs for Senior Analyst - Data Engineer

Senior Data Engineer

Our Data Engineering Team comprises data experts. We build world-class dat...
Location:
Not provided
Salary:
Not provided
Company:
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience with 5+ years of professional experience as a Sr. Data Engineer or in a similar role
  • Strong programming skills using Python
  • Working knowledge of relational databases and query authoring (SQL)
  • Experience designing data models for optimal storage and retrieval to meet product and business requirements
  • Experience building scalable data pipelines using Spark (SparkSQL) with Airflow scheduler/executor framework or similar scheduling tools (see the sketch after this list)
  • Experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka)
  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team
  • Well-versed in modern software development practices (Agile, TDD, CI/CD)
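
A minimal PySpark/SparkSQL sketch of the pipeline pattern in the requirements above. The source and destination paths, view name, and columns are illustrative assumptions; in practice a job like this would be submitted by an Airflow scheduler.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily_events_rollup").getOrCreate()

# Hypothetical source: raw event data landed as Parquet.
events = spark.read.parquet("s3://example-bucket/raw/events/")
events.createOrReplaceTempView("events")

# SparkSQL transformation: daily counts per event type.
daily = spark.sql("""
    SELECT DATE(event_ts) AS day, event_type, COUNT(*) AS n_events
    FROM events
    GROUP BY DATE(event_ts), event_type
""")

# Partitioned write keeps downstream queries cheap.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/rollups/daily_events/"
)
```
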
Job Responsibility:
  • You'll partner with the product analytics and data scientist team to build the data solutions that allow them to obtain more insights from our data and use that to support important business decisions
  • You'll work with different stakeholders to understand their needs and architect/build the data models, data acquisition/ingestion processes and data applications to address those requirements
  • You'll add new sources, code business rules, and produce new metrics that support the product analysts and data scientists
  • You'll be the data domain expert who understands all the nitty-gritty of our products
  • You'll own a problem end-to-end
  • You'll improve data quality by using & improving internal tools/frameworks to automatically detect DQ issues
What we offer:
  • health and wellbeing resources
  • paid volunteer days

Senior Data Engineer

The mission of the business intelligence team is to create a data-driven culture...
Location:
India, Hyderabad
Salary:
Not provided
Company:
Randstad
Expiration Date:
February 28, 2026
Requirements:
  • Master’s degree in Computer Science / Information Technology or related field, highly preferred
  • Extensive knowledge of BI concepts and related technologies that help drive sustainable technical solutions
  • Extensive experience with data lakes, ETL, and data warehouses
  • Advanced experience building data pipelines
  • Passion for building quality BI software
  • Project Management and/or process improvement experience highly preferred
  • Polyglot coder with expert-level skills in multiple languages, including Python, R, Java, and SQL, plus relational databases, ERP systems, and DOMO or other data visualization tools (e.g., Tableau)
  • Advanced, proven experience with Google Cloud Platform (GCP) preferred, but experience with Microsoft Azure / Amazon will be considered (see the sketch after this list)
  • Any exposure to Kafka, Spark, and Scala will be an added advantage
  • Should demonstrate a strong understanding of OOP concepts and methodologies
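
Since the requirements above lean toward GCP, a minimal sketch of querying BigQuery from Python. The project, dataset, and table names are invented for illustration; this is not Randstad's schema.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="analytics-demo")      # hypothetical project

sql = """
    SELECT region, COUNT(*) AS placements
    FROM `analytics-demo.staffing.placements`           -- hypothetical table
    WHERE placed_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
    GROUP BY region
    ORDER BY placements DESC
"""

# .result() blocks until the query job finishes, then yields rows.
for row in client.query(sql).result():
    print(f"{row.region}: {row.placements}")
```
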
Job Responsibility:
  • Architect and build complex data pipelines using advanced cloud data technologies
  • Lead efforts to optimize data pipelines for performance, scalability, and cost-efficiency
  • Define industry best practices for building data pipelines
  • Ensure data security, compliance, and governance standards are met
  • Partner with leadership team to define and implement agile and DevOps methodologies
  • Serve as subject matter expert and define data architecture and infrastructure requirements
  • Partner with business analysts to plan project execution including appropriate product and technical specifications, direction, resources, and establishing realistic completion times
  • Understand data technology trends and identify opportunities to implement new technologies and provide forward-thinking recommendations
  • Proactively partner with internal stakeholders to bridge gaps, provide historical references, and design the appropriate processes
  • Design and implement a robust data observability process
Employment Type:
Full-time

Senior Data Engineer

As a Senior Data Engineer on our Core Engineering Data Team, you will design and...
Location:
United States, Boston
Salary:
111800.00 - 164000.00 USD / Year
Company:
SimpliSafe
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field or equivalent practical experience
  • 4+ years of experience in software engineering, data engineering, or a related field, with at least 2 years focused on data operations or data infrastructure
  • Strong knowledge of AWS or other public cloud platforms (e.g., Azure, GCP)
  • Strong SQL knowledge and experience optimizing for data warehousing technologies like AWS Athena
  • Strong knowledge of Python for use in data transformation
  • Hands-on experience with ETL/ELT, schema design, and data lake technologies
  • Hands-on experience with data orchestration tools like Dagster, Airflow, or Prefect (see the sketch after this list)
  • Experience with CI/CD pipelines, Docker, Kubernetes, and infrastructure-as-code tools (e.g., Terraform, CloudFormation)
  • Familiarity with various data and table formats (JSON, Avro, Parquet, Iceberg)
  • Love of data and a passion for building reliable data products
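
A minimal Dagster sketch of the orchestration style named above: two software-defined assets, with the dependency wired from the parameter name. The asset names and toy transformation are illustrative assumptions.

```python
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_signups() -> pd.DataFrame:
    # Stand-in for a real extract (API, S3, warehouse, ...).
    return pd.DataFrame({"user_id": [1, 2, 2], "plan": ["pro", "free", "free"]})

@asset
def unique_signups(raw_signups: pd.DataFrame) -> pd.DataFrame:
    # Dagster infers that this asset depends on raw_signups.
    return raw_signups.drop_duplicates(subset="user_id")

defs = Definitions(assets=[raw_signups, unique_signups])
```

Airflow or Prefect would express the same two-step lineage as a DAG of tasks rather than assets.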
Job Responsibility:
  • Collaborate with analysts, engineers, product managers, and stakeholders to design and implement solutions for Product and Engineering data workflows
  • Identify areas for improvement and contribute to centralized data platform
  • Manage data pipeline, orchestration, storage, and analytics infrastructure for Product and Engineering
  • Monitor performance and reliability of data pipelines, implementing solutions for scalability and efficiency
  • Optimize table structures to support query and usage patterns
  • Partner with producers of data across SimpliSafe to develop an understanding of data creation and meaning
  • Support data discovery, catalog, and analytics tooling
  • Implement and maintain data security measures and ensure compliance with data governance policies
  • Design and implement testing strategies and data quality validation to ensure the accuracy, reliability, and integrity of data pipelines
  • Contribute to the formation of our new team, assisting with the development of team norms, practices, and charter
What we offer:
  • A mission- and values-driven culture and a safe, inclusive environment where you can build, grow and thrive
  • A comprehensive total rewards package that supports your wellness and provides security for SimpliSafers and their families
  • Free SimpliSafe system and professional monitoring for your home
  • Employee Resource Groups (ERGs) that bring people together, give opportunities to network, mentor and develop, and advocate for change
  • Participation in our annual bonus program, equity, and other forms of compensation, in addition to a full range of medical, retirement, and lifestyle benefits
Employment Type:
Full-time

Senior Data Engineer

We are looking for a Data Engineer to join our team and support with designing, ...
Location:
Not provided
Salary:
Not provided
Company:
Foundever
Expiration Date:
Until further notice
Requirements:
  • At least 7 years of experience in data engineering
  • Track record of deploying and maintaining complex data systems at an enterprise level within regulated environments
  • Expertise in implementing robust data security measures, access controls, and monitoring systems
  • Proficiency in data modeling and database management
  • Strong programming skills in Python and SQL
  • Knowledge of big data technologies like Hadoop, Spark, and NoSQL databases
  • Deep experience with ETL processes and data pipeline development
  • Strong understanding of data warehousing concepts and best practices
  • Experience with cloud platforms such as AWS and Azure
  • Excellent problem-solving skills and attention to detail
Job Responsibility:
  • Design and optimize complex data storage solutions, including data warehouses and data lakes
  • Develop, automate, and maintain data pipelines for efficient and scalable ETL processes
  • Ensure data quality and integrity through data validation, cleansing, and error handling (see the sketch after this list)
  • Collaborate with data analysts, machine learning engineers, and software engineers to deliver relevant datasets or data APIs for downstream applications
  • Implement data security measures and access controls to protect sensitive information
  • Monitor data infrastructure for performance and reliability, addressing issues promptly
  • Stay abreast of industry trends and emerging technologies in data engineering
  • Document data pipelines, processes, and best practices for knowledge sharing
  • Lead data governance and compliance efforts to meet regulatory requirements
  • Collaborate with cross-functional teams to drive data-driven decision-making within the organization
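
A minimal pandas sketch of the validation/cleansing/error-handling responsibility above: invalid rows are quarantined instead of failing the whole load. The column names and the rejection sink are illustrative assumptions.

```python
import logging
import pandas as pd

logger = logging.getLogger("etl.orders")

def cleanse_orders(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Coerce bad values to NaN instead of raising mid-pipeline.
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    valid = df["amount"].notna() & (df["amount"] > 0) & df["customer_id"].notna()

    rejected = df[~valid]
    if not rejected.empty:
        logger.warning("Quarantining %d invalid rows", len(rejected))
        rejected.to_csv("rejected_orders.csv", index=False)  # hypothetical sink

    return df[valid].drop_duplicates(subset="order_id")
```
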
What we offer:
  • Impactful work
  • Professional growth
  • Competitive compensation
  • Collaborative environment
  • Attractive salary and benefits package
  • Continuous learning and development opportunities
  • A supportive team culture with opportunities for occasional travel for training and industry events

Senior Data Engineer

The Data Engineer will build scalable pipelines and data models, implement ETL w...
Location:
United States, Fort Bragg
Salary:
Not provided
Company:
Barbaricum
Expiration Date:
Until further notice
Requirements:
  • Active DoD TS/SCI clearance (required or pending verification)
  • Bachelor’s degree in Computer Science, Data Science, Engineering, or related field (or equivalent experience) OR CSSLP / CISSP-ISSAP
  • Strong programming skills in Python, Java, or Scala
  • Strong SQL skills
  • Familiarity with analytics languages/tools such as R
  • Experience with data processing frameworks (e.g., Apache Spark, Hadoop) and orchestration tools (e.g., Airflow)
  • Familiarity with cloud-based data services (e.g., AWS Redshift, Google BigQuery, Azure Data Factory)
  • Experience with data modeling, database design, and data architecture concepts (see the sketch after this list)
  • Strong analytical and problem-solving skills with attention to detail
  • Strong written and verbal communication skills
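
The data-modeling requirement above can be illustrated with a tiny star schema: one fact table joined to one dimension, built here on the standard-library sqlite3 driver purely for portability. All table and column names are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_unit (
        unit_id   INTEGER PRIMARY KEY,
        unit_name TEXT NOT NULL
    );
    CREATE TABLE fact_report (
        report_id INTEGER PRIMARY KEY,
        unit_id   INTEGER NOT NULL REFERENCES dim_unit(unit_id),
        reported  TEXT NOT NULL,     -- ISO-8601 timestamp
        metric    REAL NOT NULL
    );
""")
con.execute("INSERT INTO dim_unit VALUES (1, 'alpha')")
con.execute("INSERT INTO fact_report VALUES (1, 1, '2026-01-09T00:00:00', 42.0)")

# The canonical star-schema query shape: fact joined to its dimension.
for row in con.execute("""
    SELECT u.unit_name, COUNT(*) AS reports, AVG(f.metric) AS avg_metric
    FROM fact_report f JOIN dim_unit u USING (unit_id)
    GROUP BY u.unit_name
"""):
    print(row)
```
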
Job Responsibility:
  • Build and maintain scalable, reliable data pipelines to collect, process, and store data from multiple sources
  • Design and implement ETL processes to support analytics, reporting, and operational needs
  • Develop and maintain data models, schemas, and standards to support enterprise data usage
  • Collaborate with data scientists, analysts, and stakeholders to understand requirements and deliver solutions
  • Analyze large datasets to identify trends, patterns, and actionable insights
  • Present findings and recommendations through dashboards, reports, and visualizations
  • Optimize database and pipeline performance for scalability and reliability across large datasets
  • Monitor and troubleshoot pipeline issues to minimize downtime and improve system resilience
  • Implement data quality checks, validation routines, and integrity controls
  • Implement security measures to protect data and systems from unauthorized access

Senior Data Engineer II

We are looking for a skilled Data Engineer to join our growing team. You will pl...
Location:
India, Hyderabad
Salary:
Not provided
Company:
Alter Domus
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • 3+ years of experience as a Data Engineer or in a similar role working with cloud-based data platforms
  • Technical Skills: Cloud & Orchestration: Airflow (self-managed or managed services like Amazon MWAA) for workflow orchestration, DAG development, and scheduling
  • Familiarity with best practices for Airflow DAG structure, dependency management, and error handling
  • AWS Expertise: Hands-on experience with AWS Lake Formation, S3, Athena, and related services (e.g., Lambda, Glue, IAM)
  • Snowflake: Proficient in setting up data warehouses, configuring security, and optimizing queries on Snowflake
  • Data Ingestion & Transformation: Experience with Airbyte or similar tools for data ingestion
  • dbt or other SQL-based transformation frameworks for modular data processing
  • Programming: Proficiency in Python and/or Java/Scala for building data pipelines and custom integrations
  • Query Languages: Advanced knowledge of SQL for data manipulation and analysis
Job Responsibility:
  • Data Pipeline Orchestration: Design, build, and maintain end-to-end data pipelines using Airflow (including managed services like Amazon MWAA) to orchestrate, schedule, and monitor batch/streaming workflows
  • Implement DAGs (Directed Acyclic Graphs) with retry logic, error handling, and alerting to ensure data quality and pipeline reliability (see the sketch after this list)
  • Data Ingestion & Transformation: Integrate data from various sources using Airbyte for ingestion and dbt for transformations in a scalable and modular fashion
  • Collaborate with Data Analysts and Data Scientists to implement transformations and business logic, ensuring data is analytics-ready
  • Data Modeling & Warehousing: Design and implement efficient data models for both structured and semi-structured data in AWS S3 (data lake) and Snowflake (data warehouse)
  • Ensure data schemas and transformations support advanced analytics, BI reporting, and machine learning use cases
  • Data Governance & Security: Utilize AWS Lake Formation APIs and best practices to maintain data security, access controls, and compliance
  • Work closely with IT security to establish robust encryption standards, audit trails, and identity/role-based access
  • Performance Optimization: Optimize AWS Athena queries and configurations (e.g., data partitioning) for performance and cost efficiency
  • Monitor and tune Airflow DAGs, Snowflake queries, and data transformations to improve throughput and reliability
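
The retry/alerting pattern described above, as a minimal Airflow sketch (Airflow 2.4+ argument names): per-task retries with exponential backoff plus a failure callback. The DAG id, schedule, notification target, and task bodies are illustrative assumptions; the callables stand in for an Airbyte sync and a dbt run.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Stand-in for Slack/PagerDuty/email alerting.
    ti = context["task_instance"]
    print(f"Task {ti.task_id} failed: {context.get('exception')}")

def ingest():     # stand-in for triggering an Airbyte sync
    ...

def transform():  # stand-in for a dbt run
    ...

with DAG(
    dag_id="ingest_then_transform",     # hypothetical
    start_date=datetime(2026, 1, 1),
    schedule="@hourly",
    catchup=False,
    default_args={
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
        "retry_exponential_backoff": True,
        "on_failure_callback": notify_on_failure,
    },
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task   # transform runs only after a successful ingest
```
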
What we offer:
  • Support for professional accreditations
  • Flexible arrangements, generous holidays, plus an additional day off for your birthday
  • Continuous mentoring along your career progression
  • Active sports, events and social committees across our offices
  • 24/7 support available from our Employee Assistance Program
  • The opportunity to invest in our growth and success through our Employee Share Plan
  • Plus additional local benefits depending on your location

Senior Data Engineer

Adtalem is a data-driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
84835.61 - 149076.17 USD / Year
Company:
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field
  • Master's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field
  • Two (2)+ years of experience in Google Cloud with services like BigQuery, Composer, GCS, DataStream, Dataflow, BQML, and Vertex AI
  • Six (6)+ years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics
  • Hands-on experience working with real-time, unstructured, and synthetic data.
  • Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar (see the sketch after this list)
  • Expert knowledge of Python programming and SQL
  • Experience with cloud platforms (AWS, GCP, Azure) and their data services
  • Experience working with Airflow as a workflow management tool and building operators to connect, extract, and ingest data as needed
  • Familiarity with synthetic data generation and unstructured data processing
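
The real-time Pub/Sub requirement above, sketched with the google-cloud-pubsub client: messages are pulled on a background thread and acknowledged only after handling. The project and subscription ids are illustrative assumptions, and the print is a stand-in for a write to BigQuery or GCS.

```python
import json
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path("demo-project", "events-sub")  # hypothetical

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    event = json.loads(message.data.decode("utf-8"))
    print("received", event)   # stand-in for a BigQuery/GCS write
    message.ack()              # ack only after successful handling

streaming_pull = subscriber.subscribe(subscription, callback=handle)
with subscriber:
    try:
        streaming_pull.result(timeout=60)   # pull for one minute, then stop
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()             # wait for shutdown to complete
```
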
Job Responsibility:
  • Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets
  • Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound.
  • Model the data platform by applying business logic and building objects in its semantic layer
  • Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Eligible to participate in an annual incentive program
Employment Type:
Full-time

Senior Data Engineer

Adtalem is a data-driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
85000.00 - 150000.00 USD / Year
Company:
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field (Required)
  • Master's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field (Preferred)
  • 2+ years of experience in Google Cloud with services like BigQuery, Composer, GCS, DataStream, Dataflow (Required)
  • 6+ years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics (Required)
  • Expert knowledge of SQL and Python programming
  • Experience working with Airflow as a workflow management tool and building operators to connect, extract, and ingest data as needed
  • Experience in tuning queries for performance and scalability
  • Experience in Real Time Data ingestion using GCP PubSub, Kafka, Spark or similar
  • Excellent organizational, prioritization and analytical abilities
  • Proven experience with incremental execution, demonstrated through successful launches
Job Responsibility:
  • Work closely with various business, IT, Analyst and Data Science groups to collect business requirements
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound
  • Model the data platform by applying business logic and building objects in its semantic layer
  • Optimize data pipelines for performance, scalability, and reliability
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
  • Troubleshoot and resolve data engineering issues as they arise
  • Develop REST APIs to expose data to other teams within the company
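
The last responsibility above, exposing data over REST, can be sketched with FastAPI. The route, model, and in-memory stand-in for the warehouse are illustrative assumptions; run with `uvicorn app:app`.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-products-api")   # hypothetical service name

class Enrollment(BaseModel):
    program: str
    students: int

# In-memory stand-in for a warehouse query; real code would hit the warehouse.
FAKE_WAREHOUSE = {"nursing": Enrollment(program="nursing", students=1250)}

@app.get("/enrollments/{program}", response_model=Enrollment)
def get_enrollment(program: str) -> Enrollment:
    row = FAKE_WAREHOUSE.get(program)
    if row is None:
        raise HTTPException(status_code=404, detail="unknown program")
    return row
```
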
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
Employment Type:
Full-time