
L3 Data Engineer


Randstad


Location:
India, Bengaluru



Contract Type:
Not provided


Salary:
Not provided

Job Description:

The Data Engineer leverages technical expertise and domain knowledge to design, build, and maintain efficient and robust data solutions. The Data Engineer is responsible for building, testing, and deploying data pipelines on the EDP, and shall ensure that pipelines are developed and deployed with a secure-by-design approach, delivering robust, thoroughly tested, and maintainable solutions.

Job Responsibility:

  • Build: Pipeline Development
  • Develop pipelines and workflows using standard patterns with StreamSets, Kestra, dbt, and Git
  • Design and implement data storage and processing solutions using Snowflake
  • Utilize AWS services for cloud-based platform tooling and infrastructure, including but not limited to: Lambda, ECS, MSK, RDS, EC2, Secrets Manager, ALB, CloudWatch, EventBridge
  • Utilize Terraform for AWS and Azure deployments
  • Leverage and integrate APIs for data access and manipulation
  • Write Python scripts for common data processing and automation tasks
  • Leverage Platform APIs and web applications to enforce platform security
  • Development experience with Go, SQL, C#, .NET, JavaScript, shell scripts, and container platforms such as Docker
  • The engineer shall have experience integrating with time-series source systems: Honeywell Plant Historian Database, OSI PI
  • The engineer shall have experience with authentication mechanisms including, but not limited to, OAuth 2.0, OIDC, Microsoft Entra, key pair authentication, certificate-based authentication, and SAML-based SSO
  • Test: Quality Assurance
  • Create and execute comprehensive test plans to ensure the pipelines' functionality and performance
  • Develop unit tests, integration tests, and end-to-end tests for data pipelines and workflows
  • Ensure data accuracy and consistency through rigorous testing processes
  • Leverage automated testing processes to enhance efficiency
  • Governance: Compliance & Risk
  • This role requires strict adherence to access processes and procedures to maintain Data Privacy and Security
  • Identify and report any potential breaches of the Data Information and Systems Processes
  • Operate: Platform Maintenance
  • Monitor and manage the platform to ensure optimal performance and uptime
  • Conduct regular maintenance tasks such as updates, patches, and backups
  • Resolve any issues or incidents related to the platform in a timely manner
  • Continuously improve platform operations through automation and optimization
  • Strong experience with Windows and Unix-like operating systems
  • Security: Secure by Design
  • Implement security best practices throughout the pipeline development and deployment process
  • Conduct regular security reviews and vulnerability assessments
  • Ensure data encryption, access control, and other security measures are enforced
  • Use credential management platforms like Thycotic Secret Server, AWS Secrets Manager
  • Support: Technical Guidance
  • Assist in troubleshooting and resolving intricate technical issues
  • Deliverables: Robust and scalable data pipelines with well-documented code and processes
  • Comprehensive test plans and automation scripts ensuring platform reliability
  • Regular security assessments and compliance reports
  • Technical support and guidance documentation for delivery data engineers
  • Deliver secure, robust, and maintainable data pipelines
  • Ensure high-quality and thoroughly tested data solutions
  • Maintain compliance with security standards and best practices
  • Maintain compliance with the Data Lifecycle Management Process
  • Maintain compliance with the Data Privacy standards and best practices
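To illustrate the testing expectations above (develop unit tests, ensure data accuracy through rigorous testing), here is a minimal sketch of a unit-tested pipeline step in Python. The function and field names (`normalize_reading`, `timestamp`, `value`, `tag`) are invented for the example, not part of this role's actual codebase:

```python
from datetime import datetime, timezone

def normalize_reading(record: dict) -> dict:
    """Normalize one raw time-series record: parse the timestamp to UTC
    and coerce the value to float, rejecting malformed rows."""
    ts = datetime.fromisoformat(record["timestamp"]).astimezone(timezone.utc)
    value = float(record["value"])
    if value < 0:
        raise ValueError(f"negative sensor reading: {value}")
    return {"timestamp": ts.isoformat(), "value": value, "tag": record["tag"]}

# Unit test in the spirit of the QA bullets: verify accuracy, and verify
# that bad rows are rejected rather than silently passed through.
def test_normalize_reading():
    good = {"timestamp": "2026-03-26T10:00:00+05:30", "value": "41.5", "tag": "FIC-101"}
    out = normalize_reading(good)
    assert out["value"] == 41.5
    assert out["timestamp"].endswith("+00:00")  # converted to UTC
    try:
        normalize_reading({"timestamp": "2026-03-26T10:00:00+00:00", "value": "-1", "tag": "X"})
        assert False, "expected ValueError for a negative reading"
    except ValueError:
        pass

test_normalize_reading()
```

In practice such tests would run automatically in CI alongside integration and end-to-end tests, as the automated-testing bullet suggests.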

Requirements:

  • 10 years of Data Engineering/Data Analyst experience
  • Bachelor’s or Master’s degree in Computer Science, Data Science, Information Technology, or a related field with a focus on data engineering or data analytics
  • Strong proficiency in programming languages such as Python, SQL, Java, or Scala for data processing and analysis
  • Experience with data modeling, ETL processes, data warehousing, data integration, and data pipeline development
  • Proficiency in relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra)
  • Working knowledge of cloud platforms such as Snowflake, AWS, Azure, or Google Cloud Platform for data storage and processing
  • Experience with data visualization tools (e.g., Power BI) to create meaningful insights from data
  • Understanding of data quality principles, data governance, and data validation processes
  • Ability to analyze complex data sets, identify trends, patterns, and insights to drive data-driven decision-making
  • Proficiency in troubleshooting data-related issues, identifying root causes, and implementing solutions
  • Familiarity with project management methodologies to contribute effectively to project planning and execution
  • Strong verbal and written communication skills to collaborate with cross-functional teams and communicate technical concepts to non-technical stakeholders
  • Willingness to stay updated with the latest data technologies, tools, and industry trends to enhance data engineering skills
  • Prior experience in data engineering, data analytics, or related roles with a track record of successful data project delivery
  • Technical Leadership: Ability to make informed, strategic decisions that align technology with business objectives, while balancing short-term and long-term trade-offs
  • Customer Focus: Deep understanding of customer needs and how to translate them into effective technical solutions that drive business value
  • Collaboration: Encourages collaboration across teams and stakeholders, breaking down silos and ensuring alignment
  • Problem-Solving: Strong analytical and problem-solving skills, capable of addressing complex technical challenges and delivering innovative solutions
  • Innovation: Ability to lead innovation in technology while maintaining an eye on product-market fit and user experience
  • Agility: Adaptability in a fast-moving environment, with a mindset focused on delivering high-impact solutions quickly and iteratively
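As a small illustration of the data-quality and validation requirement above, the sketch below runs SQL-based checks against an in-memory database using Python's built-in sqlite3 module. The table and column names (`orders`, `amount`, `customer`) are hypothetical, chosen only for the example:

```python
import sqlite3

# Load a few sample rows, including deliberately bad ones.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, customer TEXT)")
conn.executemany(
    "INSERT INTO orders (id, amount, customer) VALUES (?, ?, ?)",
    [(1, 120.0, "acme"), (2, None, "globex"), (3, -5.0, "initech")],
)

# Null check: amount must always be populated.
null_count = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount IS NULL"
).fetchone()[0]

# Range check: amounts must be non-negative (NULLs are excluded by SQL semantics).
negative_count = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount < 0"
).fetchone()[0]

print(f"rows failing null check: {null_count}")       # 1
print(f"rows failing range check: {negative_count}")  # 1
```

The same pattern (counting rule violations in SQL and alerting on non-zero results) scales from a local sketch like this to warehouse platforms such as Snowflake.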
What we offer:
  • Commitment to your ongoing development, including on-the-job opportunities and formal programs
  • Inclusive parental leave entitlements for both parents
  • Values-led culture
  • Flexible work options
  • Generous annual leave, sick leave and casual leave
  • Cultural and religious leave with flexible public holiday opportunities
  • A competitive remuneration package featuring performance-based incentives and an uncapped Employer Provident Fund

Additional Information:

Job Posted:
March 26, 2026

Expiration:
May 07, 2026

Employment Type:
Fulltime



Similar Jobs for L3 Data Engineer

Lead Data Engineer

Lead Data Engineer to serve as both a technical leader and people coach for our ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration:
Until further notice
Requirements:
  • Bachelor’s or master’s degree in computer science, Engineering, or related field
  • 8-10 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
  • Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance)
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentoring and coaching team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type: Fulltime

Lead Data Engineer

Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company. A leader i...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration:
Until further notice
Requirements:
  • Bachelor’s or master’s degree in computer science, Engineering, or related field
  • 7-9 years of data engineering experience with strong hands-on delivery using ADF, SQL, Python, Databricks, and Spark
  • Experience designing data pipelines, warehouse models, and processing frameworks using Snowflake or Azure Synapse
  • Proficient with CI/CD tools (Azure DevOps, GitHub) and observability practices
  • Solid grasp of data governance, metadata tagging, and role-based access control
  • Proven ability to mentor and grow engineers in a matrixed or global environment
  • Strong verbal and written communication skills, with the ability to operate cross-functionally
  • Certifications in Azure, Databricks, or Snowflake are a plus
  • Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management)
  • Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools
Job Responsibility:
  • Design, develop, and maintain scalable pipelines across ADF, Databricks, Snowflake, and related platforms
  • Lead the technical execution of non-domain specific initiatives (e.g. reusable dimensions, TLOG standardization, enablement pipelines)
  • Architect data models and re-usable layers consumed by multiple downstream pods
  • Guide platform-wide patterns like parameterization, CI/CD pipelines, pipeline recovery, and auditability frameworks
  • Mentoring and coaching team
  • Partner with product and platform leaders to ensure engineering consistency and delivery excellence
  • Act as an L3 escalation point for operational data issues impacting foundational pipelines
  • Own engineering best practices, sprint planning, and quality across the Enablement pod
  • Contribute to platform discussions and architectural decisions across regions
Employment Type: Fulltime

Big Data Engineer

Inetum is seeking a seasoned Big Data Engineer to join our team and support the ...
Location:
Portugal, Lisbon
Salary:
Not provided
Inetum
Expiration:
Until further notice
Requirements:
  • Bachelor’s degree in Computer Science or equivalent
  • Minimum 5 years of experience in a similar role
  • Strong expertise in Hadoop Ecosystem & Data Lake Operations, Big Data Services & Query Engines, Security & Access Control in Distributed Environments, Monitoring, Automation & Platform Lifecycle Management
  • Proven ability to diagnose and resolve complex issues in Big Data environments
  • Strong communication skills to collaborate with cross-functional teams
  • English proficiency: B2-C1 level
Job Responsibility:
  • Operate, monitor, and support Hadoop-based data lake platforms
  • Provide L3 incident response, deep troubleshooting, and performance tuning for big data components
  • Ensure data integrity, replication, and capacity management within HDFS clusters
  • Develop and automate monitoring and alerting for service health and node performance
  • Collaborate with platform and data teams to onboard new workloads with optimal resource allocation
  • Apply patches, upgrades, and configuration changes to maintain platform security and stability
  • Manage Kerberos authentication, Ranger/Sentry policies, and TLS encryption
  • Work with storage, network, and security teams to optimize throughput and access controls
  • Maintain cluster documentation and share knowledge across teams
What we offer:
  • Opportunity to grow your expertise
  • Certified Top Employer Europe 2025
Employment Type: Fulltime

Technology Services Engineer – Data Protection & Disaster Recovery

Immediate need for a Data Protection & Disaster Recovery Technical Services Engi...
Location:
United States, Alpharetta, Georgia
Salary:
Not provided
Tier4 Group
Expiration:
Until further notice
Requirements:
  • 2+ years in an MSP setting focused on backup/DR and Windows server environments
  • Deep Veeam proficiency
  • Solid grounding in Windows Server/AD, virtualization (preferably Hyper-V, working knowledge of VMware), storage (SMB/NFS, iSCSI), networking basics, and change control
  • PowerShell and basic API/JSON skills to automate deployments, checks, and reports
  • Security & compliance mindset: RBAC/least privilege, MFA, encryption in transit/at rest, audit artifacts for SOC 2/HIPAA
  • Excellent documentation and incident communications
  • Willing to support maintenance windows/on-call
Job Responsibility:
  • Own backup, restore, and resiliency outcomes for all MSP clients
  • Act as the primary technical liaison for backup/DR platforms and service delivery
  • Veeam platform ownership: design, configure, and maintain Veeam Backup & Replication (SOBR, backup copy, replication, Instant Recovery, SureBackup labs)
  • Manage repositories, retention, encryption, and job health
  • Immutable off-site copies: build and operate (bucket policies, retention/immutability, lifecycle/usage controls) as the off-site tier
  • Monitoring & compliance reporting: implement and tune end-to-end success/failure monitoring, alerting/escalation, daily health checks, and compliance evidence packs
  • 3-2-1 architectures: design and run three-copy / two-media / one off-site strategies using NAS appliances (QNAP/Synology) for local copy and off-site
  • Document RPO/RTO per workload
  • Recovery testing & documentation: execute regular restore drills (file/VM/app-item, Instant Recovery, SureBackup verification), record results, and maintain DR runbooks with clear owners and contact trees
  • Incident response & escalation: lead backup/restore and DR events (containment, comms, status cadence, executive updates), perform RCA, and drive corrective and preventive actions
What we offer:
  • Competitive salary
  • Comprehensive benefits (medical, dental, vision, life, disability, 401(k) match)
  • Robust PTO
Employment Type: Fulltime

Senior Full Stack Java Developer

Citi is looking for a Senior Full Stack Java Developer to join the FX Data Analy...
Location:
United Kingdom, London
Salary:
Not provided
Citi
Expiration:
Until further notice
Requirements:
  • Master’s degree or above (or equivalent education) in a STEM discipline
  • Proven experience in software engineering and development, and a strong understanding of computer systems and how they operate
  • Hands-on experience in Java, Spark, Scala (or Java)
  • Production-scale hands-on experience writing data pipelines using Spark or any other distributed real-time or batch processing framework
  • Strong skill set in SQL or databases
  • Strong understanding of messaging technologies like Kafka, Solace, MQ, etc.
  • Experience writing production-scale applications that use caching technologies
  • Understanding of data virtualization
  • Production management (L3 support) experience
Job Responsibility:
  • Engineer data and analytics pipelines using modern, cloud-native technologies and CI/CD workflows, focusing on consolidation, automation, and scalability
  • Collaborate with stakeholders across sales and trading to understand data needs, translate them into impactful data-driven solutions, and deliver these in partnership with technology
  • Develop and integrate functionality to ensure adherence with best practices in terms of data management, need-to-know (NTK), and data governance
  • Contribute to shaping and executing the overall data strategy for FX in collaboration with the existing team and senior stakeholders
  • Work closely with FX desks to understand requirements and translate them into simple and efficient designs
  • Interact closely with Traders and Quants to understand new requirements for applications across the platform
  • Design, develop, and test new features in the applications
  • Continually improve the software development lifecycle and quality of the product
  • Help deliver large-scale projects through hands-on development and technical leadership
  • Provide 3rd-line support of the production system (dedicated 24h support teams handle 1st and 2nd line)
What we offer:
  • 27 days annual leave (plus bank holidays)
  • A discretionary annual performance-related bonus
  • Private medical care and life insurance
  • Employee assistance program
  • Pension plan
  • Paid parental leave
  • Special discounts for employees, family, and friends
  • Access to an array of learning and development resources
Employment Type: Fulltime

Data Engineer II

The Data Engineer is responsible for managing, operating, and supporting cloud-b...
Location:
India, Gurgaon
Salary:
Not provided
Rackspace
Expiration:
Until further notice
Requirements:
  • Strong hands-on experience with: Azure Data Lake Storage
  • Azure Databricks
  • Azure Data Factory
  • Azure Synapse Analytics
  • Snowflake
  • Azure Event Hub
  • Tidal Scheduler
  • Pyramid, Collibra
  • Power BI, Tableau, MicroStrategy (MSTR), Alteryx
  • Ability to read, write, and troubleshoot SQL queries and stored procedures
Job Responsibility:
  • Resolve data pipeline issues and perform proactive monitoring for sensitive and critical batch processes
  • Conduct Root Cause Analysis (RCA) and retrospectives for incidents and operational issues
  • Create, update, and maintain operational runbooks and manage entitlements as required
  • Implement configuration changes to batch processes and underlying cloud infrastructure
  • Manually control and manage pipeline executions during scheduled maintenance windows
  • Perform on-demand operational changes based on instructions provided by the Data Engineering (L3) team
  • Handle user service requests
  • For non-runbook-based requests, coordinate effectively with the L3 team
  • Participate in Disaster Recovery (DR) testing activities (twice a year)
  • Support development and production data synchronization activities
Employment Type: Fulltime

Senior AI Software Developer

The Senior AI Engineer owns end-to-end delivery of AI features—from design to pr...
Location:
Not provided
Salary:
Not provided
Hewlett Packard Enterprise
Expiration:
Until further notice
Requirements:
  • Bachelor's or master’s degree in computer science, engineering, data science, machine learning, artificial intelligence, or closely related quantitative discipline
  • Typically, 7-10 years’ experience
  • LLMs & Agents: Prompt engineering, function/tool calling, orchestration frameworks, RAG
  • ML/DS: Evaluation metrics (precision/recall, BLEU/ROUGE where relevant), error analysis
  • Data/RAG: Embeddings, similarity (cosine/IP), chunking, rerankers, vector DB operations
  • Backend: Python (FastAPI/Flask), microservices patterns
  • MLOps/Infra: Docker, Kubernetes, CI/CD, artifact management, GPU scheduling
  • Observability: Metrics/logging/tracing, dashboards, automated evaluation pipelines
  • Frameworks: PyTorch/TensorFlow, Hugging Face, LangChain/LlamaIndex
  • Data: Pandas, SQL/NoSQL, Parquet/Arrow, Kafka/queues
Job Responsibility:
  • Translate high-level designs into clear component contracts, APIs, and service boundaries
  • Implement LLM integrations, RAG pipelines, agents, tool/function calling, and prompt strategies
  • Own feature delivery for sprints/releases
  • Maintain high code quality and documentation
  • Fine-tune models when needed
  • Design evaluation harnesses and metrics
  • Build A/B testing setups
  • Track accuracy, latency, robustness, and task success rates
  • Conduct error analysis
  • Iterate using feedback loops and prompt refinement
What we offer:
  • Health & Wellbeing
  • Personal & Professional Development
  • Unconditional Inclusion

Engineering Manager

Lead and manage the Avis Budget Group India Data Platform Engineering team, prov...
Location:
India, Bangalore
Salary:
Not provided
Avis Budget Group
Expiration:
Until further notice
Requirements:
  • 13+ years of experience in data platform engineering, including 5+ years in a people-management role
  • Advanced expertise in AWS services (RDS, Redshift, DMS, Lambda), OCI cloud data services and infrastructure-as-code
  • Proficiency with IBM CDC replication technologies, Confluent Kafka and IBM Voltage encryption solutions
  • Strong background in data modelling, ETL design, database tuning and big data solutions (NoSQL, streaming)
  • Strong expertise in NoSQL database (preferably Couchbase) cluster management, including version upgrades, security patching, performance tuning, and ensuring high availability through node scaling and rebalancing
  • Hands-on experience with Python/Shell scripting, Terraform, CloudFormation and Jenkins-based CI/CD
  • Solid understanding of Linux/Unix environments, networking protocols and production system ownership
  • Excellent communication skills and ability to collaborate across IST and US East time zones
Job Responsibility:
  • Lead configuration and design of database systems and data integration services, ensuring PII, GDPR, encryption, and security best practices
  • Develop and maintain robust data pipelines on AWS and OCI clouds for ingestion, transformation and storage
  • Implement and maintain IBM CDC target environments and Confluent Kafka clusters to meet strong performance requirements
  • Handle data migrations using AWS DMS, ensuring data encryption at rest and in motion with IBM Voltage
  • Drive POCs and rollout of cloud-based solutions with infrastructure-as-code (Terraform/CloudFormation) and CI/CD pipelines
  • Oversee data modelling standards, metadata management, and data quality practices across the platform
  • Provide L3 support for critical production systems, enhance observability stacks (Dynatrace, AWS CloudWatch) and enforce operational excellence
  • Manage, mentor and coach a team of Data Platform Engineers, fostering a collaborative environment for innovation and growth
  • Engage with global data teams and stakeholders to deliver secure, cost-optimized, and high-impact platform solutions
Employment Type: Fulltime