
Senior Data Engineer (DATABRICKS)

Mastercard

Location:
Ireland, Dublin 18

Contract Type:
Not provided

Salary:
Not provided

Job Description:

As a Senior Databricks Administrator, you will be responsible for the setup, configuration, administration, and optimization of the Databricks Platform on AWS. This role will play a critical part in managing secure, scalable, and high-performing Databricks environments, with a strong focus on governance, user access management, cost optimization, and platform operations. You will collaborate closely with engineering, infrastructure, and compliance teams to ensure that the Databricks platform meets enterprise data and regulatory requirements.

Job Responsibility:

Databricks Platform Administration (AWS):
  • Provision, configure, and manage Databricks Workspaces across multiple environments (Dev, UAT, Prod)
  • Define and enforce cluster policies, compute tagging, and usage controls to optimize cost and resource utilization
  • Administer Unity Catalog, including creation and management of catalogs, schemas, access control, and data lineage
  • Manage SCIM integrations for user/group provisioning with IdPs like Okta or Azure/AWS SSO
  • Configure and maintain CI/CD pipelines for notebook deployment and workflow promotion using GitHub, GitLab, or similar tools
Security, Governance & Compliance:
  • Implement and enforce role-based access control (RBAC) and fine-grained data permissions using Unity Catalog and Lake Formation
  • Ensure auditability and lineage tracking for compliance with Open Banking, GDPR, and PSD2 regulations
  • Configure and manage token scopes, secrets management, and credential passthrough for secure access to underlying data
  • Work with InfoSec and compliance teams to ensure platform security aligns with regulatory frameworks
Monitoring, Support & Troubleshooting:
  • Monitor workspace performance, cluster health, job execution, and user activity using Databricks native tools, CloudWatch, and third-party integrations
  • Provide L2/L3 support for Databricks usage issues, including notebook errors, job failures, and workspace anomalies
  • Maintain operational runbooks, automation scripts, and alerting mechanisms for platform health and governance events
Automation & Best Practices:
  • Build and manage Terraform/IaC scripts for environment provisioning and infrastructure consistency
  • Define naming conventions, resource tagging standards, and workspace governance guidelines
  • Promote standardization of reusable notebook templates, shared cluster configurations, and workflow orchestrations
  • Drive internal knowledge sharing, onboarding, and platform enablement sessions
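For illustration, the cluster-policy and tagging duties above can be sketched in Python. The policy keys below (fixed values, ranges, `autotermination_minutes`) follow the Databricks cluster-policy definition schema; the tag name `CostCenter` and the specific limits are assumptions chosen for the example, not values from this posting.

```python
import json

def build_cluster_policy(cost_center: str, max_idle_minutes: int = 60) -> str:
    """Compose a Databricks cluster-policy definition (JSON) that pins a
    cost-allocation tag and caps auto-termination. Key names follow the
    Databricks cluster-policy schema; tag name and limits are illustrative."""
    definition = {
        # Force a fixed cost-allocation tag on every cluster under the policy
        "custom_tags.CostCenter": {"type": "fixed", "value": cost_center},
        # Require auto-termination within a bounded range to limit idle spend
        "autotermination_minutes": {
            "type": "range",
            "minValue": 10,
            "maxValue": max_idle_minutes,
            "defaultValue": 30,
        },
        # Keep autoscaling clusters right-sized
        "autoscale.max_workers": {"type": "range", "maxValue": 8},
    }
    return json.dumps(definition, indent=2)

policy_json = build_cluster_policy("data-platform", max_idle_minutes=45)
print(policy_json)
```

In practice the resulting JSON would be registered via the Databricks cluster-policies API or Terraform, so that every workspace cluster inherits the tagging and cost controls.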

Requirements:

  • 6+ years of experience in Databricks administration on AWS or multi-cloud environments
  • Deep understanding of Databricks workspace architecture, Unity Catalog, and cluster configuration best practices
  • Strong experience in managing IAM policies, SCIM integration, and access provisioning workflows
  • Hands-on experience with monitoring, cost optimization, and governance of large-scale Databricks deployments
  • Hands-on experience with infrastructure-as-code (Terraform) and CI/CD pipelines
  • Experience with ETL orchestration and collaboration with engineering teams (Databricks Jobs, Workflows, Airflow)
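As a minimal sketch of the cost-optimization and governance work the requirements describe, the snippet below aggregates compute cost per cost-center tag. The record shape (`tags`, `dbus`, `rate`) is a simplified, hypothetical stand-in for a billing/usage export row, not an actual Databricks schema.

```python
from collections import defaultdict

def cost_by_tag(usage_records, tag_key="CostCenter"):
    """Sum estimated cost (DBUs x rate) per value of a cost tag.
    Records without the tag are grouped under 'untagged' so that
    untagged spend is visible for governance follow-up."""
    totals = defaultdict(float)
    for rec in usage_records:
        tag = rec.get("tags", {}).get(tag_key, "untagged")
        totals[tag] += rec["dbus"] * rec["rate"]
    return dict(totals)

# Hypothetical usage rows for demonstration
records = [
    {"tags": {"CostCenter": "analytics"}, "dbus": 120.0, "rate": 0.55},
    {"tags": {"CostCenter": "ml"}, "dbus": 40.0, "rate": 0.55},
    {"tags": {}, "dbus": 10.0, "rate": 0.55},
]
print(cost_by_tag(records))
```

A report like this, fed by real billing exports, is one common way to surface untagged or over-provisioned compute during cost reviews.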

Nice to have:

  • Understanding of Delta Lake, Lakehouse architecture, and data governance patterns
  • Experience with AWS Glue and with database technologies, both SQL (Amazon Aurora) and NoSQL (MongoDB/DynamoDB)
  • Exposure to regulated industries such as banking, fintech, or healthcare
  • Experience with incident response, audit logging, and security remediation

Additional Information:

Job Posted:
February 01, 2026

Employment Type:
Full-time

Similar Jobs for Senior Data Engineer (DATABRICKS)

Senior Data Engineering Manager

Data is a big deal at Atlassian. We ingest billions of events each month into ou...
Location:
United States, San Francisco
Salary:
168700.00 - 271100.00 USD / Year
Atlassian
Expiration Date:
Until further notice
Requirements:
  • stellar people management skills and experience in leading an agile software team
  • thrive when developing phenomenal people, not just great products
  • worked closely with Data Science, analytics, and platform teams
  • expertise in building and maintaining high-quality components and services
  • able to drive technical excellence, pushing for innovation and quality
  • at least 10 years of experience in a software development role as an individual contributor
  • 4+ years of people management experience
  • deep understanding of data challenges at scale and the surrounding ecosystem
  • experience with solution building and architecting with public cloud offerings such as Amazon Web Services, DynamoDB, ElasticSearch, S3, Databricks, Spark/Spark-Streaming, GraphDatabases
  • experience with Enterprise Data architectural standard methodologies
Job Responsibility:
  • build and lead a team of data engineers through hiring, coaching, mentoring, and hands-on career development
  • provide deep technical guidance in a number of aspects of data engineering in a scalable ecosystem
  • champion cultural and process improvements through engineering excellence, quality and efficiency
  • work with close counterparts in other departments as part of a multi-functional team, and build this culture in your team
What we offer:
  • health coverage
  • paid volunteer days
  • wellness resources
  • Full-time

Senior Data Engineer

Collaborate with engineering and TPM leaders, developers, and process engineers ...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • BS in Computer Science or equivalent experience with 8+ years as a Senior Data Engineer or similar role
  • 10+ Years of progressive experience in building scalable datasets and reliable data engineering practices.
  • Proficiency in Python, SQL, and data platforms like DataBricks
  • Proficiency in relational databases and query authoring (SQL).
  • Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements.
  • Experience building and scaling experimentation practices, statistical methods, and tools in a large scale organization
  • Excellence in building scalable data pipelines using Spark (SparkSQL) with Airflow scheduler/executor framework or similar scheduling tools.
  • Expert experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka).
  • Understanding of Data Engineering tools/frameworks and standards to improve the productivity and quality of output for Data Engineers across the team.
  • Well versed in modern software development practices (Agile, TDD, CICD)
Job Responsibility:
  • Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting objectives of incident prevention and reducing detection, mitigation, and communication times.
  • Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.
  • Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.
  • Serve as the data domain expert, mastering the details of our incident management infrastructure.
  • Take full ownership of problems from ambiguous requirements through rapid iterations.
  • Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues.
  • Cultivate strong relationships between teams that produce data and those that build insights.
  • Full-time

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timelines
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
  • Full-time

Senior Data Engineer

At Ingka Investments (Part of Ingka Group – the largest owner and operator of IK...
Location:
Netherlands, Leiden
Salary:
Not provided
IKEA
Expiration Date:
Until further notice
Requirements:
  • Formal qualifications (BSc, MSc, PhD) in computer science, software engineering, informatics or equivalent
  • Minimum 3 years of professional experience as a (Junior) Data Engineer
  • Strong knowledge in designing efficient, robust and automated data pipelines, ETL workflows, data warehousing and Big Data processing
  • Hands-on experience with Azure data services like Azure Databricks, Unity Catalog, Azure Data Lake Storage, Azure Data Factory, DBT and Power BI
  • Hands-on experience with data modeling for BI & ML for performance and efficiency
  • The ability to apply such methods to solve business problems using one or more Azure Data and Analytics services in combination with building data pipelines, data streams, and system integration
  • Experience in driving new data engineering developments (e.g., applying cutting-edge data engineering methods to improve the performance of data integration, or using new tools to improve data quality)
  • Knowledge of DevOps practices and tools including CI/CD pipelines and version control systems (e.g., Git)
  • Proficiency in programming languages such as Python, SQL, PySpark and others relevant to data engineering
  • Hands-on experience to deploy code artifacts into production
Job Responsibility:
  • Contribute to the development of D&A platform and analytical tools, ensuring easy and standardized access and sharing of data
  • Subject matter expert for Azure Databricks, Azure Data Factory, and ADLS
  • Help design, build and maintain data pipelines (accelerators)
  • Document and make the relevant know-how & standard available
  • Ensure pipelines are consistent with relevant digital frameworks, principles, guidelines, and standards
  • Support in understanding the needs of Data Product Teams and other stakeholders
  • Explore ways to create better visibility of data quality and data assets on the D&A platform
  • Identify opportunities for data assets and D&A platform toolchain
  • Work closely together with partners, peers and other relevant roles like data engineers, analysts or architects across IKEA as well as in your team
What we offer:
  • Opportunity to develop on a cutting-edge Data & Analytics platform
  • Opportunities to have a global impact on your work
  • A team of great colleagues to learn together with
  • An environment focused on driving business and personal growth together, with focus on continuous learning
  • Full-time

Senior Data Engineer

Inetum Polska is part of the global Inetum Group and plays a key role in driving...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Expert-level proficiency in Databricks, Apache Spark, SQL, and Python/Scala
  • Extensive experience with cloud data platforms (AWS, Azure, or GCP) and big data technologies
  • Strong understanding of data architecture, data warehousing, and lakehouse concepts
  • Experience with real-time data processing and streaming technologies (Kafka, Delta Live Tables, Event Hubs)
  • Proficiency in automation, CI/CD, and Infrastructure as Code (Terraform, Bicep)
  • Leadership skills with the ability to drive strategic technical decisions
  • 6+ years of experience in data engineering, with a track record of designing and implementing complex data solutions
Job Responsibility:
  • Architect and implement enterprise-grade data solutions using Databricks, Apache Spark, and cloud services
  • Lead data engineering initiatives, setting best practices and guiding technical decisions
  • Design, optimize, and scale data pipelines for performance, reliability, and cost efficiency
  • Define data governance policies and implement security best practices
  • Evaluate emerging technologies and recommend improvements to existing data infrastructure
  • Mentor junior and mid-level engineers, fostering a culture of continuous learning
  • Collaborate with cross-functional teams to align data strategy with business objectives
What we offer:
  • Flexible working hours
  • Hybrid work model
  • Cafeteria system
  • Generous referral bonuses
  • Additional revenue sharing opportunities
  • Ongoing guidance from a dedicated Team Manager
  • Tailored technical mentoring
  • Dedicated team-building budget
  • Opportunities to participate in charitable initiatives and local sports programs
  • Supportive and inclusive work culture
  • Full-time

Senior Data Engineer

Atlassian is looking for a Senior Data Engineer to join our Data Engineering tea...
Location:
India, Bengaluru
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • A BS in Computer Science or equivalent experience
  • At least 7 years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
  • Strong programming skills (Python, Java or Scala preferred)
  • Experience writing SQL, structuring data, and data storage practices
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience building data pipelines, platforms
  • Experience with Databricks, Spark, Hive, Airflow and other streaming technologies to process incredible volumes of streaming data
  • Experience in modern software development practices (Agile, TDD, CICD)
  • Strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues, anomalies
Job Responsibility:
  • Help our stakeholder teams ingest data faster into our data lake
  • Find ways to make our data pipelines more efficient
  • Come up with ideas to help instigate self-serve data engineering within the company
  • Apply your strong technical experience building highly reliable services on managing and orchestrating a multi-petabyte scale data lake
  • Take vague requirements and transform them into solid solutions
  • Solve challenging problems, where creativity is as crucial as your ability to write code and test cases
What we offer:
  • Health and wellbeing resources
  • Paid volunteer days
  • Full-time

Senior Data Engineer

Senior Data Engineer role driving Circle K's cloud-first strategy to unlock the ...
Location:
India, Gurugram
Salary:
Not provided
Circle K
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Engineering, Computer Science or related discipline
  • Master's Degree preferred
  • 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment
  • 5+ years of experience with setting up and operating data pipelines using Python or SQL
  • 5+ years of advanced SQL Programming: PL/SQL, T-SQL
  • 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization
  • Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads
  • 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data
  • 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions
  • 5+ years of experience in defining and enabling data quality standards for auditing, and monitoring
Job Responsibility:
  • Collaborate with business stakeholders and other technical team members to acquire and migrate data sources
  • Determine solutions that are best suited to develop a pipeline for a particular data source
  • Develop data flow pipelines to extract, transform, and load data from various data sources
  • Efficient in ETL/ELT development using Azure cloud services and Snowflake
  • Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines
  • Provide clear documentation for delivered solutions and processes
  • Identify and implement internal process improvements for data management
  • Stay current with and adopt new tools and applications
  • Build cross-platform data strategy to aggregate multiple sources
  • Proactive in stakeholder communication, mentor/guide junior resources
  • Full-time