Databricks Engineer - GCP Cloud

Wissen

Location:
India, Bangalore South

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are looking for a bright and dynamic engineer, motivated and able to work independently as well as in partnership with IT and Business teams spread across the globe. The candidate needs to be an exceptionally strong Python and SQL programmer with hands-on experience in GCP-native data technologies including BigQuery, Dataproc, Cloud Composer, and Datastream. Besides technical skills, we are looking for a candidate with a strong sense of ownership and the ability to work in a diverse, cross-functional team spanning Engineering, Research, DataOps, and Compliance.

Job Responsibility:

  • Build and maintain scalable, distributed, fault-tolerant data pipelines on GCP, including BigQuery-based lakehouse layers and Dataproc-driven Delta Lake workflows
  • Actively participate in meetings with various stakeholders across data engineering, compliance, and business teams globally
  • Understand market data processing and transformation needs
  • Build pipelines to acquire, normalise, transform, and release large volumes of financial data through the OMDP data factory
  • Design and implement bitemporal data models (valid-time + system-time) on BigQuery to support certified, regulatory-grade time-series datasets
  • Build, use, and maintain software testing frameworks (unit / non-regression / user acceptance) for data pipelines and transformation logic
  • Take complete ownership of solutions and assigned tasks, including ingestion pipelines, QA workflows, correction management, and audit trail implementation
  • Work in a collaborative manner with other team members and contribute to shared platform services rather than vertical-specific implementations
  • Have business acumen to understand financial concepts around reference data related to equities and other asset classes
  • Support teams across data and technology in implementing AI solutions and integrating their services with MSCI's data science products and platforms, including AI-assisted ingestion, anomaly detection, and semantic search over the lakehouse using Vertex AI
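
The bitemporal responsibility above (valid-time plus system-time) can be sketched in plain Python. This is a hypothetical illustration of the pattern, not MSCI's actual schema; the `Row` fields and `as_of` helper are invented for the example:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical bitemporal record: valid-time says when the fact was true
# in the real world; system-time says when the database believed it.
@dataclass
class Row:
    key: str
    value: float
    valid_from: date
    valid_to: date   # exclusive
    sys_from: date
    sys_to: date     # exclusive; date.max means "still current"

def as_of(rows, key, valid_on, known_on):
    """Return the value for `key` valid on `valid_on`,
    as the system knew it on `known_on`."""
    for r in rows:
        if (r.key == key
                and r.valid_from <= valid_on < r.valid_to
                and r.sys_from <= known_on < r.sys_to):
            return r.value
    return None

# A price of 100 recorded on Jan 2, corrected to 101 on Jan 10.
rows = [
    Row("ACME", 100.0, date(2026, 1, 1), date.max, date(2026, 1, 2), date(2026, 1, 10)),
    Row("ACME", 101.0, date(2026, 1, 1), date.max, date(2026, 1, 10), date.max),
]

print(as_of(rows, "ACME", date(2026, 1, 5), date(2026, 1, 5)))   # 100.0 (pre-correction view)
print(as_of(rows, "ACME", date(2026, 1, 5), date(2026, 1, 15)))  # 101.0 (post-correction view)
```

Keeping both time axes is what makes certified, regulatory-grade datasets reproducible: the pre-correction view remains queryable forever, which an ordinary overwrite would destroy.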

Requirements:

  • 6-8 years of experience in data engineering
  • Proficient in Python programming — data pipeline development, transformation logic, and automation scripts
  • Proficient in data query and analysis using SQL, with strong hands-on experience in BigQuery — partitioning, clustering, materialised views, and time-series query patterns at scale
  • Hands-on experience building and scheduling pipelines using Cloud Composer (Apache Airflow) — DAG authoring, SLA alerting, retry logic, and dependency management
  • Working knowledge of Dataproc (Apache Spark) — batch ingestion, Delta Lake merge operations, and incremental data processing
  • Proficient in AI-assisted development tools such as GitHub Copilot, Cursor, or others for accelerating code generation and enhancing developer productivity
  • Code versioning and collaboration using Git — branching strategies, pull request workflows, and pipeline-as-code practices
  • Familiarity with REST APIs — consuming external data vendor APIs and building service-layer integrations
  • Familiarity with GCP cloud technologies — Cloud Storage, Pub/Sub, Datastream, Cloud Monitoring, IAM, and VPC Service Controls
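
The retry logic called out in the Cloud Composer bullet follows a standard exponential-backoff pattern. Here is a minimal pure-Python sketch; the `with_retries` helper and `flaky_extract` task are invented for illustration (in Airflow itself you would configure `retries` and `retry_delay` on the task instead):

```python
import time

def with_retries(fn, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponential backoff on failure --
    the same per-task retry behaviour a Composer/Airflow DAG configures.
    Illustration only, not an Airflow API."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))  # back off 1s, 2s, 4s, ...

# Usage: a hypothetical extract that succeeds on the third call.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "rows"

print(with_retries(flaky_extract, sleep=lambda s: None))  # prints "rows" after two retries
```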

Nice to have:

  • Basic knowledge of data manipulation and analysis libraries — pandas, PySpark, or equivalent
  • Basic knowledge of columnar storage, SQL-based querying, and time-series analytics (ClickHouse or equivalent)
  • Familiarity with Dataplex for data discovery, lineage, policy tagging, and data quality rule management
  • Understanding of Change Data Capture (CDC) patterns using Datastream for replicating transactional data into BigQuery
  • Understanding of bitemporal data modeling concepts (valid-time and system-time) and the challenges of implementing them within BigQuery's append-optimised design
  • Understanding of financial reference data — equities, fixed income identifiers, corporate actions, or index composition data
  • Familiarity with BigQuery cost management — slot reservations, query cost controls, and workload isolation using reservations and assignments
  • Exposure to CI/CD pipelines and infrastructure-as-code using Terraform for data platform deployments on GCP
  • Prior experience or projects involving LLMs and Agentic AI — particularly using Vertex AI for AI-assisted data quality, anomaly detection, semantic search, or natural language querying over structured datasets — is a strong plus
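
The CDC pattern mentioned above reduces to replaying ordered change events against a keyed target table, which Datastream plus a BigQuery MERGE would do at scale. A toy sketch under that assumption (the event shape and `apply_cdc` helper are invented for illustration):

```python
# Hypothetical CDC apply step: change events (UPSERT/DELETE), ordered by
# timestamp, are merged into a keyed target -- the same semantics a
# BigQuery MERGE applies to Datastream output, here in plain Python.
def apply_cdc(target: dict, events: list) -> dict:
    """Apply ordered change events to a keyed target table."""
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["op"] == "DELETE":
            target.pop(ev["key"], None)
        else:  # UPSERT: insert a new key or update an existing one
            target[ev["key"]] = ev["row"]
    return target

table = {"A": {"px": 10}}
events = [
    {"ts": 1, "op": "UPSERT", "key": "B", "row": {"px": 20}},
    {"ts": 2, "op": "UPSERT", "key": "A", "row": {"px": 11}},
    {"ts": 3, "op": "DELETE", "key": "B"},
]
print(apply_cdc(table, events))  # {'A': {'px': 11}}
```

Ordering by timestamp matters: applying the DELETE before the UPSERT of "B" would resurrect a deleted row.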

Additional Information:

Job Posted:
April 20, 2026

Employment Type:
Full-time

Similar Jobs for Databricks Engineer - GCP Cloud

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest

Salary:
Not provided

Inetum

Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Expertise in implementing and optimizing the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Efficient implementation of the Lakehouse architecture on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
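
The Medallion flow this listing describes (Bronze raw, Silver cleaned, Gold aggregated) can be illustrated with plain Python lists in place of Delta tables; the quality rules and field names here are invented for the example:

```python
# Bronze: raw ingested records, duplicates and nulls included.
bronze = [
    {"id": 1, "amt": "10.5"},
    {"id": 1, "amt": "10.5"},  # duplicate
    {"id": 2, "amt": None},    # fails quality check
]

def to_silver(rows):
    """Deduplicate by id and type-cast; drop records failing quality checks."""
    seen, out = set(), []
    for r in rows:
        if r["amt"] is not None and r["id"] not in seen:
            seen.add(r["id"])
            out.append({"id": r["id"], "amt": float(r["amt"])})
    return out

def to_gold(rows):
    """Business-level aggregate served to downstream consumers."""
    return {"total_amt": sum(r["amt"] for r in rows), "count": len(rows)}

silver = to_silver(bronze)
print(to_gold(silver))  # {'total_amt': 10.5, 'count': 1}
```

Each layer only ever reads from the one before it, which is what gives the architecture its data-quality and historical-tracking guarantees.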
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings
  • Full-time

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest

Salary:
Not provided

Inetum

Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • 5+ years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Expertise in implementing and optimizing the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings
  • Full-time

Principal Consulting AI / Data Engineer

As a Principal Consulting AI / Data Engineer, you will design, build, and optimi...
Location:
Australia, Sydney

Salary:
Not provided

DyFlex Solutions

Expiration Date:
Until further notice
Requirements:
  • Proven expertise in delivering enterprise-grade data engineering and AI solutions in production environments
  • Strong proficiency in Python and SQL, plus experience with Spark, Airflow, dbt, Kafka, or Flink
  • Experience with cloud platforms (AWS, Azure, or GCP) and Databricks
  • Ability to confidently communicate and present at C-suite level, simplifying technical concepts into business impact
  • Track record of engaging senior executives and influencing strategic decisions
  • Strong consulting and stakeholder management skills with client-facing experience
  • Background in MLOps, ML pipelines, or AI solution delivery highly regarded
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility:
  • Design, build, and maintain scalable data and AI solutions using Databricks, cloud platforms, and modern frameworks
  • Lead solution architecture discussions with clients, ensuring alignment of technical delivery with business strategy
  • Present to and influence executive-level stakeholders, including boards, C-suite, and senior directors
  • Translate highly technical solutions into clear business value propositions for non-technical audiences
  • Mentor and guide teams of engineers and consultants to deliver high-quality solutions
  • Champion best practices across data engineering, MLOps, and cloud delivery
  • Build DyFlex’s reputation as a trusted partner in Data & AI through thought leadership and client advocacy
What we offer:
  • Work with SAP’s latest technologies on cloud as S/4HANA, BTP and Joule, plus Databricks, ML/AI tools and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with opportunities to lead large-scale client programs
  • Exposure to diverse industries and client environments, including executive-level engagement
  • Full-time

Cloud Engineer - GCP & Databricks

As a Cloud Engineer, you are passionate about experience innovation and eager to...
Location:
India, Bengaluru

Salary:
Not provided

Valtech

Expiration Date:
Until further notice
Requirements:
  • 6+ years of hands-on professional experience in cloud data engineering or GCP platform roles
  • Bachelor's degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
  • BigQuery with advanced SQL, partitioning, clustering, and cost optimization
  • Cloud Storage, Cloud Functions, Cloud Run
  • Dataflow (Apache Beam) for batch and streaming pipelines
  • Cloud Composer / Airflow for orchestration
  • Pub/Sub for event-driven architectures
  • Vertex AI exposure for model serving or pipelines
  • IAM, VPC, organization policies, and security governance
  • Terraform for infrastructure as code
Job Responsibility:
  • Design GCP-native architectures using BigQuery, Dataflow, Cloud Composer (Airflow), Pub/Sub, Cloud Storage, Vertex AI, and Cloud Run
  • Build and maintain batch and streaming data pipelines using medallion architecture (Bronze, Silver, Gold)
  • Implement infrastructure as code using Terraform
  • Manage deployments through CI/CD pipelines such as Cloud Build
  • Define and enforce GCP landing zone standards including IAM, VPC, Shared VPC, Private Service Connect, and organization policies
  • Build end-to-end Databricks Lakehouse solutions on GCP
  • Design Delta Lake tables with proper governance using Unity Catalog
  • Develop and optimise PySpark and SQL workloads for large-scale transformations
  • Configure Databricks clusters, job scheduling, autoscaling, and cost controls
  • Implement Databricks Workflows and Asset Bundles for orchestration and CI/CD
What we offer:
  • Flexibility, with remote and hybrid work options (country-dependent)
  • Career advancement, with international mobility and professional development programs
  • Learning and development, with access to cutting-edge tools, training and industry experts
  • Full-time

Senior Data Engineer

Our client in the financial services space is looking for a Senior Data Enginee...
Location:
United Kingdom

Salary:
Not provided

Orbis Consultants

Expiration Date:
Until further notice
Requirements:
  • Actuarial and Asset Management Excel Files
  • Investment & Financial domain knowledge: has worked within the operations team of an asset management department and understands asset data, such as financial securities
  • SQL: Data manipulation and querying
  • Python: Experience in libraries like Pandas and NumPy
  • Cloud Platform: Building on GCP using BigQuery
  • Familiarity with tools like DBT and Databricks
  • Experience with investment-related projects or working in the reinsurance or life insurance domain

Vice President Databricks Practice

We are seeking a dynamic and results-driven Vice President Databricks Practice t...
Location:
United States, Frisco

Salary:
Not provided

WorldLink

Expiration Date:
Until further notice
Requirements:
  • 3-5 years of current experience using Databricks platform
  • Bachelor's degree and/or equivalent experience required
  • Databricks certification(s)
  • Prior experience working with Databricks in a similar role
  • Proven track record of Databricks implementation, including solutions selling and upscaling products
  • Proven success delivering solutions in collaboration with stakeholders and high performing teams
  • Strong project management experience in developing and executing strategic plans that drive growth and revenue
  • Self-motivated individual with the ability to thrive in a team-based or independent environment
  • Detail-oriented with excellent organizational skills and a methodical approach to solutions selling
  • Ability to work in a fast-paced environment
Job Responsibility:
  • Define and execute the vision and go-to-market strategy for developing our Databricks practice
  • Build and scale service offerings that integrate Databricks with data, AI, and cloud transformation solutions
  • Develop repeatable solutions, accelerators, and workshops, and formulate joint GTM strategies
  • Serve as a trusted advisor to C-suite and client executives, shaping their data, analytics, and AI initiatives using Databricks
  • Articulate WorldLink’s industry value proposition, incorporating Databricks elements, and demonstrate the ability to develop efficient implementations using Databricks
  • Strengthen and expand our alliance with Databricks, cloud hyperscalers (AWS, Azure, GCP), and complementary technology partners
  • Drive revenue growth through client acquisition, expansion of existing accounts, and joint GTM with Databricks and cloud partners
  • Collaborate with a high-performing team of data engineers, AI/ML engineers, architects, and consultants
  • Foster a culture of innovation, continuous learning, and delivery excellence
  • Possess continued interest in expanding organizational expertise, selling solutions and upscaling products using the Databricks platform
What we offer:
  • Medical Plans
  • Dental Plans
  • Vision Plan
  • Life & Accidental Death & Dismemberment
  • Short-Term Disability
  • Long-Term Disability
  • Critical Illness/ Accident/ Hospital Indemnity/ Identity Theft Protection
  • 401(k)
  • Full-time

Data Engineer

We are looking for an experienced Data Engineer with deep expertise in Databrick...
Location:
Not provided

Salary:
Not provided

Coherent Solutions

Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
  • Strong proficiency in Python and/or Scala for data processing
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
  • Experience with data governance, security, and compliance in cloud environments
  • Excellent problem-solving, communication, and leadership skills
  • English: Upper Intermediate level or higher
Job Responsibility:
  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow)
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
  • Develop and maintain documentation, data models, and technical standards
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments
  • Stay current with emerging trends and technologies in data engineering and Databricks ecosystem
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Associate Data Engineer

This role offers a unique opportunity to work at the intersection of a pioneerin...
Location:
India, Mumbai

Salary:
Not provided

Blenheim Chalcot

Expiration Date:
Until further notice
Requirements:
  • 1–2 years of experience in data engineering or related fields
  • Strong foundational knowledge of data engineering principles
  • Proficiency in Python and SQL
  • Familiarity with Databricks (preferred)
  • Experience with PySpark and Google Cloud Platform (GCP)
  • Excellent problem-solving and communication skills
  • Ability to work independently and take ownership of tasks
Job Responsibility:
  • Support the development, optimization, and maintenance of scalable data pipelines
  • Collaborate with cross-functional teams to ensure data integrity and accessibility
  • Assist in data ingestion, transformation, and integration from various sources
  • Contribute to documentation and best practices for data engineering workflows
  • Participate in code reviews and continuous improvement initiatives
What we offer:
  • Be part of the UK's Leading Digital Venture Builder
  • Opportunity to learn from and collaborate with diverse talent across BC
  • Exposure to GenAI-enabled ventures and cutting-edge technologies
  • A fun and open, cricket-obsessed atmosphere – we own the Rajasthan Royals IPL team
  • 24 days of annual leave & 10 public holiday days
  • Private Medical for you and your immediate family & Life Insurance for yourself