Cloud Engineer - GCP & Databricks

Valtech

Location:
India, Bengaluru

Contract Type:
Employment contract

Salary:

Not provided

Job Description:

As a Cloud Engineer, you are passionate about experience innovation and eager to push the boundaries of what’s possible. You bring 6+ years of experience, a growth mindset, and a drive to make a lasting impact.

Job Responsibility:

  • Design GCP-native architectures using BigQuery, Dataflow, Cloud Composer (Airflow), Pub/Sub, Cloud Storage, Vertex AI, and Cloud Run
  • Build and maintain batch and streaming data pipelines using the medallion architecture (Bronze, Silver, Gold); a brief PySpark sketch follows this list
  • Implement infrastructure as code using Terraform
  • Manage deployments through CI/CD pipelines such as Cloud Build
  • Define and enforce GCP landing zone standards including IAM, VPC, Shared VPC, Private Service Connect, and organization policies
  • Build end-to-end Databricks Lakehouse solutions on GCP
  • Design Delta Lake tables with proper governance using Unity Catalog
  • Develop and optimise PySpark and SQL workloads for large-scale transformations
  • Configure Databricks clusters, job scheduling, autoscaling, and cost controls
  • Implement Databricks Workflows and Asset Bundles for orchestration and CI/CD
  • Advise clients on Databricks adoption and migrations from legacy or on-prem platforms
  • Lead technical workshops and requirements-gathering sessions
  • Create client-facing deliverables such as architecture diagrams, specifications, runbooks, and data dictionaries
  • Present solution designs and progress updates to both technical and business stakeholders
  • Manage technical risks, dependencies, and delivery issues
  • Actively participate in Agile ceremonies and sprint delivery
  • Implement data quality checks using tools such as Great Expectations, dbt tests, or Delta constraints
  • Support metadata management and data catalogue initiatives using Dataplex or Unity Catalog
  • Ensure GDPR and data residency requirements are embedded in solution design
  • Support RFP and RFI responses with solution design and effort estimation
  • Contribute to internal documentation, runbooks, and knowledge-sharing sessions
  • Stay up to date with GCP and Databricks product developments and recommend new capabilities to clients
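
To make the medallion and data-quality responsibilities above concrete, here is a minimal bronze-to-silver sketch using the PySpark DataFrame API and Delta tables. The table names and the orders schema are hypothetical, and a Spark session with Delta Lake support (for example, a Databricks cluster) is assumed; this is an illustration, not actual project code.

    # Minimal bronze -> silver medallion step (hypothetical tables and schema).
    # Assumes a Spark session with Delta Lake support, e.g. on Databricks.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

    # Read the raw (bronze) layer as-is.
    bronze = spark.read.table("bronze.orders")

    # Cleanse and conform for the silver layer: deduplicate, cast types,
    # and drop records that fail basic quality rules.
    silver = (
        bronze.dropDuplicates(["order_id"])
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
    )
    silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

    # A Delta CHECK constraint enforces the rule on future writes: one of the
    # data-quality options the listing names alongside Great Expectations and
    # dbt tests.
    spark.sql(
        "ALTER TABLE silver.orders "
        "ADD CONSTRAINT non_negative_amount CHECK (amount >= 0)"
    )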

Requirements:

  • 6+ years of hands-on professional experience in cloud data engineering or GCP platform roles
  • Bachelor's degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
  • BigQuery with advanced SQL, partitioning, clustering, and cost optimization
  • Cloud Storage, Cloud Functions, Cloud Run
  • Dataflow (Apache Beam) for batch and streaming pipelines
  • Cloud Composer / Airflow for orchestration
  • Pub/Sub for event-driven architectures
  • Vertex AI exposure for model serving or pipelines
  • IAM, VPC, organization policies, and security governance
  • Terraform for infrastructure as code
  • Cloud Build and Artifact Registry for CI/CD
  • Looker or Looker Studio for analytics and reporting
  • Strong PySpark skills using DataFrame APIs and performance optimisation techniques
  • Delta Lake concepts including ACID transactions, time travel, and Z-ordering; a short sketch follows this list
  • Unity Catalog for governance and access control
  • Databricks Workflows and job orchestration
  • Databricks on GCP cluster configuration and tuning
  • MLflow for experiment tracking and model registry
  • dbt on Databricks or BigQuery
  • Databricks Asset Bundles and CI/CD integration
  • Python for data engineering and automation
  • Advanced SQL for analytics
  • Git-based version control and pull request workflows
  • Docker and basic containerisation concepts
  • Unit testing and data pipeline testing strategies
  • REST API integration experience
  • Google Cloud Professional Data Engineer or Professional Cloud Architect certification
  • Databricks Certified Associate Developer for Apache Spark or Databricks Certified Data Engineer certification
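
As a quick illustration of the Delta Lake concepts listed above, the sketch below shows time travel and Z-ordering against the hypothetical silver.orders table from the earlier example. Databricks-style Delta support is assumed, and the version number, timestamp, and column choice are made up.

    # Time travel: read the table as of an earlier version or timestamp.
    v0 = spark.read.option("versionAsOf", 0).table("silver.orders")
    as_of = spark.read.option("timestampAsOf", "2026-05-04").table("silver.orders")

    # Z-ordering: co-locate rows on a frequently filtered column so that
    # selective queries scan fewer data files.
    spark.sql("OPTIMIZE silver.orders ZORDER BY (order_id)")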

Nice to have:

  • Experience with AWS or Azure in addition to GCP
  • Exposure to composable commerce or content platforms
  • Salesforce Marketing Cloud or CDP integrations
  • Knowledge of Generative AI and LLM integrations using Vertex AI or Gemini APIs
  • Databricks Agent Framework or LangChain experience
  • Kafka or Confluent Cloud
  • Experience in luxury, retail, or FMCG domains

What we offer:
  • Flexibility, with remote and hybrid work options (country-dependent)
  • Career advancement, with international mobility and professional development programs
  • Learning and development, with access to cutting-edge tools, training and industry experts

Additional Information:

Job Posted:
May 05, 2026

Employment Type:
Fulltime
Work Type:
On-site work

Similar Jobs for Cloud Engineer - GCP & Databricks

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking.
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake to ensure data quality, consistency, and historical tracking
  • Efficient implementation of the Lakehouse architecture on Databricks, combining best practices from DWH and Data Lake
  • Optimize Databricks clusters, Spark operations, and Delta tables to reduce latency and computational costs
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables; see the sketch after this list
  • Implement and manage Unity Catalog for centralized data governance, data security and data lineage
  • Define and implement data quality standards and rules to maintain data integrity
  • Develop and manage complex workflows using Databricks Workflows or external tools to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes
  • Work closely with Data Scientists, Analysts, and Architects to understand business requirements and deliver optimal technical solutions
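
As a rough illustration of the streaming responsibility above, a minimal Structured Streaming flow from a bronze Delta table into a silver one could look like the sketch below. The table names and checkpoint path are hypothetical, and a Spark session with Delta support (for example, on Databricks) is assumed.

    # Incrementally stream rows from a bronze Delta table into a silver one.
    query = (
        spark.readStream.table("bronze.events")
        .writeStream.format("delta")
        .option("checkpointLocation", "/tmp/checkpoints/silver_events")
        .outputMode("append")
        .toTable("silver.events")
    )
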
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings.

Senior Databricks Data Engineer

To develop, implement, and optimize complex Data Warehouse (DWH) and Data Lakeho...
Location:
Romania, Bucharest
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Proven, expert-level experience with the entire Databricks ecosystem (Workspace, Cluster Management, Notebooks, Databricks SQL)
  • In-depth knowledge of Spark architecture (RDD, DataFrames, Spark SQL) and advanced optimization techniques
  • Expertise in implementing and managing Delta Lake (ACID properties, Time Travel, Merge, Optimize, Vacuum)
  • Advanced/expert-level proficiency in Python (with PySpark) and/or Scala (with Spark)
  • Advanced/expert-level skills in SQL and Data Modeling (Dimensional, 3NF, Data Vault)
  • Solid experience with a major Cloud platform (AWS, Azure, or GCP), especially with storage services (S3, ADLS Gen2, GCS) and networking
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a relevant technical field
  • 5+ years of experience in Data Engineering, with at least 3 years working with Databricks and Spark at scale
Job Responsibility:
  • Design and implement robust, scalable, and high-performance ETL/ELT data pipelines using PySpark/Scala and Databricks SQL on the Databricks platform
  • Implement and optimize the Medallion architecture (Bronze, Silver, Gold) using Delta Lake
  • Design and implement real-time/near-real-time data processing solutions using Spark Structured Streaming and Delta Live Tables (DLT)
  • Implement Unity Catalog for centralized data governance, fine-grained security (row/column-level security), and data lineage
  • Develop and manage complex workflows using Databricks Workflows (Jobs) or external tools (Azure Data Factory, Airflow) to automate pipelines
  • Integrate Databricks pipelines into CI/CD processes using tools like Git, Databricks Repos, and Bundles
  • Work closely with Data Scientists, Analysts, and Architects to deliver optimal technical solutions
  • Provide technical guidance and mentorship to junior developers
What we offer:
  • Full access to foreign language learning platform
  • Personalized access to tech learning platforms
  • Tailored workshops and trainings to sustain your growth
  • Medical insurance
  • Meal tickets
  • Monthly budget to allocate on flexible benefit platform
  • Access to 7 Card services
  • Wellbeing activities and gatherings

Principal Consulting AI / Data Engineer

As a Principal Consulting AI / Data Engineer, you will design, build, and optimi...
Location:
Australia, Sydney
Salary:
Not provided
DyFlex Solutions
Expiration Date:
Until further notice
Requirements:
  • Proven expertise in delivering enterprise-grade data engineering and AI solutions in production environments
  • Strong proficiency in Python and SQL, plus experience with Spark, Airflow, dbt, Kafka, or Flink
  • Experience with cloud platforms (AWS, Azure, or GCP) and Databricks
  • Ability to confidently communicate and present at C-suite level, simplifying technical concepts into business impact
  • Track record of engaging senior executives and influencing strategic decisions
  • Strong consulting and stakeholder management skills with client-facing experience
  • Background in MLOps, ML pipelines, or AI solution delivery highly regarded
  • Degree in Computer Science, Engineering, Data Science, Mathematics, or a related field
Job Responsibility:
  • Design, build, and maintain scalable data and AI solutions using Databricks, cloud platforms, and modern frameworks
  • Lead solution architecture discussions with clients, ensuring alignment of technical delivery with business strategy
  • Present to and influence executive-level stakeholders, including boards, C-suite, and senior directors
  • Translate highly technical solutions into clear business value propositions for non-technical audiences
  • Mentor and guide teams of engineers and consultants to deliver high-quality solutions
  • Champion best practices across data engineering, MLOps, and cloud delivery
  • Build DyFlex’s reputation as a trusted partner in Data & AI through thought leadership and client advocacy
What we offer:
  • Work with SAP’s latest cloud technologies, such as S/4HANA, BTP, and Joule, plus Databricks, ML/AI tools, and cloud platforms
  • A flexible and supportive work environment including work from home
  • Competitive remuneration and benefits including novated lease, birthday leave, salary packaging, wellbeing programme, additional purchased leave, and company-provided laptop
  • Comprehensive training budget and paid certifications (Databricks, SAP, cloud platforms)
  • Structured career advancement pathways with opportunities to lead large-scale client programs
  • Exposure to diverse industries and client environments, including executive-level engagement

Senior Data Engineer

Our client in the financial services space is looking for a Senior Data Enginee...
Location:
United Kingdom
Salary:
Not provided
Orbis Consultants
Expiration Date:
Until further notice
Requirements:
  • Experience working with actuarial and asset management Excel files
  • Investment & financial domain knowledge: has worked within the operations team of an asset management department and understands asset data, such as financial securities
  • SQL: Data manipulation and querying
  • Python: Experience in libraries like Pandas and NumPy
  • Cloud Platform: Building on GCP using BigQuery
  • Familiarity with tools like dbt and Databricks
  • Experience with investment-related projects or working in the reinsurance or life insurance domain

Vice President Databricks Practice

We are seeking a dynamic and results-driven Vice President Databricks Practice t...
Location:
United States, Frisco
Salary:
Not provided
WorldLink
Expiration Date:
Until further notice
Requirements:
  • 3-5 years of current experience using the Databricks platform
  • Bachelor's degree and/or equivalent experience required
  • Databricks certification(s)
  • Prior experience working with Databricks in a similar role
  • Proven track record of Databricks implementation, including solutions selling and upscaling products
  • Proven success delivering solutions in collaboration with stakeholders and high performing teams
  • Strong project management experience in developing and executing strategic plans that drive growth and revenue
  • Self-motivated individual with the ability to thrive in a team-based or independent environment
  • Detail-oriented with excellent organizational skills and a methodical approach to solutions selling
  • Ability to work in a fast-paced environment
Job Responsibility:
  • Define and execute the vision and go-to-market strategy for developing our Databricks practice
  • Build and scale service offerings that integrate Databricks with data, AI, and cloud transformation solutions
  • Develop repeatable solutions, accelerators, and workshops, and formulate joint GTM strategies
  • Serve as a trusted advisor to C-suite and client executives, shaping their data, analytics, and AI initiatives using Databricks
  • Articulate WorldLink’s industry value proposition, incorporating Databricks elements, and develop efficient implementations using Databricks
  • Strengthen and expand our alliance with Databricks, cloud hyperscalers (AWS, Azure, GCP), and complementary technology partners
  • Drive revenue growth through client acquisition, expansion of existing accounts, and joint GTM with Databricks and cloud partners
  • Collaborate with a high-performing team of data engineers, AI/ML engineers, architects, and consultants
  • Foster a culture of innovation, continuous learning, and delivery excellence
  • Possess continued interest in expanding organizational expertise, selling solutions and upscaling products using the Databricks platform
What we offer:
  • Medical Plans
  • Dental Plans
  • Vision Plan
  • Life & Accidental Death & Dismemberment
  • Short-Term Disability
  • Long-Term Disability
  • Critical Illness / Accident / Hospital Indemnity / Identity Theft Protection
  • 401(k)

Data Engineer

We are looking for an experienced Data Engineer with deep expertise in Databrick...
Location:
Salary:
Not provided
Coherent Solutions
Expiration Date:
Until further notice
Requirements:
  • Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field
  • 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Databricks (including Spark, Delta Lake, and MLflow)
  • Strong proficiency in Python and/or Scala for data processing
  • Deep understanding of distributed data processing, data warehousing, and ETL concepts
  • Experience with cloud data platforms (Azure Data Lake, AWS S3, or Google Cloud Storage)
  • Solid knowledge of SQL and experience with large-scale relational and NoSQL databases
  • Familiarity with CI/CD, DevOps, and infrastructure-as-code practices for data engineering
  • Experience with data governance, security, and compliance in cloud environments
  • Excellent problem-solving, communication, and leadership skills
  • English: Upper Intermediate level or higher
Job Responsibility:
  • Lead the design, development, and deployment of scalable data pipelines and ETL processes using Databricks (Spark, Delta Lake, MLflow)
  • Architect and implement data lakehouse solutions, ensuring data quality, governance, and security
  • Optimize data workflows for performance and cost efficiency on Databricks and cloud platforms (Azure, AWS, or GCP)
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights
  • Mentor and guide junior engineers, promoting best practices in data engineering and Databricks usage
  • Develop and maintain documentation, data models, and technical standards
  • Monitor, troubleshoot, and resolve issues in production data pipelines and environments
  • Stay current with emerging trends and technologies in data engineering and Databricks ecosystem
What we offer:
  • Technical and non-technical training for professional and personal growth
  • Internal conferences and meetups to learn from industry experts
  • Support and mentorship from an experienced employee to help you grow and develop professionally
  • Internal startup incubator
  • Health insurance
  • English courses
  • Sports activities to promote a healthy lifestyle
  • Flexible work options, including remote and hybrid opportunities
  • Referral program for bringing in new talent
  • Work anniversary program and additional vacation days

Associate Data Engineer

This role offers a unique opportunity to work at the intersection of a pioneerin...
Location:
India, Mumbai
Salary:
Not provided
Blenheim Chalcot
Expiration Date:
Until further notice
Requirements:
  • 1–2 years of experience in data engineering or related fields
  • Strong foundational knowledge of data engineering principles
  • Proficiency in Python and SQL
  • Familiarity with Databricks (preferred)
  • Experience with PySpark and Google Cloud Platform (GCP)
  • Excellent problem-solving and communication skills
  • Ability to work independently and take ownership of tasks
Job Responsibility:
  • Support the development, optimization, and maintenance of scalable data pipelines
  • Collaborate with cross-functional teams to ensure data integrity and accessibility
  • Assist in data ingestion, transformation, and integration from various sources
  • Contribute to documentation and best practices for data engineering workflows
  • Participate in code reviews and continuous improvement initiatives
What we offer:
  • Be part of the UK's leading digital venture builder
  • Opportunity to learn from and collaborate with diverse talent across BC
  • Exposure to GenAI-enabled ventures and cutting-edge technologies
  • A fun and open, cricket-obsessed atmosphere – we own the Rajasthan Royals IPL team
  • 24 days of annual leave & 10 public holidays
  • Private Medical for you and your immediate family & Life Insurance for yourself

Principal Machine Learning Systems Engineer

Search Platform powers the search functionality in Atlassian products. The team ...
Location:
Salary:
Not provided
Atlassian
Expiration Date:
Until further notice
Requirements:
  • 10+ years of experience in multiple hands-on software/technology leadership roles, with end-to-end responsibility through the software development lifecycle
  • Experience scaling ML use cases over 50+ TB of data
  • Good understanding of PySpark and Databricks job-scaling challenges
  • Experience with ML workflows and observability at scale
  • Bachelor’s degree, with a preference for Computer Science
  • Expertise with one or more prominent languages such as Java, Python, Kotlin, Go, or TypeScript
  • Understanding of the SaaS, PaaS, and IaaS industry, with hands-on experience with public cloud offerings (e.g., AWS, GCP, or Azure)
  • Java, Spring, REST, and NoSQL databases
  • Experience building event-driven systems based on SQS, SNS, Kafka, or equivalent technologies
  • Ability to evaluate trade-offs between correctness, robustness, performance, space, and time
Job Responsibility:
  • Handle complex problems in the team from technical design to launch
  • Determine plans-of-attack on large projects
  • Solve complex architecture challenges and apply architectural standards to new projects
  • Lead code reviews & documentation and take on complex bug fixes, especially on high-risk problems
  • Set the standard for meaningful code reviews
  • Partner across engineering teams to take on company-wide programmes in multiple projects
  • Transfer your depth of knowledge from your current language to excel as a Software Engineer
  • Mentor junior members of the team
What we offer:
  • Atlassians can choose where they work – whether in an office, from home, or a combination of the two
  • Health and wellbeing resources
  • Paid volunteer days