GCP Engineer with BigQuery, PySpark

Realign

Location:
United States, Phoenix

Contract Type:
Not provided

Salary:
110000.00 USD / Year

Job Description:

Role: GCP Engineer with BigQuery, PySpark

Requirements:

  • 8+ years of professional experience as a Java Engineer
  • Strong knowledge of the Java language and web development frameworks such as Spring, Hibernate, and Struts
  • Expertise in developing web applications using front-end technologies (HTML, CSS, and JavaScript)
  • Ability to develop and maintain Spring Boot applications in Java
  • Knowledge of RESTful web services and API development
  • Experience deploying microservice architectures, applications, and supporting services
  • Experience with GCP application migration for large enterprises
  • Familiarity with software security best practices
  • Understanding of monitoring tools
  • Strong analytical, problem-solving, and organizational skills
  • Familiarity with cloud technologies (Google Cloud); a minimal PySpark/BigQuery sketch follows this list
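
The listed requirements skew toward Java, but the role title pairs BigQuery with PySpark. As a rough gauge of that pairing, here is a minimal, hypothetical sketch of reading a BigQuery table from PySpark via the Spark BigQuery connector; the project, dataset, and table names are placeholders, and the connector package version is only an example.

```python
# Hypothetical sketch: read a BigQuery table into PySpark with the
# spark-bigquery connector. All project/dataset/table names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("bigquery-read-sketch")
    # The connector must be on the classpath, e.g. submitted with
    # --packages com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.36.1
    .getOrCreate()
)

# Load a table; "my-project.my_dataset.events" is a placeholder name.
events = (
    spark.read.format("bigquery")
    .option("table", "my-project.my_dataset.events")
    .load()
)

# A simple aggregation: events per day.
events.groupBy("event_date").count().show()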

Nice to have:

Experience working within large-scale decoupled, service-oriented systems

Additional Information:

Job Posted:
March 21, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for GCP Engineer with BigQuery, PySpark

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Canada, Mississauga
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date:
Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources (a minimal sketch follows this list)
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages such as Python or Scala
  • Strong expertise in data processing frameworks such as Apache Spark and Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
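
To make the pipeline bullets above concrete, here is a minimal, hypothetical PySpark ETL step under the stated stack (Spark plus cloud object storage); every path and column name is a placeholder, not the employer's actual pipeline.

```python
# Hypothetical ETL sketch: extract raw records, transform them, and load
# a curated copy. Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV drops in a landing zone (placeholder path).
raw = spark.read.option("header", True).csv("s3a://landing/trades/")

# Transform: enforce types, drop malformed rows, stamp the load date.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["trade_id", "amount"])
       .withColumn("load_date", F.current_date())
)

# Load: partitioned Parquet in the curated zone (placeholder path).
clean.write.mode("overwrite").partitionBy("load_date").parquet("s3a://curated/trades/")
```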
Job Responsibilities:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems and projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide subject-matter expertise and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
  • Full-time employment

Cloud Engineer - GCP & Databricks

As a Cloud Engineer, you are passionate about experience innovation and eager to...
Location:
India, Bengaluru
Salary:
Not provided
Valtech
Expiration Date:
Until further notice
Requirements:
  • 6+ years of hands-on professional experience in cloud data engineering or GCP platform roles
  • Bachelor's degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
  • BigQuery with advanced SQL, partitioning, clustering, and cost optimization
  • Cloud Storage, Cloud Functions, Cloud Run
  • Dataflow (Apache Beam) for batch and streaming pipelines
  • Cloud Composer / Airflow for orchestration
  • Pub/Sub for event-driven architectures
  • Vertex AI exposure for model serving or pipelines
  • IAM, VPC, organization policies, and security governance
  • Terraform for infrastructure as code
Job Responsibilities:
  • Design GCP-native architectures using BigQuery, Dataflow, Cloud Composer (Airflow), Pub/Sub, Cloud Storage, Vertex AI, and Cloud Run
  • Build and maintain batch and streaming data pipelines using the medallion architecture (Bronze, Silver, Gold); a minimal sketch follows this list
  • Implement infrastructure as code using Terraform
  • Manage deployments through CI/CD pipelines such as Cloud Build
  • Define and enforce GCP landing zone standards including IAM, VPC, Shared VPC, Private Service Connect, and organization policies
  • Build end-to-end Databricks Lakehouse solutions on GCP
  • Design Delta Lake tables with proper governance using Unity Catalog
  • Develop and optimise PySpark and SQL workloads for large-scale transformations
  • Configure Databricks clusters, job scheduling, autoscaling, and cost controls
  • Implement Databricks Workflows and Asset Bundles for orchestration and CI/CD
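
As referenced in the responsibilities above, here is a minimal, hypothetical sketch of one medallion hop (Bronze to Silver) on Databricks; it assumes a notebook where `spark` is predefined, and all table and column names are placeholders.

```python
# Hypothetical Bronze -> Silver hop in the medallion pattern.
# Assumes a Databricks notebook, where `spark` already exists;
# table and column names are placeholders.
from pyspark.sql import functions as F

bronze = spark.read.table("bronze.orders_raw")

silver = (
    bronze.dropDuplicates(["order_id"])                    # drop replayed events
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("order_id").isNotNull())           # basic quality gate
)

# Persist the cleaned layer as a Delta table.
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```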
What we offer:
  • Flexibility, with remote and hybrid work options (country-dependent)
  • Career advancement, with international mobility and professional development programs
  • Learning and development, with access to cutting-edge tools, training and industry experts
  • Full-time employment

GCP Data Engineer

We are seeking a GCP Data Engineer who will design, build, and operationalise cl...
Location:
India, Pune
Salary:
Not provided
Vodafone
Expiration Date:
Until further notice
Requirements:
  • Experienced in GCP tools including BigQuery, Data Fusion, Dataproc, Cloud Composer, Workflows, and Cloud Scheduler
  • Skilled in programming with Python, Spark/PySpark, or Java
  • Knowledgeable in Apache Airflow, GCP Dataproc clusters, and Dataflow
  • Possess 2–4 years of overall experience, with at least 2–3 years working on cloud platforms such as GCP, AWS, or Azure
  • Preferably certified as a Google Cloud Professional Data Engineer
  • Hold a technical qualification such as B.E./B.Tech, BCA/MCA, or BSc/MSc in Computer Science
Job Responsibilities:
  • Build and operationalise data processing systems based on low‑level design requirements
  • Apply strong working knowledge of the Spark framework and hands‑on experience with Dataproc
  • Use GCP Data Fusion, BigQuery, Airflow, and related tools to support optimised and scalable development approaches (a minimal Airflow DAG sketch follows this list)
  • Apply cloud‑based data pipeline patterns and propose innovative solutions to navigate platform constraints
  • Design, test, and maintain data pipelines aligned with data modelling, data warehousing, and industry‑standard data manipulation techniques
  • Design, develop, and maintain programmes written in Python, Spark, Scala, Java, or related technologies
  • Contribute to organisational improvements through process adoption, resource optimisation, and the use of tools that enhance productivity and quality
  • Recommend approaches to improve data reliability, operational efficiency, and overall solution quality
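
As referenced above, here is a minimal, hypothetical Airflow DAG of the kind Cloud Composer schedules: one daily BigQuery job. The project, dataset, and DAG names are placeholders, and `schedule` assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Hypothetical Cloud Composer / Airflow DAG: one daily BigQuery rollup job.
# Project, dataset, and table names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_events_rollup",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",   # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_events",
        configuration={
            "query": {
                "query": (
                    "SELECT event_date, COUNT(*) AS n "
                    "FROM `my-project.my_dataset.events` "
                    "GROUP BY event_date"
                ),
                "useLegacySql": False,
            }
        },
    )
```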

Senior Data Engineer

We are currently looking for a Data Engineer to join our fast-paced, data-driven...
Location:
United Kingdom, London
Salary:
550.00 - 650.00 GBP / Hour
Data Idols
Expiration Date:
Until further notice
Requirements:
  • Strong Python and/or PySpark
  • Experience with cloud technologies such as GCP (BigQuery, Compute Engine, Kubernetes) and AWS (Redshift, EC2)
  • Experience building ETL/ELT pipelines and working with APIs or SFTP integrations
  • Understanding of data modelling, warehousing, and Big Data environments
  • Strong analytical and creative problem-solving skills
  • Ability to manage projects and collaborate effectively in a team
  • Experience creating util packages in Python (a minimal sketch follows this list)
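
For the util-package bullet above, here is a minimal, hypothetical example of the kind of reusable helper such a package might export: a retry decorator for flaky API or SFTP calls. The module layout and all names are illustrative only.

```python
# Hypothetical reusable helper, e.g. mypipeline/utils/retry.py:
# a retry decorator for flaky API or SFTP calls. Names are illustrative.
import time
from functools import wraps


def retry(times: int = 3, delay: float = 1.0):
    """Retry a callable up to `times` attempts with a fixed delay."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # narrow the exception type in real code
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator


@retry(times=5, delay=2.0)
def fetch_report(url: str) -> bytes:
    ...  # placeholder for an HTTP or SFTP download
```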
Job Responsibilities:
  • Building, operating, and optimising end-to-end ETL/ELT data pipelines using APIs, SFTP, and containerised orchestration tools
  • Developing scalable and well-structured data models that support commercial, programmatic, and affiliate revenue functions
  • Managing and improving complex data infrastructure that processes high-volume, multi-source Big Data
  • Creating, maintaining, and enhancing interactive dashboards that drive KPI-focused decision-making
  • Owning data quality, ensuring accuracy, consistency, and reliability across all core datasets
  • Analysing campaign, monetisation, and platform performance and providing actionable insights
  • Collaborating with Operations, Sales, Marketing, Finance, and Senior Analytics teams
  • Supporting strategic projects with advanced data modelling and insight generation

Senior Data Engineer

Design and build scalable, governed data products aligned to data‑as‑a‑product s...
Location:
India, Bangalore; Hyderabad
Salary:
Not provided
Beretta Clima Italia
Expiration Date:
Until further notice
Requirements:
  • 4 to 7 years of experience in Data Engineering / Data Platforms
  • Strong hands‑on experience with AWS and GCP
  • Proficiency in PySpark, SQL, Python
  • Experience with Airflow / Cloud Composer / MWAA
  • Solid understanding of lakehouse architecture and data modeling
  • Experience working with enterprise business processes (Finance, Supply Chain, Sales, etc.)
  • Strong communication and stakeholder collaboration skills
Job Responsibilities:
  • Design end‑to‑end data architectures aligned with UDA standards
  • Build and optimize batch and near‑real‑time data pipelines
  • Implement Bronze / Silver / Gold data models and curated data products
  • Engineer solutions on AWS (S3, Glue, MWAA, Athena/Redshift) and GCP (GCS, BigQuery, Dataproc, Dataflow)
  • Apply data governance, quality, lineage, and security controls
  • Partner with business and analytics teams to translate requirements into data solutions
  • Contribute to reusable patterns, lighthouse initiatives, and platform modernization
  • Mentor junior engineers and drive engineering best practices
What we offer:
  • Retirement savings plan
  • Health insurance
  • Flexible schedules
  • Parental leave
  • Holiday purchase scheme
  • Professional development opportunities
  • Employee Assistance Programme
  • Full-time employment

Data Engineer

We are currently looking for a Data Engineer to join our fast-paced, data-driven...
Location:
United Kingdom, London
Salary:
Not provided
Data Idols
Expiration Date:
Until further notice
Requirements:
  • Strong Python and/or PySpark
  • Experience with cloud technologies such as GCP (BigQuery, Compute Engine, Kubernetes) and AWS (Redshift, EC2)
  • Experience building ETL/ELT pipelines and working with APIs or SFTP integrations
  • Understanding of data modelling, warehousing, and Big Data environments
  • Strong analytical and creative problem-solving skills
  • Ability to manage projects and collaborate effectively in a team
  • Experience creating util packages in Python
Job Responsibilities:
  • Building, operating, and optimising end-to-end ETL/ELT data pipelines using APIs, SFTP, and containerised orchestration tools
  • Developing scalable and well-structured data models that support commercial, programmatic, and affiliate revenue functions
  • Managing and improving complex data infrastructure that processes high-volume, multi-source Big Data
  • Creating, maintaining, and enhancing interactive dashboards that drive KPI-focused decision-making
  • Owning data quality, ensuring accuracy, consistency, and reliability across all core datasets
  • Analysing campaign, monetisation, and platform performance and providing actionable insights
  • Collaborating with Operations, Sales, Marketing, Finance, and Senior Analytics teams
  • Supporting strategic projects with advanced data modelling and insight generation

Python Data Engineer

We are currently looking for a Data Engineer to join our fast-paced, data-drive...
Location:
United Kingdom, London
Salary:
Not provided
Data Idols
Expiration Date:
Until further notice
Requirements:
  • Strong Python and/or PySpark
  • Experience with cloud technologies such as GCP (BigQuery, Compute Engine, Kubernetes) and AWS (Redshift, EC2)
  • Experience building ETL/ELT pipelines and working with APIs or SFTP integrations
  • Understanding of data modelling, warehousing, and Big Data environments
  • Strong analytical and creative problem-solving skills
  • Ability to manage projects and collaborate effectively in a team
  • Experience creating util packages in Python
Job Responsibilities:
  • Building, operating, and optimising end-to-end ETL/ELT data pipelines using APIs, SFTP, and containerised orchestration tools
  • Developing scalable and well-structured data models that support commercial, programmatic, and affiliate revenue functions
  • Managing and improving complex data infrastructure that processes high-volume, multi-source Big Data
  • Creating, maintaining, and enhancing interactive dashboards that drive KPI-focused decision-making
  • Owning data quality, ensuring accuracy, consistency, and reliability across all core datasets
  • Analysing campaign, monetisation, and platform performance and providing actionable insights
  • Collaborating with Operations, Sales, Marketing, Finance, and Senior Analytics teams
  • Supporting strategic projects with advanced data modelling and insight generation