GCP Engineer with BigQuery, PySpark

Realign

Location:
Phoenix, United States

Contract Type:
Not provided

Salary:
110000.00 USD / Year

Job Description:

Role: GCP Engineer with BigQuery, PySpark

Requirements:

  • 8+ years of professional experience as a Java Engineer
  • Strong knowledge of the Java language and web development frameworks such as Spring, Hibernate, and Struts.
  • Expertise in developing web applications using front-end technologies (HTML, CSS, and JavaScript).
  • Experience developing and maintaining Spring Boot applications in Java.
  • Knowledge of RESTful web services and API development
  • Experience deploying microservice architecture, applications, and supporting services
  • Experience working on GCP application Migration for large enterprise
  • Familiar with software security best practices
  • Understanding of monitoring tools
  • Strong analytical and problem-solving skills, along with solid organizational abilities.
  • Familiarity with cloud technologies (Google Cloud).

Nice to have:

Experience working within large-scale decoupled, service-oriented systems

Additional Information:

Job Posted:
March 21, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for GCP Engineer with BigQuery, PySpark

Senior Big Data Engineer

The Big Data Engineer is a senior level position responsible for establishing an...
Location:
Mississauga, Canada
Salary:
94300.00 - 141500.00 USD / Year
Citi
Expiration Date: Until further notice
Requirements:
  • 5+ Years of Experience in Big Data Engineering (PySpark)
  • Data Pipeline Development: Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and load data from multiple sources
  • Big Data Infrastructure: Develop and manage large-scale data processing systems using frameworks like Apache Spark, Hadoop, and Kafka
  • Proficiency in programming languages like Python or Scala
  • Strong expertise in data processing frameworks such as Apache Spark, Hadoop
  • Expertise in Data Lakehouse technologies (Apache Iceberg, Apache Hudi, Trino)
  • Experience with cloud data platforms like AWS (Glue, EMR, Redshift), Azure (Synapse), or GCP (BigQuery)
  • Expertise in SQL and database technologies (e.g., Oracle, PostgreSQL, etc.)
  • Experience with data orchestration tools like Apache Airflow or Prefect
  • Familiarity with containerization (Docker, Kubernetes) is a plus
Job Responsibilities:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Well-being support
  • Growth opportunities
  • Work-life balance support
Employment Type: Full-time

GCP Data Engineer

We are seeking a GCP Data Engineer who will design, build, and operationalise cl...
Location:
Pune, India
Salary:
Not provided
Vodafone
Expiration Date: Until further notice
Requirements:
  • Experienced in GCP tools including BigQuery, Data Fusion, Dataproc, Cloud Composer, Workflows, and Cloud Scheduler
  • Skilled in programming languages such as Python, Spark, PySpark, or Java
  • Knowledgeable in Apache Airflow, GCP Dataproc clusters, and Dataflow
  • Possess 2–4 years of overall experience, with at least 2–3 years working on cloud platforms such as GCP, AWS, or Azure
  • Preferably certified as a Google Cloud Professional Data Engineer
  • Hold a technical qualification such as B.E./B.Tech, BCA/MCA, or BSc/MSc in Computer Science
Job Responsibilities:
  • Build and operationalise data processing systems based on low‑level design requirements
  • Apply strong working knowledge of the Spark framework and hands‑on experience with Dataproc
  • Use GCP Data Fusion, BigQuery, Airflow, and related tools to support optimised and scalable development approaches
  • Apply cloud‑based data pipeline patterns and propose innovative solutions to navigate platform constraints
  • Design, test, and maintain data pipelines aligned with data modelling, data warehousing, and industry‑standard data manipulation techniques
  • Design, develop, and maintain programmes written in Python, Spark, Scala, Java, or related technologies
  • Contribute to organisational improvements through process adoption, resource optimisation, and the use of tools that enhance productivity and quality
  • Recommend approaches to improve data reliability, operational efficiency, and overall solution quality

Senior Data Engineer

We are currently looking for a Data Engineer to join our fast-paced, data-driven...
Location:
London, United Kingdom
Salary:
550.00 - 650.00 GBP / Hour
Data Idols
Expiration Date: Until further notice
Requirements:
  • Strong Python and/or PySpark skills
  • Experience with cloud technologies such as GCP (BigQuery, Compute Engine, Kubernetes) and AWS (Redshift, EC2)
  • Experience building ETL/ELT pipelines and working with APIs or SFTP integrations
  • Understanding of data modelling, warehousing, and Big Data environments
  • Strong analytical and creative problem-solving skills
  • Ability to manage projects and collaborate effectively in a team
  • Experience creating util packages in Python
Job Responsibilities:
  • Building, operating, and optimising end-to-end ETL/ELT data pipelines using APIs, SFTP, and containerised orchestration tools
  • Developing scalable and well-structured data models that support commercial, programmatic, and affiliate revenue functions
  • Managing and improving complex data infrastructure that processes high-volume, multi-source Big Data
  • Creating, maintaining, and enhancing interactive dashboards that drive KPI-focused decision-making
  • Owning data quality, ensuring accuracy, consistency, and reliability across all core datasets
  • Analysing campaign, monetisation, and platform performance and providing actionable insights
  • Collaborating with Operations, Sales, Marketing, Finance, and Senior Analytics teams
  • Supporting strategic projects with advanced data modelling and insight generation

Data Engineer

We are currently looking for a Data Engineer to join our fast-paced, data-driven...
Location:
London, United Kingdom
Salary:
Not provided
Data Idols
Expiration Date: Until further notice
Requirements:
  • Strong Python and/or PySpark skills
  • Experience with cloud technologies such as GCP (BigQuery, Compute Engine, Kubernetes) and AWS (Redshift, EC2)
  • Experience building ETL/ELT pipelines and working with APIs or SFTP integrations
  • Understanding of data modelling, warehousing, and Big Data environments
  • Strong analytical and creative problem-solving skills
  • Ability to manage projects and collaborate effectively in a team
  • Experience creating util packages in Python
Job Responsibilities:
  • Building, operating, and optimising end-to-end ETL/ELT data pipelines using APIs, SFTP, and containerised orchestration tools
  • Developing scalable and well-structured data models that support commercial, programmatic, and affiliate revenue functions
  • Managing and improving complex data infrastructure that processes high-volume, multi-source Big Data
  • Creating, maintaining, and enhancing interactive dashboards that drive KPI-focused decision-making
  • Owning data quality, ensuring accuracy, consistency, and reliability across all core datasets
  • Analysing campaign, monetisation, and platform performance and providing actionable insights
  • Collaborating with Operations, Sales, Marketing, Finance, and Senior Analytics teams
  • Supporting strategic projects with advanced data modelling and insight generation

Python Data Engineer

We are currently looking for a Data Engineer to join our fast-paced, data-drive...
Location:
London, United Kingdom
Salary:
Not provided
Data Idols
Expiration Date: Until further notice
Requirements:
  • Strong Python and/or PySpark skills
  • Experience with cloud technologies such as GCP (BigQuery, Compute Engine, Kubernetes) and AWS (Redshift, EC2)
  • Experience building ETL/ELT pipelines and working with APIs or SFTP integrations
  • Understanding of data modelling, warehousing, and Big Data environments
  • Strong analytical and creative problem-solving skills
  • Ability to manage projects and collaborate effectively in a team
  • Experience creating util packages in Python
Job Responsibilities:
  • Building, operating, and optimising end-to-end ETL/ELT data pipelines using APIs, SFTP, and containerised orchestration tools
  • Developing scalable and well-structured data models that support commercial, programmatic, and affiliate revenue functions
  • Managing and improving complex data infrastructure that processes high-volume, multi-source Big Data
  • Creating, maintaining, and enhancing interactive dashboards that drive KPI-focused decision-making
  • Owning data quality, ensuring accuracy, consistency, and reliability across all core datasets
  • Analysing campaign, monetisation, and platform performance and providing actionable insights
  • Collaborating with Operations, Sales, Marketing, Finance, and Senior Analytics teams
  • Supporting strategic projects with advanced data modelling and insight generation

GCP Data Engineer

We are seeking a skilled GCP Data Engineer to design, build, and operationalise ...
Location:
Pune, India
Salary:
Not provided
Vodafone
Expiration Date: Until further notice
Requirements:
  • Experienced in GCP tools including BigQuery, Data Fusion, Dataproc, Cloud Composer, Workflows, and Cloud Scheduler
  • Skilled in programming languages such as Python, Spark, PySpark, or Java
  • Knowledgeable in Apache Airflow, Dataproc clusters, and Dataflow
  • Possess 2–4 years of overall experience, including at least 2–3 years in cloud platforms (GCP/AWS/Azure)
  • Preferably certified as a Google Cloud Professional Data Engineer
  • Hold a relevant degree such as B.E./B.Tech, BCA/MCA, or BSc/MSc in Computer Science
Job Responsibilities:
  • Build and operationalise data processing systems based on detailed design specifications
  • Apply strong Spark knowledge and hands-on experience with Dataproc; familiarity with Dataflow is beneficial
  • Utilise GCP Data Fusion, BigQuery, Airflow, and related tools to deliver optimised data solutions
  • Apply cloud-based data pipeline patterns and contribute creative approaches to navigate platform limitations
  • Design, test, and maintain data pipelines following data modelling, warehousing, and manipulation standards
  • Design and develop programmes using languages such as Python, Spark, PySpark, Scala, or Java
  • Contribute to process adoption, resource and tool optimisation, and continuous quality uplift
  • Recommend improvements to enhance data reliability, operational efficiencies, and solution quality

Senior Data Engineer

As a Senior Data Engineer at Rearc, you'll play a pivotal role in establishing a...
Location:
New York, United States
Salary:
160000.00 - 200000.00 USD / Year
Rearc
Expiration Date: Until further notice
Requirements:
  • 8+ years of professional experience in data engineering across modern cloud architectures and diverse data systems
  • Expertise in designing and implementing data warehouses and data lakes across modern cloud environments (e.g., AWS, Azure, or GCP), with experience in technologies such as Redshift, BigQuery, Snowflake, Delta Lake, or Iceberg
  • Strong Python experience for data engineering, including libraries like Pandas, PySpark, NumPy, or Dask
  • Hands-on experience with Spark and Databricks (highly desirable)
  • Experience building and orchestrating data pipelines using Airflow, Databricks, DBT, or AWS Glue
  • Strong SQL skills and experience with both SQL and NoSQL databases (PostgreSQL, DynamoDB, Redshift, Delta Lake, Iceberg)
  • Solid understanding of data architecture principles, data modeling, and best practices for scalable data systems
  • Experience with cloud provider services (AWS, Azure, or GCP) and comfort using command-line interfaces or SDKs as part of development workflows
  • Familiarity with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, ARM/Bicep, or AWS CDK
  • Excellent communication skills, able to explain technical concepts to technical and non-technical stakeholders
Job Responsibilities:
  • Provide strategic data engineering leadership by shaping the vision, roadmap, and technical direction of data initiatives to align with business goals
  • Architect and build scalable, reliable data solutions, including complex data pipelines and distributed systems, using modern frameworks and technologies (e.g., Spark, Kafka, Kubernetes, Databricks, DBT)
  • Drive innovation by evaluating, proposing, and adopting new tools, patterns, and methodologies that improve data quality, performance, and efficiency
  • Apply deep technical expertise in ETL/ELT design, data modeling, data warehousing, and workflow optimization to ensure robust, high-quality data systems
  • Collaborate across teams: partner with engineering, product, analytics, and customer stakeholders to understand requirements and deliver impactful, scalable solutions
  • Mentor and coach junior engineers, fostering growth, knowledge-sharing, and best practices within the data engineering team
  • Contribute to thought leadership through knowledge-sharing, writing technical articles, speaking at meetups or conferences, or representing the team in industry conversations
What we offer:
  • Health Benefits
  • Generous time away
  • Maternity and Paternity leave
  • Educational resources and reimbursements
  • 401(k) plan with a company contribution
Employment Type: Full-time