GCP Data Analyst - BigQuery

Rackspace

Location:
United States, San Antonio

Contract Type:
Not provided

Salary:
143700.00 - 245520.00 USD / Year

Job Description:

We’re seeking a GCP Data Analyst with deep expertise in BigQuery, strong SQL and Python skills, and a sharp analytical mindset to support both data validation initiatives and ongoing analytics work. This role is ideal for someone who can navigate large datasets, build robust queries, and identify inconsistencies with precision and insight. The analyst will work across a variety of data workflows, from validating metrics during system migrations to supporting day-to-day data analysis and reporting needs.

Job Responsibility:

  • Write, optimize, and execute complex SQL queries in BigQuery to validate data accuracy, identify inconsistencies, and support analytics and reporting
  • Analyze large datasets to assess data quality, compare trends across systems, and surface anomalies or unexpected behaviors
  • Utilize advanced BigQuery features such as authorized views, materialized views, UDFs, partitioned tables, and joins to support scalable, high-performance analysis
  • Use Python (including data frames and relevant libraries) for exploratory analysis, data manipulation, and supporting validation workflows
  • Support time series analysis and, where applicable, anomaly detection using SQL or Python-based approaches
  • Assist with load/transform validation to ensure reliability and accuracy in data pipelines
  • Collaborate with engineering teams to understand data pipelines, with basic ability to read and interpret Java or Scala code when needed
  • Perform side-by-side comparisons of data across systems to ensure consistency during and after migrations (see the sketch after this list)
  • Maintain basic familiarity with orchestration tools such as Airflow (Composer) to follow pipeline logic and collaborate effectively with engineering
  • Work within the GCP environment, leveraging cloud tools and services to support analysis, troubleshoot issues, and navigate cloud-based workflows
  • Clearly communicate analytical findings and data quality issues to cross-functional stakeholders to support decision-making
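
For illustration, a minimal sketch of the side-by-side migration validation described above, using the google-cloud-bigquery Python client. The project, dataset, and table names and the `order_date`/`amount` columns are hypothetical stand-ins, not part of the posting.

```python
# Minimal sketch: compare daily row counts and a key metric between a
# legacy table and its migrated counterpart; any row returned is a
# discrepancy worth investigating. Table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

query = """
WITH legacy AS (
  SELECT order_date, COUNT(*) AS row_count, SUM(amount) AS total_amount
  FROM `proj.legacy.daily_orders`
  WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)  -- prune partitions
  GROUP BY order_date
),
migrated AS (
  SELECT order_date, COUNT(*) AS row_count, SUM(amount) AS total_amount
  FROM `proj.new.daily_orders`
  WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
  GROUP BY order_date
)
SELECT order_date,
       l.row_count    AS legacy_rows, m.row_count    AS migrated_rows,
       l.total_amount AS legacy_amt,  m.total_amount AS migrated_amt
FROM legacy AS l
FULL OUTER JOIN migrated AS m USING (order_date)
WHERE l.row_count    IS DISTINCT FROM m.row_count
   OR l.total_amount IS DISTINCT FROM m.total_amount
ORDER BY order_date
"""

for row in client.query(query).result():
    print(dict(row))
```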

Requirements:

  • Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field
  • 5+ years of experience in data analyst or analytics engineering roles with strong BigQuery, SQL, and Python skills
  • 5+ years of experience building and operating solutions on Google Cloud Platform (GCP)
  • Strong ability to write and optimize SQL queries to validate data, analyze trends, and detect inconsistencies
  • Proficient in Python, including use of data frames and common analytical libraries
  • Experience with advanced BigQuery features such as authorized views, materialized views, UDFs, partitions, and time series analysis
  • Strong analytical skills and experience validating data across systems during migrations and ongoing operations
  • Basic ability to read and understand Java or Scala code to support engineering collaboration
  • Familiarity with Airflow (Cloud Composer) to interpret and trace data pipeline workflows
  • Proficiency in SQL, BigQuery, and Python
  • Advanced SQL skills in BigQuery for complex data validation, anomaly detection, and trend analysis
  • Experience comparing datasets across systems
  • Proven ability to identify and investigate data discrepancies across platforms
  • Strong analytical intuition to sense-check metrics and flag issues that may not trigger formal alerts
  • Ability to perform side-by-side metric and trend comparisons to confirm post-migration accuracy
  • Skilled in root cause analysis using SQL, domain expertise, and supporting context
  • Effective communicator who can document findings and share insights with both technical and non-technical stakeholders
  • Familiarity with time series analysis to detect unexpected shifts, drops, or spikes in metrics (a minimal sketch follows this list)
  • Ability to follow structured validation processes while proactively identifying workflow improvements
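
As a rough illustration of the time series sense-checking mentioned above, a small pandas sketch that flags days where a metric deviates sharply from its recent trend. The file name and columns are hypothetical.

```python
# Flag candidate shifts, drops, or spikes via a rolling z-score.
# `metrics.csv` with `day`/`value` columns is a hypothetical export.
import pandas as pd

df = pd.read_csv("metrics.csv", parse_dates=["day"]).sort_values("day")

# Rolling mean/std over a trailing 28-day window; shift(1) so the
# current day is judged against prior days only.
window = df["value"].rolling(28, min_periods=14)
df["z"] = (df["value"] - window.mean().shift(1)) / window.std().shift(1)

# Anything beyond 3 standard deviations is a candidate anomaly.
anomalies = df[df["z"].abs() > 3]
print(anomalies[["day", "value", "z"]])
```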

Nice to have:

  • Familiarity with Looker or other BI tools for metric validation and reporting support
  • Exposure to BigQuery ML and Vertex AI (a minimal BigQuery ML sketch follows this list)
  • Basic familiarity with legacy systems such as Oozie or Pig for reading existing scripts
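
For the BigQuery ML item above, a sketch of one possible route: fit an ARIMA_PLUS model over a daily metric, then query it for anomalies. All project, dataset, table, and column names here are hypothetical.

```python
# Sketch: BigQuery ML anomaly detection over a daily metric.
# Names are hypothetical; assumes the google-cloud-bigquery client.
from google.cloud import bigquery

client = bigquery.Client()

# Train a time series model on the historical metric.
client.query("""
CREATE OR REPLACE MODEL `proj.analytics.daily_metric_arima`
OPTIONS (
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'day',
  time_series_data_col = 'value'
) AS
SELECT day, value FROM `proj.analytics.daily_metric`
""").result()

# Rows with is_anomaly = TRUE fall outside the model's expected bounds.
rows = client.query("""
SELECT *
FROM ML.DETECT_ANOMALIES(
  MODEL `proj.analytics.daily_metric_arima`,
  STRUCT(0.99 AS anomaly_prob_threshold)
)
""").result()

for row in rows:
    print(dict(row))
```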

Additional Information:

Job Posted:
January 05, 2026

Employment Type:
Fulltime
Work Type:
Remote work

Similar Jobs for GCP Data Analyst - BigQuery

GCP Data Engineer

We at AlgebraIT are looking for a GCP Data Engineer with 3+ years of experience ...
Location:
United States, Austin
Salary:
Not provided
AlgebraIT
Expiration Date:
Until further notice
Requirements:
  • 3+ years of experience in data engineering with GCP
  • Proficiency in Python, SQL, and GCP services
  • Experience with data pipeline orchestration tools
  • Strong problem-solving abilities and attention to detail
  • Bachelor’s degree in Computer Science or related field
Job Responsibility:
  • Build and maintain scalable data pipelines using GCP tools
  • Ensure data security and governance
  • Monitor, troubleshoot, and optimize data workflows
  • Collaborate with stakeholders to gather requirements and deliver data solutions
  • Implement data quality checks and best practices
  • Develop and maintain ETL processes
  • Create detailed documentation of data processes
  • Work closely with data analysts and business teams for data alignment
  • Ensure high availability and reliability of data services
  • Stay current with GCP data technology advancements
Employment Type: Fulltime

Data Analyst

Location:
India, Pune
Salary:
Not provided
Enormous Enterprise
Expiration Date:
Until further notice
Requirements:
  • 5+ years’ experience in a related field preferred
  • Experience with other tools such as GCP BQ, Alteryx, Tableau required
  • Experience with data query, visualization, dashboard and/or scorecard tools preferred
  • Extensive experience performing Analysis and Solutioning
  • Experience in Big Data systems, Data Lakes and Analytics related field preferred
  • Hands-on experience in Data Analysis/validation/data mapping and proficient in SQL queries
  • Experience developing Business/Functional Requirement documentation and facilitating requirement work sessions with Business Units/Clients
  • Excellent verbal/written communication skills in an internal client facing role with ability to present information and findings clearly in analysis/reports
  • Can work on complex projects of large scope
Job Responsibility:
  • Scopes, designs, and develops data analytics solutions/models to solve challenging business problems using Google BigQuery, SQL, SAP, Tableau, AtScale, Looker Studio, Alteryx, and other data-centric technologies
  • Proficiency in data modeling/design and reading/writing/debugging/optimizing advanced SQL queries are firmly required
  • Knowledge of data visualization, dashboard, reporting, and OLAP/dimensional design tools preferred
  • Translates complex business requirements into technical requirements, and ultimately into data solutions that enable the business to make fast, effective, data-driven decisions (BigQuery data models/views, AtScale virtual cubes, Tableau dashboards that support decision-making, metric management, and enterprise performance management)
  • Asks probing questions to understand detailed business processes, requirements, and data architectures
  • Brings a foundation of data analysis experience and best practices to discover and anticipate requirements beyond those that are plainly provided
  • Observes patterns in processes and data, and implements end-to-end data-driven intelligent processes
  • Extracts, manipulates, and analyzes varying types of data to find insight with a high degree of accuracy and attention to detail
  • Integrates analysis and underlying work (e.g., data mapping, transformations, validation) back into preexisting data platforms, with an enterprise mentality beyond any single use case
  • Applies hypotheses and an understanding of cause-and-effect from the logical analysis of a complex situation

Senior Data Engineer

Adtalem is a data driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
84835.61 - 149076.17 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field
  • Master's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field
  • Two (2)+ years of experience in Google Cloud with services like BigQuery, Composer, GCS, DataStream, Dataflow, BQML, Vertex AI
  • Six (6)+ years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics
  • Hands-on experience working with real-time, unstructured, and synthetic data.
  • Experience in Real Time Data ingestion using GCP PubSub, Kafka, Spark or similar.
  • Expert knowledge on Python programming and SQL.
  • Experience with cloud platforms (AWS, GCP, Azure) and their data services
  • Experience working with Airflow as a workflow management tool, including building operators to connect to, extract, and ingest data as needed (a minimal sketch follows this list)
  • Familiarity with synthetic data generation and unstructured data processing
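
As a rough sketch of the "build operators" requirement above, a minimal custom Airflow operator, assuming Airflow 2.x; the API endpoint, GCS bucket, and object naming are hypothetical, not taken from the posting.

```python
# Sketch: a custom operator that pulls JSON from an HTTP API and lands
# it in GCS for downstream loading. Endpoint/bucket names are hypothetical.
import json

import requests
from airflow.models import BaseOperator


class ApiToGcsOperator(BaseOperator):
    """Extract JSON from an API endpoint and write it to a GCS object."""

    def __init__(self, endpoint: str, bucket: str, object_prefix: str, **kwargs):
        super().__init__(**kwargs)
        self.endpoint = endpoint
        self.bucket = bucket
        self.object_prefix = object_prefix

    def execute(self, context):
        response = requests.get(self.endpoint, timeout=60)
        response.raise_for_status()

        # Defer the GCS import to task runtime, one object per logical date.
        from google.cloud import storage

        blob = storage.Client().bucket(self.bucket).blob(
            f"{self.object_prefix}/{context['ds']}.json"
        )
        blob.upload_from_string(json.dumps(response.json()))
        return blob.name
```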
Job Responsibility:
  • Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets
  • Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound.
  • Model data platform by applying the business logic and building objects in the semantic layer of the data platform.
  • Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Eligible to participate in an annual incentive program
Employment Type: Fulltime

GCP Data Analyst - BigQuery

We’re seeking a GCP Data Analyst with deep expertise in BigQuery, strong SQL and...
Location:
Canada, Toronto; Vancouver; Calgary
Salary:
143700.00 - 245520.00 USD / Year
Rackspace
Expiration Date:
Until further notice
Requirements and Job Responsibility:
Identical to the primary Rackspace listing above.
Employment Type: Fulltime

Senior Data Engineer

Adtalem is a data driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
85000.00 - 150000.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Bachelor's Degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field (required)
  • Master's Degree in Computer Science, Computer Engineering, Software Engineering, or other related technical field (preferred)
  • 2+ years of experience in Google Cloud with services like BigQuery, Composer, GCS, DataStream, Dataflow (required)
  • 6+ years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics (required)
  • Expert knowledge on SQL and Python programming
  • Experience working with Airflow as a workflow management tool, including building operators to connect to, extract, and ingest data as needed
  • Experience in tuning queries for performance and scalability
  • Experience in Real Time Data ingestion using GCP PubSub, Kafka, Spark or similar
  • Excellent organizational, prioritization and analytical abilities
  • Proven experience with incremental execution through successful launches
Job Responsibility:
  • Work closely with various business, IT, Analyst and Data Science groups to collect business requirements
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound
  • Model data platform by applying the business logic and building objects in the semantic layer of the data platform
  • Optimize data pipelines for performance, scalability, and reliability
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products
  • Ensure quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
  • Troubleshoot and resolve data engineering issues as they arise
  • Develop REST APIs to expose data to other teams within the company
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
Employment Type: Fulltime

D&T Head of Data, BI & Analytics

The Head of Data Sciences, Analytics, AI, and BI is a senior strategic and opera...
Location:
Not provided
Salary:
Not provided
Aramex
Expiration Date:
Until further notice
Requirements:
  • Leadership Behaviors: Collaborate & break silos
  • Execution & Accountability
  • Growth mindset
  • Innovation
Job Responsibility:
  • Develop and communicate a long-term vision and strategy for Data Sciences, Analytics, AI, and BI, aligned with Aramex’s business strategy and growth objectives.
  • Drive innovation in analytics and AI, exploring emerging technologies, algorithms and methodologies to enhance business decision-making and operational efficiency.
  • Proactively engage business stakeholders to understand complex requirements, desired outcomes, and strategic priorities.
  • Design and deliver fit-for-purpose analytics and AI solutions that optimize operations, improve customer satisfaction, and drive revenue.
  • Use insights from customer feedback, usage analytics, and emerging trends to identify continuous improvement opportunities.
  • Own end to end operations of enterprise analytics platforms, including GCP-based ecosystems (BigQuery, Dataflow, Pub/Sub, Looker) and legacy BI systems (SQL Server, Oracle BI, ETL pipelines).
  • Ensure platform stability, performance, security and compliance through robust ITIL-aligned processes (incident, problem, change, release management).
  • Define and maintain SLAs, SLOs, and operational KPIs for uptime, pipeline reliability, cost efficiency, and user satisfaction.
  • Lead post-project operationalization, including service acceptance, monitoring, and continuous improvement loops.
  • Lead and mentor a high performing team of data scientists, AI engineers, BI analysts, and data operations engineers.

Data Engineering & Analytics Lead

Premium Health is seeking a highly skilled, hands-on Data Engineering & Analytic...
Location:
United States, Brooklyn
Salary:
Not provided
Premium Health
Expiration Date:
Until further notice
Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred
  • Proven track record and progressively responsible experience in data engineering, data architecture, or related technical roles; healthcare experience preferred
  • Strong knowledge of data engineering principles, data integration, ETL processes, and semantic mapping techniques and best practices
  • Experience implementing data quality management processes, data governance frameworks, cataloging, and master data management concepts
  • Familiarity with healthcare data standards (e.g., HL7, FHIR), health information management principles, and regulatory requirements (e.g., HIPAA)
  • Understanding of healthcare data, including clinical, operational, and financial data models, preferred
  • Advanced proficiency in SQL, data modeling, database design, optimization, and performance tuning
  • Experience designing and integrating data from disparate systems into harmonized data models or semantic layers
  • Hands-on experience with modern cloud-based data platforms (e.g., Azure, AWS, GCP)
Job Responsibility
Job Responsibility
  • Collaborate with the CDIO and Director of Technology to define a clear data vision aligned with the organization's goals and execute the enterprise data roadmap
  • Serve as a thought leader for data engineering and analytics, guiding the evolution of our data ecosystem and championing data-driven decision-making across the organization
  • Build and mentor a small data team, providing technical direction and performance feedback, fostering best practices and continuous learning, while remaining a hands-on implementor
  • Define and implement best practices, standards, and processes for data engineering, analytics, and data management across the organization
  • Design, implement, and maintain a scalable, reliable, and high-performing modern data infrastructure, aligned with the organizational needs and industry best practices
  • Architect and maintain data lake/lakehouse, warehouse, and related platform components to support analytics, reporting, and operational use cases
  • Establish and enforce data architecture standards, governance models, naming conventions, and documentation
  • Develop, optimize, and maintain scalable ETL/ELT pipelines and data workflows to collect, transform, normalize, and integrate data from diverse systems
  • Implement robust data quality processes, validation, monitoring, and error-handling frameworks
  • Ensure data is accurate, timely, secure, and ready for self-service analytics and downstream applications
What we offer:
  • Paid Time Off, Medical, Dental and Vision plans, Retirement plans
  • Public Service Loan Forgiveness (PSLF)
Employment Type: Fulltime

Data Test Engineer

We are looking for a skilled Data Test Engineer who can design, build, and valid...
Location:
India, Chennai
Salary:
Not provided
OptiSol Business Solutions
Expiration Date:
Until further notice
Requirements:
  • 4+ years of experience in Data Engineering and Data/ETL Testing
  • Strong expertise in writing and optimizing SQL queries (joins, subqueries, window functions, performance tuning)
  • Proficiency in Python or PySpark for data transformation and automation
  • Hands-on experience with ETL tools such as Azure Data Factory, Talend, SSIS, or Informatica
  • Familiarity with cloud platforms, preferably Azure; AWS or GCP is a plus
  • Experience working with data lakes, data warehouses (Snowflake, BigQuery, Redshift), and modern data platforms
  • Knowledge of version control systems (Git), issue tracking tools (JIRA), and Agile methodologies
  • Exposure to data testing frameworks like Great Expectations, DBT tests, or custom validation tools
  • Experience integrating data testing into CI/CD pipelines
Job Responsibility:
  • Design, develop, and maintain robust ETL/ELT pipelines to process large volumes of structured and unstructured data using Azure Data Factory, PySpark, and SQL-based tools
  • Collaborate with data architects and analysts to understand transformation requirements and implement business rules correctly
  • Develop and execute complex SQL queries to validate, transform, and performance-tune data workflows
  • Perform rigorous data validation including source-to-target mapping (S2T), data profiling, reconciliation, and transformation rule testing
  • Conduct unit, integration, regression, and performance testing for data pipelines and storage layers
  • Automate data quality checks using Python and frameworks like Great Expectations, DBT, or custom-built tools (a minimal sketch follows this list)
  • Monitor data pipeline health and implement observability through logging, alerting, and dashboards
  • Integrate testing into CI/CD workflows using tools like Azure DevOps, Jenkins, or GitHub Actions
  • Troubleshoot and resolve data quality issues, schema changes, and pipeline failures
  • Ensure compliance with data privacy, security, and governance policies
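
As a sketch of the automated data quality checks described above, a plain-pandas source-to-target reconciliation, rather than any specific framework; the file names and the `order_id` key are hypothetical.

```python
# Sketch: reconcile a source extract against the loaded target and fail
# loudly on any broken check. File/column names are hypothetical.
import pandas as pd

source = pd.read_csv("source_extract.csv")
target = pd.read_csv("target_load.csv")

checks = {
    "row_count_matches": len(source) == len(target),
    "no_null_keys": target["order_id"].notna().all(),
    "keys_unique": target["order_id"].is_unique,
    # Every source key should survive the load.
    "no_dropped_keys": source["order_id"].isin(target["order_id"]).all(),
}

failures = [name for name, ok in checks.items() if not ok]
if failures:
    # In a CI/CD pipeline this non-zero exit fails the run.
    raise AssertionError(f"Data quality checks failed: {failures}")
```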
What we offer:
  • Competitive salary aligned with industry standards
  • Hands-on experience with enterprise-scale data platforms and cloud-native tools
  • Opportunities to work on data-centric initiatives across AI, analytics, and enterprise transformation
  • Access to internal learning accelerators, mentorship, and career growth programs
  • Flexible work culture, wellness initiatives, and comprehensive health benefits
Employment Type: Fulltime