GCP Engineer

NTT DATA

Location:
India, Noida


Contract Type:
Not provided

Salary:
Not provided

Job Description:

The GCP Engineer role at NTT DATA involves managing PostgreSQL and MongoDB databases in cloud environments, ensuring security, and optimizing performance.

Job Responsibility:

  • Managing PostgreSQL and MongoDB databases in cloud environments
  • Ensuring security
  • Optimizing performance
  • Providing 24*7 shift hours support at L1/L2 level
  • Updating KB articles, Problem Management articles, and SOPs/runbooks
  • Delivering timely and outstanding customer service
  • Sharing domain and technical expertise, providing technical mentorship and cross-training to peers and team members
  • Working directly with end customers, business stakeholders, and technical resources

Requirements:

  • 3+ years of overall operational experience
  • 2+ years of GCP experience as a cloud DBA (PostgreSQL/MongoDB)
  • 3+ years of experience working in diverse cloud support database environments in a 24*7 production support model
  • Hands-on experience with PostgreSQL/MongoDB, including installation, configuration, performance tuning, and troubleshooting
  • Demonstrated expertise in managing PostgreSQL databases on Azure, GCP, and AWS RDS
  • Experience with PostgreSQL features such as automated backups, maintenance, and scaling
  • Ability to analyze and optimize complex SQL queries for performance improvement
  • Proficiency in setting up and managing monitoring tools for PostgreSQL on GCP
  • Experience with configuring alerts based on performance metrics
  • Experience in implementing and testing backup and recovery strategies for PostgreSQL databases on AWS RDS/Azure SQL/GCP Cloud SQL
  • Knowledge and experience in designing and implementing disaster recovery plans for PostgreSQL databases on AWS RDS/Azure SQL/GCP Cloud SQL
  • Good understanding of database security principles and best practices
  • Proven ability to identify and resolve performance bottlenecks in PostgreSQL databases
  • Experience in optimizing database configurations for better performance
  • Able to provide 24*7 shift hours support at L1/L2 level
  • Experience in updating KB articles, Problem Management articles, and SOPs/runbooks
  • Passion for delivering timely and outstanding customer service
  • Great written and oral communication skills with internal and external customers
  • Strong ITIL foundation experience
  • Ability to work independently with no direct supervision
  • Sharing domain and technical expertise, providing technical mentorship and cross-training to peers and team members
  • Working directly with end customers, business stakeholders, and technical resources
  • Query fine-tuning (MongoDB)
  • Shell scripts for monitoring slow queries, replication lag, node failures, disk usage, etc.
  • Backups and restores (backups should be automated with shell scripts/Ops Manager)
  • Database health checks (complete review of slow queries, fragmentation, index usage, etc.)
  • Upgrades (Java version, MongoDB version, etc.)
  • Maintenance (data centre outages, etc.)
  • Architecture design as per application requirements
  • Writing best-practice documents on sharding and replication for Dev/App teams
  • Log rotation/maintenance (mongos, MongoDB, config, etc.)
  • Segregation of duties (user management: designing user roles and responsibilities)
  • Designing DR (Disaster Recovery)/COB (Continuity of Business) plans as applicable
  • Database profiling: locks, memory usage, number of connections, page faults, etc.
  • Export and import of data to and from MongoDB; runtime configuration of MongoDB
  • Data management in MongoDB: capped collections, expiring data via TTL indexes
  • Monitoring of various database-related issues at the server, database, and collection level, using MongoDB monitoring tools
  • Database software installation and configuration in accordance with client-defined standards
  • Database Migrations and Updates
  • Capacity management- MongoDB
  • Hands-on experience in server performance tuning and recommendations
  • High availability solutions and recommendations
  • Hands-on experience in root cause analysis for business-impacting issues
  • Experience with SQL, SQL Developer, TOAD, pgAdmin, and MongoDB Atlas
  • Experience with Python/PowerShell scripting (preferred)
  • Secondary skill in MySQL/Oracle (preferred)
  • Installation, configuration, and upgrading of PostgreSQL server software and related products
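Several of the bullets above call for scripted monitoring of replication lag, node failures, and slow queries. As a minimal sketch of the kind of check involved (written in Python for illustration, though the listing mentions shell scripts), the snippet below computes per-secondary replication lag from an `rs.status()`-shaped document. The sample data and the 30-second threshold are hypothetical; a real script would fetch the status via `mongosh` or a MongoDB driver and hand the result to an alerting tool.

```python
from datetime import datetime, timedelta

LAG_THRESHOLD = timedelta(seconds=30)  # hypothetical alerting threshold

def replication_lag(rs_status):
    """Per-secondary replication lag from an rs.status()-style document.

    Lag is measured as the primary's optimeDate minus each
    secondary's optimeDate.
    """
    members = rs_status["members"]
    primary = next(m for m in members if m["stateStr"] == "PRIMARY")
    return {
        m["name"]: primary["optimeDate"] - m["optimeDate"]
        for m in members
        if m["stateStr"] == "SECONDARY"
    }

def lagging_members(rs_status):
    """Names of secondaries whose lag exceeds the alert threshold."""
    return [name for name, lag in replication_lag(rs_status).items()
            if lag > LAG_THRESHOLD]

# Hypothetical sample shaped like mongosh's rs.status() output.
sample_status = {
    "members": [
        {"name": "db0:27017", "stateStr": "PRIMARY",
         "optimeDate": datetime(2026, 3, 19, 12, 0, 0)},
        {"name": "db1:27017", "stateStr": "SECONDARY",
         "optimeDate": datetime(2026, 3, 19, 11, 59, 58)},
        {"name": "db2:27017", "stateStr": "SECONDARY",
         "optimeDate": datetime(2026, 3, 19, 11, 58, 0)},
    ]
}

if __name__ == "__main__":
    # db1 is 2 s behind (fine); db2 is 2 min behind and gets flagged.
    print(lagging_members(sample_status))
```

In practice the same check is often wrapped in cron plus a shell one-liner; the structure (fetch status, diff optimes, compare against a threshold) is the same either way.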

Nice to have:

  • Experience with Python/PowerShell scripting
  • Secondary skill in MySQL/Oracle
  • Azure Database Certification (DP-300)
  • AWS Certified Database Specialty
  • PostgreSQL certification
  • MongoDB certification

Additional Information:

Job Posted:
March 19, 2026

Employment Type:
Full-time
Work Type:
Hybrid work

Similar Jobs for GCP Engineer

Senior Data & AI/ML Engineer - GCP Specialization Lead

We are on a bold mission to create the best software services offering in the wo...
Location:
United States, Menlo Park
Salary:
Not provided
techjays
Expiration Date
Until further notice
Requirements:
  • GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI
  • ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow
  • Programming: Python & SQL
  • MLOps: CI/CD for ML, Model deployment & monitoring
  • Infrastructure-as-Code: Terraform
  • Data Engineering: ETL/ELT, real-time & batch pipelines
  • AI/ML Tools: TensorFlow, scikit-learn, XGBoost
  • Min Experience: 10+ Years
Job Responsibility:
  • Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage
  • Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines
  • Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry
  • Define and implement data governance, lineage, monitoring, and quality frameworks
  • Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions
  • Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP
  • Contribute to building repeatable solution accelerators in Data & AI/ML
  • Work with the leadership team to align with Google Cloud Partner Program metrics
  • Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning
  • Organize and lead internal GCP AI/ML enablement sessions
What we offer:
  • Best in class packages
  • Paid holidays and flexible paid time away
  • Casual dress code & flexible working environment
  • Medical Insurance covering self & family up to 4 lakhs per person

Gcp Data Engineer

We at AlgebraIT are looking for a GCP Data Engineer with 3+ years of experience ...
Location:
United States, Austin
Salary:
Not provided
AlgebraIT
Expiration Date
Until further notice
Requirements:
  • 3+ years of experience in data engineering with GCP
  • Proficiency in Python, SQL, and GCP services
  • Experience with data pipeline orchestration tools
  • Strong problem-solving abilities and attention to detail
  • Bachelor’s degree in Computer Science or related field
Job Responsibility:
  • Build and maintain scalable data pipelines using GCP tools
  • Ensure data security and governance
  • Monitor, troubleshoot, and optimize data workflows
  • Collaborate with stakeholders to gather requirements and deliver data solutions
  • Implement data quality checks and best practices
  • Develop and maintain ETL processes
  • Create detailed documentation of data processes
  • Work closely with data analysts and business teams for data alignment
  • Ensure high availability and reliability of data services
  • Stay current with GCP data technology advancements

Senior DevOps Engineer (GCP)

Our client is a global UK-based financial services and investment banking organi...
Location:
Salary:
Not provided
N-iX
Expiration Date
Until further notice
Requirements:
  • 5+ years of experience in DevOps, Cloud Engineering, or SRE roles
  • Strong hands-on experience with Google Cloud Platform, including: GKE / Kubernetes, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, VPC, IAM, networking, security
  • Expertise in Terraform, Helm, or other IaC tools
  • Experience building CI/CD pipelines (GitHub Actions, GitLab CI, CircleCI, Jenkins, etc.)
  • Strong understanding of containerization and orchestration: Docker, Kubernetes
  • Solid experience with monitoring, observability, and logging stacks
  • Familiarity with networking, load balancing, security hardening, and zero-trust principles
  • Experience supporting production systems in high-availability, distributed environments
  • Strong scripting skills (Python, Bash, or similar)
  • Experience working with agile engineering teams
Job Responsibility:
  • Design, implement, and maintain cloud infrastructure on Google Cloud (GKE, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage)
  • Build and optimize CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins, or similar)
  • Develop infrastructure-as-code using Terraform or similar tools
  • Set up and maintain container orchestration (Kubernetes, GKE) and automated deployment workflows
  • Implement monitoring, alerting, and observability using tools such as Prometheus, Grafana, ELK/Elastic, Stackdriver, or OpenTelemetry
  • Ensure compliance with security and governance standards across all environments
  • Collaborate closely with engineering teams to ensure scalable, high-performance deployment architectures
  • Support AI/ML and GenAI workloads (Vertex AI pipelines, model hosting, GPU workloads, inference optimization)
  • Manage environment strategies, release pipelines, configuration management, and secrets management
  • Optimize cloud costs and recommend improvements for performance and reliability
What we offer:
  • Flexible working format - remote, office-based or flexible
  • A competitive salary and good compensation package
  • Personalized career growth
  • Professional development tools (mentorship program, tech talks and trainings, centers of excellence, and more)
  • Active tech communities with regular knowledge sharing
  • Education reimbursement
  • Memorable anniversary presents
  • Corporate events and team buildings
  • Other location-specific benefits

GCP Cloud Engineer

Wissen Technology is hiring an experienced GCP Cloud Engineer to design, impleme...
Location:
India, Mumbai | Pune
Salary:
Not provided
Wissen
Expiration Date
Until further notice
Requirements:
  • Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions)
  • Experience with GKE, Docker, GKE Networking, Helm
  • Hands-on experience with Azure DevOps for CI/CD pipeline automation
  • Expertise in Terraform for provisioning cloud resources
  • Proficiency in Python, Bash, or PowerShell for automation
  • Knowledge of cloud security principles, IAM, and compliance standards
  • Work experience: 8 to 15 years
Job Responsibility:
  • Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments
  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE)
  • Work with Helm charts for microservices deployments
  • Automate scaling, rolling updates, and zero-downtime deployments
  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance
  • Design, implement, and manage CI/CD pipelines using Azure DevOps

Senior DevOps Engineer - AWS & GCP

We are seeking a passionate and experienced Senior DevOps Engineer to join our g...
Location:
India, Ahmedabad; Pune
Salary:
Not provided
Tech Holding
Expiration Date
Until further notice
Requirements:
  • 5+ years of professional experience in DevOps
  • Strong hands-on experience with Terraform, Kubernetes, and GCP
  • Solid programming experience in Python or Go
  • Proficiency with CI/CD tools (e.g., Jenkins, GitLab CI, GitHub Actions, etc.)
  • Hands-on experience with monitoring and alerting tools such as Prometheus, Grafana, ELK, or similar
  • Deep understanding of networking, security, and system administration
  • A passion for developer productivity and building internal tools that empower engineering teams
  • Excellent communication, collaboration, and problem-solving skills
  • A positive and proactive approach to work and team dynamics
  • Bachelor’s degree in Computer Science, Engineering, or a related technical field
Job Responsibility:
  • Architect, build, and manage scalable, secure, and resilient infrastructure on AWS and Google Cloud Platform (GCP)
  • Automate infrastructure provisioning using Terraform
  • Manage containerized workloads with Kubernetes
  • Develop and enhance CI/CD pipelines to support fast and reliable software delivery
  • Implement monitoring and alerting solutions to ensure system performance and reliability
  • Write scripts and tools in Python or Go to streamline infrastructure operations and support developer workflows
  • Collaborate with engineering teams to improve developer experience and productivity through tooling and automation
  • Participate in troubleshooting, root cause analysis, and performance tuning
  • Ensure adherence to security, compliance, and operational best practices across environments
What we offer:
  • A culture that values flexibility, work-life balance, and employee well-being - including Work From Home Fridays
  • Competitive compensation packages and comprehensive health benefits
  • Work with a collaborative, global team of engineers who thrive on solving complex challenges
  • Exposure to multi-cloud environments (AWS, GCP, Azure) and modern DevOps tooling at scale
  • Professional growth through continuous learning, mentorship, and access to new technologies
  • Leadership that recognizes contributions and supports career advancement
  • The chance to shape DevOps best practices and directly influence company-wide engineering culture
  • A people-first environment where your ideas matter and innovation is encouraged

Lead Software Engineer Scientific Engine

Lead Software Engineer to manage a team of 4. As team lead, you will oversee: Th...
Location:
France, Paris
Salary:
Not provided
Descartes Underwriting
Expiration Date
Until further notice
Requirements:
  • 1 year or more of technical management experience
  • Handling human interactions between tech and business
  • Experience mentoring a team of software engineers by unblocking complex situations and sharing best practices (code reviews, pair programming..)
  • Scoping and defining tech priorities according to roadmap and maintenance
  • Excellent communication skills, in both formal and informal settings, and in English and French
  • 3 years of experience as a software engineer or data scientist
  • Solid knowledge of Python
  • Solid engineering background: master's in computer science, mathematics, physics, or earth science
  • Experience optimizing and profiling python code
  • Experience in a microservices architecture
Job Responsibility:
  • Contribute directly on the code base either individually, in pairs or more
  • Organize REX (lessons-learned) sessions to share knowledge with the rest of the team
  • Ensure compliance with internal standards and practices
  • Present the progress and goals
  • Contribute to the technical roadmap through architecture meetings, design documents
  • Lead & coach your engineer team to consistently deliver according to their roadmap
  • Provide expertise to help your team: Develop, optimize and update software for: Calculation of risk models
  • Data collection, preparation and visualization
  • Export of outputs adapted to users
  • Testing and validation of existing solutions
What we offer:
  • Opportunity to work and learn with teams from the most prestigious schools and research labs in the world
  • Commitment from Descartes to its staff of continued learning and development (think annual seminars, training etc.)
  • Work in a collaborative & professional environment
  • Be part of an international team, passionate about diversity
  • Join a company with a true purpose – help us help our clients be more resilient towards climate risks
  • A competitive salary, bonus and benefits
  • You can benefit from occasional home-office days

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160555.00 - 176610.00 USD / Year
Adtalem Global Education
Expiration Date
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program

Intermediate / Senior Software Engineer Scientific Engine (Python)

Due to our consistent growth, we are seeking to expand our Data, Software and De...
Location:
France, Paris
Salary:
Not provided
Descartes Underwriting
Expiration Date
Until further notice
Requirements:
  • Coaching or mentoring experience
  • Scoping and identifying solutions with business team
  • Handling human interactions between tech and business
  • Excellent communication skills, in both formal and informal settings, and in English and French
  • 3 years or more of experience as a software engineer or data scientist
  • Solid knowledge of Python
  • Solid engineering background: master's in computer science, mathematics, physics, or earth science
  • Experience optimizing and profiling python code
  • Experience in a microservices architecture
  • Good knowledge of Docker
Job Responsibility:
  • Contribute directly on the code base either individually, in pairs or more
  • Organize REX (lessons-learned) sessions to share knowledge with the rest of the team
  • Ensure compliance with internal standards and practices
  • Present the progress and goals
  • Contribute to the technical roadmap through architecture meetings, design documents
  • Coach your collaborators to consistently deliver according to their roadmap
  • Provide expertise to help your team: Develop, optimize and update software for: Calculation of risk models
  • Data collection, preparation and visualization
  • Export of outputs adapted to users
  • Testing and validation of existing solutions
What we offer:
  • Opportunity to work and learn with teams from the most prestigious schools and research labs in the world, allowing you to progress towards technical excellence
  • Commitment from Descartes to its staff of continued learning and development (think annual seminars, training etc.)
  • Work in a collaborative & professional environment
  • Be part of an international team, passionate about diversity
  • Join a company with a true purpose – help us help our clients be more resilient towards climate risks
  • A competitive salary, bonus and benefits
  • You can benefit from occasional home-office days