Lead GCP Data Engineer

Lingaro

Location:
Mexico

Contract Type:
Not provided

Salary:
Not provided

Job Responsibility:

  • Part of the team accountable for the design, modeling, and development of an entire GCP data ecosystem for one of our clients (Cloud Storage, Cloud Functions, BigQuery)
  • Involvement throughout the whole process, starting with gathering, analyzing, modeling, and documenting business/technical requirements
  • Direct contact with clients
  • Modelling the data from various sources and technologies
  • Troubleshooting and supporting the resolution of the most complex, high-impact problems to deliver new features and functionality
  • Designing and optimizing data storage architectures, including data lakes, data warehouses, or distributed file systems
  • Implementing techniques like partitioning, compression, or indexing to optimize data storage and retrieval (see the sketch after this list)
  • Identifying and resolving bottlenecks, tuning queries, and implementing caching strategies to enhance data retrieval speed and overall system efficiency
  • Identifying and resolving issues related to data processing, storage, or infrastructure
  • Monitoring system performance, identifying anomalies, and conducting root cause analysis to ensure smooth and uninterrupted data operations
  • Training and mentoring less experienced data engineers, providing guidance and knowledge transfer
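
To make the storage-optimization point above concrete, here is a minimal, hypothetical sketch of a date-partitioned, clustered BigQuery table created with the google-cloud-bigquery Python client. The project, dataset, table, and schema are placeholders invented for the example, not details from the posting.

```python
# Hypothetical example: partitioning + clustering in BigQuery.
# All names below (project, dataset, table, fields) are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

table = bigquery.Table(
    "my-project.analytics.sales_events",  # placeholder table ID
    schema=[
        bigquery.SchemaField("event_date", "DATE"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("amount", "NUMERIC"),
    ],
)

# Partition by event_date so queries that filter on the date scan fewer bytes.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",
)
# Cluster by customer_id to co-locate rows that are commonly filtered together.
table.clustering_fields = ["customer_id"]

client.create_table(table)
```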

Requirements:

  • At least 4 years of experience as a Data Engineer working with GCP cloud-based infrastructure & systems
  • Deep knowledge of Google Cloud Platform and cloud computing services
  • Extensive experience designing, building, and deploying data pipelines in the cloud to ingest data from various sources such as databases, APIs, or streaming platforms
  • Proficiency with SQL and NoSQL database management systems (BigQuery is a must)
  • Programming skills (SQL, Python, other scripting)
  • Proficient in data modeling techniques and database optimization
  • Knowledge of at least one orchestration and scheduling tool (Airflow is a must; see the sketch after this list)
  • Experience with data integration tools and techniques, such as ETL and ELT
  • Knowledge of modern data transformation tools (such as DBT, Dataform)
  • Excellent communication skills
  • Advanced English level
  • Ability to actively participate/lead discussions with clients
  • Tools knowledge: Git, Jira, Confluence, etc.
  • Openness to learning new technologies and solutions
  • Experience in a multinational environment and with distributed teams
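
As a hedged illustration of the Airflow requirement above, a minimal DAG sketch: the DAG id, bucket, dataset, table, and stored procedure are invented for the example, not part of the role description.

```python
# Minimal Airflow DAG sketch: land a daily GCS drop in BigQuery, then transform.
# All identifiers (DAG id, bucket, dataset, procedure) are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_ingest",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="example-landing-bucket",
        source_objects=["sales/{{ ds }}/*.csv"],
        destination_project_dataset_table="analytics.raw_sales",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )
    transform = BigQueryInsertJobOperator(
        task_id="transform",
        configuration={
            "query": {
                "query": "CALL analytics.build_sales_mart()",  # placeholder proc
                "useLegacySql": False,
            }
        },
    )
    load_raw >> transform
```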

Nice to have:

  • Certifications in big data technologies and/or cloud platforms
  • Experience with BI solutions (e.g. Looker, Power BI, Tableau)
  • Experience with ETL tools (e.g. Talend, Alteryx)
  • Experience with Apache Spark, especially in GCP environment
  • Experience with Databricks
  • Experience with Azure cloud-based infrastructure & systems

What we offer:
  • Stable employment
  • Full-time position with work contract
  • Medical Insurance
  • Grocery Coupons
  • Saving fund
  • 30-day Christmas bonus
  • Remote work bonus
  • Profit sharing
  • 50% vacation premium
  • Flexibility regarding working hours
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support
  • Grow as we grow as a company
  • A diverse, inclusive, and values-driven community
  • Autonomy to choose the way you work
  • Create our community together
  • Activities to support your well-being and health
  • Plenty of opportunities to donate to charities and support the environment

Additional Information:

Job Posted:
January 05, 2026

Employment Type:
Fulltime

Work Type:
Remote work

Similar Jobs for Lead GCP Data Engineer

Lead Data Engineer

We're looking for a Lead Data Engineer to build the data infrastructure that pow...

Location:
United States

Salary:
185000.00 - 225000.00 USD / Year

Zora

Expiration Date:
Until further notice

Requirements:
  • 7+ years of experience in data engineering, with at least 2 years in a technical leadership role
  • Strong proficiency in Python and SQL for building production data pipelines, complex data transformations, evolving data platforms, shared infrastructure, and internal tooling with engineering best practices
  • Strong experience in designing, building, and maintaining cloud-based data pipelines using orchestration tools such as Airflow, Dagster, Prefect, Temporal, or similar.
  • Hands-on experience with cloud data platforms (AWS, GCP, or Azure) and modern data stack tools
  • Deep understanding of data warehousing concepts and experience with platforms like Snowflake, BigQuery, Redshift, or similar
  • Strong software engineering fundamentals including testing, CI/CD, version control, and writing maintainable, documented code
  • Track record of optimizing data systems for performance, reliability, and cost efficiency at scale
  • Excellent communication skills and ability to collaborate with cross-functional teams including product, engineering, and design

Job Responsibility:
  • Design and build scalable data pipelines to ingest, process, and transform blockchain data, trading events, user activity, and market signals at high volume and low latency
  • Architect and maintain data infrastructure that powers real-time trading analytics, P&L calculations, leaderboards, market cap tracking, and liquidity monitoring across the platform
  • Own ETL/ELT processes that transform raw onchain data from multiple blockchains into clean, reliable, and performant datasets used by product, engineering, analytics, and ML teams
  • Build and optimize data models and schemas that support both operational systems (serving live trading data) and analytical use cases (understanding market dynamics and user behavior)
  • Establish data quality frameworks including monitoring, alerting, testing, and validation to ensure pipeline reliability and data accuracy at scale (see the sketch after this list)
  • Collaborate with backend engineers to design event schemas, data contracts, and APIs that enable real-time data flow between systems
  • Partner with product and analytics teams to understand data needs and translate them into robust engineering solutions
  • Provide technical leadership by mentoring engineers, conducting code reviews, establishing best practices, and driving architectural decisions for the data platform
  • Optimize performance and costs of data infrastructure as we scale to handle exponentially growing trading volumes
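
Purely as an illustration of the data-quality bullet above (the record type and rules are invented here, not Zora's actual framework), a small validation gate of the kind such frameworks formalize:

```python
# Illustrative data-quality gate; the TradeEvent type and rules are invented.
from dataclasses import dataclass

@dataclass
class TradeEvent:
    tx_hash: str
    amount: float
    block_number: int

def validate(events: list[TradeEvent]) -> list[str]:
    """Return human-readable violations; an orchestrator would alert on any."""
    errors: list[str] = []
    for e in events:
        if not e.tx_hash:
            errors.append("missing tx_hash")
        if e.amount < 0:
            errors.append(f"negative amount in {e.tx_hash}")
        if e.block_number <= 0:
            errors.append(f"non-positive block_number in {e.tx_hash}")
    return errors

# Usage: fail the pipeline run (and alert) rather than publish bad data.
violations = validate([TradeEvent("0xabc", 12.5, 184)])
assert not violations, violations
```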

What we offer:
  • Remote-First Culture: Work from anywhere in the world!
  • Competitive Compensation: Including salary, pre-IPO stock options, token compensation, and additional financial incentives
  • Comprehensive Benefits: Robust healthcare options, including fully covered medical, dental, and vision for employees
  • Retirement Contributions: Up to 4% employer match on your 401(k) contributions
  • Health & Wellness: Free memberships to One Medical, Teladoc, and Health Advocate
  • Unlimited Time Off: Flexible vacation policies, company holidays, and recharge weeks to prioritize wellness
  • Home Office Reimbursement: To cover home office items, monthly home internet, and monthly cell phone (if applicable)
  • Ease of Life Reimbursement: To cover everything from an Uber home in the rain, childcare, or meal delivery
  • Career Development: Access to mentorship, training, and opportunities to grow your career
  • Inclusive Environment: A culture dedicated to diversity, equity, inclusion, and belonging

Employment Type: Fulltime

Senior Data & AI/ML Engineer - GCP Specialization Lead

We are on a bold mission to create the best software services offering in the wo...

Location:
United States, Menlo Park

Salary:
Not provided

techjays

Expiration Date:
Until further notice

Requirements:
  • GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI
  • ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow (see the sketch after this list)
  • Programming: Python & SQL
  • MLOps: CI/CD for ML, Model deployment & monitoring
  • Infrastructure-as-Code: Terraform
  • Data Engineering: ETL/ELT, real-time & batch pipelines
  • AI/ML Tools: TensorFlow, scikit-learn, XGBoost
  • Min Experience: 10+ Years
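
A minimal, assumption-heavy sketch of the Vertex AI / Kubeflow pipeline skill above, using the KFP v2 SDK; the component logic and all names are placeholders, not techjays' actual pipeline.

```python
# Illustrative KFP v2 pipeline; component logic and names are placeholders.
from kfp import compiler, dsl

@dsl.component
def train_model(learning_rate: float) -> str:
    # A real component would fit a model and write artifacts to Cloud Storage.
    return f"trained with lr={learning_rate}"

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train_model(learning_rate=learning_rate)

# Compile to a spec that Vertex AI Pipelines can execute.
compiler.Compiler().compile(training_pipeline, "training_pipeline.json")
```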

Job Responsibility:
  • Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage
  • Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines
  • Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry
  • Define and implement data governance, lineage, monitoring, and quality frameworks
  • Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions
  • Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP
  • Contribute to building repeatable solution accelerators in Data & AI/ML
  • Work with the leadership team to align with Google Cloud Partner Program metrics
  • Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning
  • Organize and lead internal GCP AI/ML enablement sessions

What we offer:
  • Best in class packages
  • Paid holidays and flexible paid time away
  • Casual dress code & flexible working environment
  • Medical Insurance covering self & family up to 4 lakhs per person

Data Engineering & Analytics Lead

Premium Health is seeking a highly skilled, hands-on Data Engineering & Analytic...

Location:
United States, Brooklyn

Salary:
Not provided

Premium Health

Expiration Date:
Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree preferred
  • Proven track record and progressively responsible experience in data engineering, data architecture, or related technical roles
  • Healthcare experience preferred
  • Strong knowledge of data engineering principles, data integration, ETL processes, and semantic mapping techniques and best practices
  • Experience implementing data quality management processes, data governance frameworks, cataloging, and master data management concepts
  • Familiarity with healthcare data standards (e.g., HL7, FHIR), health information management principles, and regulatory requirements (e.g., HIPAA)
  • Understanding of healthcare data, including clinical, operational, and financial data models, preferred
  • Advanced proficiency in SQL, data modeling, database design, optimization, and performance tuning
  • Experience designing and integrating data from disparate systems into harmonized data models or semantic layers
  • Hands-on experience with modern cloud-based data platforms (e.g., Azure, AWS, GCP)

Job Responsibility:
  • Collaborate with the CDIO and Director of Technology to define a clear data vision aligned with the organization's goals and execute the enterprise data roadmap
  • Serve as a thought leader for data engineering and analytics, guiding the evolution of our data ecosystem and championing data-driven decision-making across the organization
  • Build and mentor a small data team, providing technical direction and performance feedback, fostering best practices and continuous learning, while remaining a hands-on implementor
  • Define and implement best practices, standards, and processes for data engineering, analytics, and data management across the organization
  • Design, implement, and maintain a scalable, reliable, and high-performing modern data infrastructure, aligned with the organizational needs and industry best practices
  • Architect and maintain data lake/lakehouse, warehouse, and related platform components to support analytics, reporting, and operational use cases
  • Establish and enforce data architecture standards, governance models, naming conventions, and documentation
  • Develop, optimize, and maintain scalable ETL/ELT pipelines and data workflows to collect, transform, normalize, and integrate data from diverse systems
  • Implement robust data quality processes, validation, monitoring, and error-handling frameworks
  • Ensure data is accurate, timely, secure, and ready for self-service analytics and downstream applications

What we offer:
  • Paid Time Off, Medical, Dental and Vision plans, Retirement plans
  • Public Service Loan Forgiveness (PSLF)

Employment Type: Fulltime

Senior Data Engineer – Data Engineering & AI Platforms

We are looking for a highly skilled Senior Data Engineer (L2) who can design, bu...

Location:
India, Chennai, Madurai, Coimbatore

Salary:
Not provided

OptiSol Business Solutions

Expiration Date:
Until further notice

Requirements:
  • Strong hands-on expertise in cloud ecosystems (Azure / AWS / GCP)
  • Excellent Python programming skills with data engineering libraries and frameworks
  • Advanced SQL capabilities including window functions, CTEs, and performance tuning
  • Solid understanding of distributed processing using Spark/PySpark (see the sketch after this list)
  • Experience designing and implementing scalable ETL/ELT workflows
  • Good understanding of data modeling concepts (dimensional, star, snowflake)
  • Familiarity with GenAI/LLM-based integration for data workflows
  • Experience working with Git, CI/CD, and Agile delivery frameworks
  • Strong communication skills for interacting with clients, stakeholders, and internal teams
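
As a small illustration of the window-function and PySpark skills listed above (the session setup and data are invented for the example):

```python
# Illustrative PySpark window function: latest order per customer.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window-demo").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2026-01-01", 120.0),
     ("c1", "2026-01-03", 80.0),
     ("c2", "2026-01-02", 200.0)],
    ["customer_id", "order_date", "amount"],
)

# Rank each customer's orders by date, newest first, and keep rank 1.
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest = (
    orders.withColumn("rn", F.row_number().over(w))
          .where(F.col("rn") == 1)
          .drop("rn")
)
latest.show()
```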

Job Responsibility:
  • Design, build, and maintain scalable ETL/ELT pipelines across cloud and big data platforms
  • Contribute to architectural discussions by translating business needs into data solutions spanning ingestion, transformation, and consumption layers
  • Work closely with solutioning and pre-sales teams for technical evaluations and client-facing discussions
  • Lead squads of L0/L1 engineers—ensuring delivery quality, mentoring, and guiding career growth
  • Develop cloud-native data engineering solutions using Python, SQL, PySpark, and modern data frameworks
  • Ensure data reliability, performance, and maintainability across the pipeline lifecycle—from development to deployment
  • Support long-term ODC/T&M projects by demonstrating expertise during technical discussions and interviews
  • Integrate emerging GenAI tools where applicable to enhance data enrichment, automation, and transformations

What we offer:
  • Opportunity to work at the intersection of Data Engineering, Cloud, and Generative AI
  • Hands-on exposure to modern data stacks and emerging AI technologies
  • Collaboration with experts across Data, AI/ML, and cloud practices
  • Access to structured learning, certifications, and leadership mentoring
  • Competitive compensation with fast-track career growth and visibility

Employment Type: Fulltime

Lead Data Engineer

As a Lead Data Engineer or architect at Made Tech, you'll play a pivotal role in...

Location:
United Kingdom, Any UK Office Hub (Bristol / London / Manchester / Swansea)

Salary:
80000.00 - 96000.00 GBP / Year

Made Tech

Expiration Date:
Until further notice

Requirements:
  • Proficiency in Git (including GitHub Actions) and able to explain the benefits of different branch strategies
  • Strong experience in IaC and able to guide how one could deploy infrastructure into different environments
  • Knowledge of handling and transforming various data types (JSON, CSV, etc.) with Apache Spark, Databricks or Hadoop
  • Good understanding of possible architectures involved in modern data system design (Data Warehouse, Data Lakes, Data Meshes)
  • Ability to create data pipelines on a cloud environment and integrate error handling within these pipelines
  • You understand how to create reusable libraries to encourage uniformity of approach across multiple data pipelines (see the sketch after this list)
  • Able to document and present end-to-end diagrams to explain a data processing system on a cloud environment
  • Some knowledge of how you would present diagrams (C4, UML, etc.)
  • Enthusiasm for learning and self-development
  • You have experience of working on agile delivery-led projects and can apply agile practices such as Scrum, XP, and Kanban
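
One way to read the reusable-library and error-handling bullets above is a shared retry helper that every pipeline step imports; a minimal sketch with invented names:

```python
# Illustrative shared pipeline utility: retry with logging. Names are invented.
import logging
import time
from functools import wraps

def with_retries(attempts: int = 3, delay_s: float = 2.0):
    """Retry a pipeline step on failure, logging every attempt."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    logging.exception(
                        "step %s failed (attempt %d/%d)", fn.__name__, attempt, attempts
                    )
                    if attempt == attempts:
                        raise
                    time.sleep(delay_s)
        return wrapper
    return decorator

@with_retries(attempts=3)
def load_csv_to_lake(path: str) -> None:
    ...  # a real step would validate, transform, and write the file
```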

Job Responsibility:
  • Define, shape and perfect data strategies in central and local government
  • Help public sector teams understand the value of their data, and make the most of it
  • Establish yourself as a trusted advisor in data driven approaches using public cloud services like AWS, Azure and GCP
  • Contribute to our recruitment efforts and take on line management responsibilities
  • Help implement efficient data pipelines & storage

What we offer:
  • 30 days of paid annual leave
  • Flexible parental leave options
  • Part time remote working for all our staff
  • Paid counselling as well as financial and legal advice
  • 7% employer matched pension
  • Flexible benefit platform which includes a Smart Tech scheme, Cycle to work scheme, and an individual benefits allowance which you can invest in a Health care cash plan or Pension plan
  • Optional social and wellbeing calendar of events

Employment Type: Fulltime

Data Engineering Lead

The Engineering Lead Analyst is a senior level position responsible for leading ...

Location:
Singapore, Singapore

Salary:
Not provided

Citi

Expiration Date:
Until further notice

Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), plus Scala and SQL (see the sketch after this list)
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio) - ETL design & build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain – Git, Bitbucket, Jira
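
To ground the Spark Streaming and Kafka items above, a minimal PySpark Structured Streaming sketch; the broker address and topic are placeholders, and the Kafka connector must be supplied at submit time.

```python
# Illustrative Kafka -> console stream; broker and topic are placeholders.
# Requires the spark-sql-kafka-0-10 connector package on the classpath.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "trades")
         .load()
)

query = (
    stream.selectExpr("CAST(value AS STRING) AS payload")
          .writeStream.format("console")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```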

Job Responsibility:
  • Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness
  • Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions
  • Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations

What we offer:
  • Equal opportunity employer commitment
  • Accessibility and accommodation support
  • Global workforce benefits

Employment Type: Fulltime

Data Engineering Lead

The Data Engineering Lead is a strategic professional who stays abreast of developments...

Location:
India, Pune

Salary:
Not provided

Citi

Expiration Date:
Until further notice

Requirements:
  • 10-15 years of hands-on experience in Hadoop, Scala, Java, Spark, Hive, Kafka, Impala, Unix Scripting and other Big data frameworks
  • 4+ years of experience with relational SQL and NoSQL databases: Oracle, MongoDB, HBase
  • Strong proficiency in Python and Spark Java, with knowledge of core Spark concepts (RDDs, DataFrames, Spark Streaming, etc.), plus Scala and SQL
  • Data integration, migration, and large-scale ETL experience (common ETL platforms such as PySpark, DataStage, Ab Initio) - ETL design & build, handling, reconciliation, and normalization
  • Data Modeling experience (OLAP, OLTP, Logical/Physical Modeling, Normalization, knowledge on performance tuning)
  • Experienced in working with large and multiple datasets and data warehouses
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and datasets
  • Strong analytic skills and experience working with unstructured datasets
  • Ability to effectively use complex analytical, interpretive, and problem-solving techniques
  • Experience with Confluent Kafka, Red Hat jBPM, and CI/CD build pipelines and toolchain – Git, Bitbucket, Jira

Job Responsibility:
  • Strategic Leadership: Define and execute the data engineering roadmap for Global Wealth Data, aligning with overall business objectives and technology strategy
  • Team Management: Lead, mentor, and develop a high-performing, globally distributed team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement
  • Architecture and Design: Oversee the design and implementation of robust and scalable data pipelines, data warehouses, and data lakes, ensuring data quality, integrity, and availability for global wealth data
  • Technology Selection and Implementation: Evaluate and select appropriate technologies and tools for data engineering, staying abreast of industry best practices and emerging trends specific to wealth management data
  • Performance Optimization: Continuously monitor and optimize data pipelines and infrastructure for performance, scalability, and cost-effectiveness, ensuring optimal access to global wealth data
  • Collaboration: Partner with business stakeholders, data scientists, portfolio managers, and other technology teams to understand data needs and deliver effective solutions that support investment strategies and client reporting
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, security, and compliance with relevant regulations, particularly around sensitive financial data

Employment Type: Fulltime

GCP Data Architect

Location:
Poland

Salary:
Not provided

Lingaro

Expiration Date:
Until further notice

Requirements:
  • At least 6 years of experience as a Data Architect, including a minimum of 4 years working with GCP cloud-based infrastructure & systems
  • Deep knowledge of Google Cloud Platform and cloud computing services
  • Strong experience in the Data & Analytics area
  • Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance
  • Expertise in designing and implementing scalable and efficient data processing frameworks
  • In-depth knowledge of various data technologies and tools, such as columnar databases, relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks
  • Knowledge of modern data transformation tools (such as DBT, Dataform)
  • Knowledge of at least one orchestration and scheduling tool
  • Programming skills (SQL, Python, other scripting)
  • Tools knowledge: Git, Jira, Confluence, etc.

Job Responsibility:
  • Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions
  • Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices
  • Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity
  • Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis
  • Ensure that data engineering solutions align with the organization's long-term data strategy and goals
  • Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures
  • Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting
  • Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables; support the team throughout the implementation process, answering questions and addressing issues as they arise
  • Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications
  • Stay updated with emerging trends and technologies in data engineering, recommending, and implementing innovative solutions as appropriate

What we offer:
  • Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites
  • “Office as an option” model. You can choose to work remotely or in the office
  • Workation. Enjoy working from inspiring locations in line with our workation policy
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs. Lingarians earn 500+ technology certificates yearly
  • Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly