Senior Data & AI/ML Engineer - GCP Specialization Lead

techjays

Location:
United States, Menlo Park

Category:
IT - Software Development

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We are on a bold mission to create the best software services offering in the world, working with everyone from startups in nascent industries and greenfield projects to large-scale enterprises. As a growth-stage company, we combine the depth of capabilities and resources of our leadership with the ambition, culture, and agility of a startup.

Job Responsibility:

  • Design and implement data architectures for real-time and batch pipelines, leveraging GCP services such as BigQuery, Dataflow, Dataproc, Pub/Sub, Vertex AI, and Cloud Storage (see the illustrative sketch after this list)
  • Lead the development of ML pipelines, from feature engineering to model training and deployment using Vertex AI, AI Platform, and Kubeflow Pipelines
  • Collaborate with data scientists to operationalize ML models and support MLOps practices using Cloud Functions, CI/CD, and Model Registry
  • Define and implement data governance, lineage, monitoring, and quality frameworks
  • Build and document GCP-native solutions and architectures that can be used for case studies and specialization submissions
  • Lead client-facing PoCs or MVPs to showcase AI/ML capabilities using GCP
  • Contribute to building repeatable solution accelerators in Data & AI/ML
  • Work with the leadership team to align with Google Cloud Partner Program metrics
  • Mentor engineers and data scientists toward achieving GCP certifications, especially in Data Engineering and Machine Learning
  • Organize and lead internal GCP AI/ML enablement sessions
  • Represent the company in Google partner ecosystem events, tech talks, and joint GTM engagements
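
For illustration only, here is a minimal sketch of the kind of real-time pipeline the first responsibility above describes: a Dataflow (Apache Beam) job that reads JSON events from Pub/Sub and appends them to BigQuery. It is not techjays' actual implementation; the project, topic, table, and schema names are hypothetical placeholders.

```python
# Hypothetical example: a streaming Beam pipeline (runnable on Dataflow)
# that reads JSON events from Pub/Sub and appends them to a BigQuery table.
# All resource names below are placeholders, not real project assets.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # streaming=True is required for the unbounded Pub/Sub source;
    # the Dataflow runner and project flags would normally come from the CLI.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "DecodeJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="user_id:STRING,event_type:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```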

Requirements:

  • GCP Services: BigQuery, Dataflow, Pub/Sub, Vertex AI
  • ML Engineering: End-to-end ML pipelines using Vertex AI / Kubeflow (see the pipeline sketch after this list)
  • Programming: Python & SQL
  • MLOps: CI/CD for ML, Model deployment & monitoring
  • Infrastructure-as-Code: Terraform
  • Data Engineering: ETL/ELT, real-time & batch pipelines
  • AI/ML Tools: TensorFlow, scikit-learn, XGBoost
  • Min Experience: 10+ Years
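
For illustration only, here is a hedged sketch of the kind of end-to-end ML pipeline the requirements above call for: a two-step Kubeflow Pipelines (KFP v2) definition that can be compiled and submitted to Vertex AI Pipelines. The component bodies, pipeline name, and bucket path are hypothetical placeholders, not a prescribed implementation.

```python
# Hypothetical example: a two-step KFP v2 pipeline (feature prep -> training)
# compiled to a spec that could be submitted to Vertex AI Pipelines.
# Paths and names are placeholders.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def prepare_features(raw_path: str) -> str:
    # Placeholder: real feature engineering would read from BigQuery / Cloud Storage.
    return raw_path + "/features"


@dsl.component(base_image="python:3.11")
def train_model(features_path: str) -> str:
    # Placeholder: real training would fit a model and write artifacts to GCS.
    return features_path + "/model"


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(raw_path: str = "gs://example-bucket/raw"):
    features = prepare_features(raw_path=raw_path)
    train_model(features_path=features.output)


if __name__ == "__main__":
    # The resulting spec file would be submitted via the Vertex AI SDK or console.
    compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```
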
What we offer:
  • Best in class packages
  • Paid holidays and flexible paid time away
  • Casual dress code & flexible working environment
  • Medical Insurance covering self & family up to 4 lakhs per person

Additional Information:

Job Posted:
December 12, 2025

Work Type:
Remote work

Similar Jobs for Senior Data & AI/ML Engineer - GCP Specialization Lead

Lead Data Integration Specialist / Senior Full Stack Engineer

This young and agile company, providing identity risk solutions, is currently se...
Location:
United States, New York
Salary:
Not provided
Orbis Consultants
Expiration Date:
Until further notice
Requirements:
  • 10+ years of hands-on experience in developing production-ready software
  • Experience maintaining and working with data integrations / external API sources
  • Demonstrates skill in manoeuvring both front-end and backend technical projects, adept at prioritising tasks, defining requirements, and facilitating productive discussions within the team
  • Brings to the table a collaborative mindset, having effectively led engineering teams
  • Demonstrates a remarkable ability to adapt swiftly to the evolving needs of our growing organisation and dynamic product landscape
  • Proficient in client-side technologies such as TypeScript, JavaScript (ES6/React), HTML/CSS
  • Server-side proficiency in Python (Django)
  • Holds practical experience in managing relational databases, with a strong command over PostgreSQL
Job Responsibility:
  • Design and build a scalable platform that simplifies the creation/operation of hundreds of data partner integrations
  • Liaise with engineers, designers, and product managers to translate our product and technical vision into a concrete roadmap
  • Partner with third-party vendors & our clients to gather requirements and co-create solutions
  • Craft high-quality, thoroughly-tested code that meets the unique requirements of our clients
  • Provide technical mentorship and guidance to fellow engineers
What we offer:
  • Competitive Salary
  • Competitive Package
  • Opportunity to work with an Ambitious, Young, Growing Organisation
  • Unlimited PTO
  • Flexible work policy
  • Fulltime

Senior Data Engineer ETL Lead

The Sr Data Engineer ETL Lead is a senior level position responsible for establi...
Location:
United States, Irving
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • 6+ years of relevant experience in Apps Development or systems analysis role
  • Extensive experience in systems analysis and programming of software applications
  • Experience in managing and implementing successful projects
  • Subject Matter Expert (SME) in at least one area of Applications Development
  • Ability to adjust priorities quickly as circumstances dictate
  • Demonstrated leadership and project management skills
  • Consistently demonstrates clear and concise written and verbal communication
  • Bachelor’s degree in Computer Science or equivalent /University degree or equivalent experience
  • Data Warehouse/ETL design and development methodologies knowledge and experience required
  • ETL expertise with the Ab Initio tool (EME, GDE, Co-op), bringing together components such as Unix, Oracle, and storage
Job Responsibility:
  • Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
  • Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
  • Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
  • Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency
What we offer:
  • Medical, dental & vision coverage
  • 401(k)
  • life, accident, and disability insurance
  • wellness programs
  • paid time off packages including vacation, sick leave and paid holidays
  • Fulltime

Senior AI Data Engineer

We are looking for a Senior AI Data Engineer to join an exciting project for our...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Science, Artificial Intelligence, or a related field
  • Several years of experience in AI and Machine Learning development, preferably in Customer Care solutions
  • Strong proficiency in Python and NLP frameworks
  • Hands-on experience with Azure AI services (e.g., Azure Machine Learning, Cognitive Services, Bot Services)
  • Solid understanding of cloud architectures and microservices on Azure
  • Experience with CI/CD pipelines and MLOps
  • Excellent leadership and communication skills
  • Analytical mindset with strong problem-solving abilities
  • Polish and English at a minimum B2 level.
Job Responsibility:
  • Lead the development and implementation of AI-powered features for a Customer Care platform
  • Design and deploy Machine Learning and NLP models to automate customer inquiries
  • Collaborate with DevOps and cloud architects to ensure a high-performance, scalable, and secure Azure-based architecture
  • Optimize AI models to enhance customer experience
  • Integrate Conversational AI, chatbots, and language models into the platform
  • Evaluate emerging technologies and best practices in Artificial Intelligence
  • Mentor and guide a team of AI/ML developers.
What we offer:
  • Flexible working hours
  • Hybrid work model, allowing employees to divide their time between home and modern offices in key Polish cities
  • A cafeteria system that allows employees to personalize benefits by choosing from a variety of options
  • Generous referral bonuses, offering up to PLN6,000 for referring specialists
  • Additional revenue sharing opportunities for initiating partnerships with new clients
  • Ongoing guidance from a dedicated Team Manager for each employee
  • Tailored technical mentoring from an assigned technical leader, depending on individual expertise and project needs
  • Dedicated team-building budget for online and on-site team events
  • Opportunities to participate in charitable initiatives and local sports programs
  • A supportive and inclusive work culture with an emphasis on diversity and mutual respect.
  • Fulltime

Senior AI/ML Engineer

At Sigma, we’re not just adding AI—we’re building the future of how people work ...
Location:
United States, San Francisco
Salary:
240000.00 - 270000.00 USD / Year
Sigma Computing
Expiration Date:
Until further notice
Requirements:
  • 10+ years of experience building and deploying production-grade AI/ML systems
  • Deep knowledge of machine learning, deep learning, and applied AI
  • Experience across the full ML lifecycle: data curation, training, deployment, monitoring
  • A track record of building things that ship—whether it’s recommendations, search, machine translation, or something equally complex
  • Experience adapting or training foundation models (language or multimodal) for novel domains
Job Responsibility:
  • Partner with product, design, and engineering teams to identify high-impact AI/ML opportunities
  • Prototype and productionize AI systems that feel intuitive but do a lot under the hood—recommendations, natural language interfaces, agentic workflows, and more
  • Develop and scale AI/ML infrastructure that powers both internal tooling and customer-facing features
  • Tackle novel UX problems at the intersection of AI, Visualizations, and apps
What we offer:
  • Equity
  • Generous health benefits
  • Flexible time off policy
  • Paid bonding time for all new parents
  • Traditional and Roth 401k
  • Commuter and FSA benefits
  • Lunch Program
  • Dog friendly office
  • Fulltime

Senior ML/AI Engineer

Provectus helps companies adopt ML/AI to transform the ways they operate, compet...
Location:
Colombia, Medellín; Bogotá; Cali; Barranquilla
Salary:
Not provided
Provectus
Expiration Date:
Until further notice
Requirements:
  • Comfortable with standard ML algorithms and underlying math
  • Practical experience with solving classification and regression tasks in general, feature engineering
  • Practical experience with ML models in production: orchestrating workflows, monitoring metrics
  • Practical experience with one or more use cases from the following: NLP, LLMs, and Recommendation engines
  • Solid software engineering skills (i.e., ability to produce well-structured modules, not only notebook scripts)
  • Python expertise, Docker
  • Practical experience with cloud platforms (AWS stack is preferred, e.g. Amazon SageMaker, GCP, ECS, EMR/Glue, S3, Lambda, SQS)
  • English level - Upper Intermediate
  • Excellent communication and problem-solving skills
Job Responsibility:
  • Create ML models from scratch or improve existing models
  • Create ML/AI pipelines that include custom models or APIs as part of the processing
  • Collaborate with the engineering team, data scientists, and product managers on production models
  • Develop experimentation roadmap
  • Set up a reproducible experimentation environment and maintain experimentation pipelines
  • Monitor and maintain ML models in production to ensure optimal performance
  • Write clear and comprehensive documentation for ML models, processes, and pipelines

Senior AI ML Engineer

We are seeking a highly skilled and experienced Assistant Vice President (AVP), ...
Location:
India, Pune
Salary:
Not provided
Citi
Expiration Date:
Until further notice
Requirements:
  • Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, Statistics, or a related quantitative field
  • Minimum of 6+ years of professional experience in Data Science, Machine Learning Engineering, or a similar role, with a strong track record of deploying ML models to production
  • Proven experience in a lead or senior technical role
  • Expert-level proficiency in Python programming, including experience with relevant data science libraries (e.g., Pandas, NumPy, Scikit-learn) and deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Strong hands-on experience designing, developing, and deploying RESTful APIs using FastAPI
  • Solid understanding and practical experience with CI/CD tools and methodologies (e.g., Jenkins, GitLab CI, GitHub Actions, Azure DevOps) for MLOps
  • Experience with MLOps platforms, model monitoring, and model versioning
  • Experience with at least one major cloud provider (e.g., AWS, Azure, GCP) for deploying and managing ML workloads
  • Proficiency in SQL and experience working with relational and/or NoSQL databases
  • Deep understanding of machine learning algorithms, statistical modeling, and data mining techniques
Job Responsibility:
  • Design, develop, and implement advanced machine learning models (e.g., predictive, prescriptive, generative AI) to solve complex business problems, from initial data exploration and feature engineering to model training and evaluation
  • Lead the deployment of AI/ML models into production environments, ensuring scalability, reliability, and performance
  • Build and maintain robust, high-performance APIs (using frameworks like FastAPI) to serve machine learning models and integrate them with existing applications and systems
  • Establish and manage continuous integration and continuous deployment (CI/CD) pipelines for ML code and model deployments, promoting automation and efficiency
  • Collaborate with data engineers to ensure optimal data pipelines and data quality for model development and deployment
  • Conduct rigorous experimentation, A/B testing, and model performance monitoring to continuously improve and optimize AI/ML solutions
  • Promote and enforce best practices in software development, including clean code, unit testing, documentation, and version control
  • Mentor junior team members, contribute to technical discussions, and drive the adoption of new technologies and methodologies within the team
  • Effectively communicate complex technical concepts and model results to both technical and non-technical stakeholders.
What we offer:
  • Not explicitly stated.
  • Fulltime

GCP Senior Data Platform Engineer

HSBC is seeking an experienced professional for the role of GCP Data Platform Te...
Location:
Poland
Salary:
Not provided
HSBC
Expiration Date:
January 01, 2026
Requirements:
  • Strong programming skills in Python (libraries, API connection, token usage)
  • demonstrable experience preparing and presenting architecture artefacts to design boards
  • experience providing feedback and technical knowledge to facilitate peer code review
  • using architecture patterns to accelerate decisions and design
  • experience with cloud and cloud architectures in GCP (GCP cloud composer, BigQuery, dataflow, Google Cloud Storage, Service accounts, GCP pub/sub, etc.)
  • knowledge of a wide range of technologies and solutions, using these to design creative and innovative solutions
  • a track record of owning and delivering solutions across a broad spectrum of the project delivery lifecycle involving mixed-shore resource in a complex stakeholder environment
  • the ability to communicate efficiently upwards (to business), downwards (to IT teams) and laterally (to peers, vendors, and client-side staff)
  • familiarity with technology concepts, roles, and terminology, and the ability to work closely with application architects and modelers
  • experience with API design and micro-service architectures
Job Responsibility:
  • Working for the Data Platform Tech Manager, the Technical Lead needs to manage all technical aspects
  • define and maintain the technology stack and roadmap
  • provide key decisions in terms of stack, design and code quality
  • ensure the solution is aligned with HSBC standards in terms of architecture, controls, security, scalability, performance
  • resolve technical issues with help of the technical development Team and perform code reviews
  • under the direction and management of the Data Platform Manager and Product Owner, the Technical Lead will collaborate with the technical dev team, Scrum Master, Architect, Business Analysts providing the Data Requirements.
What we offer:
  • Competitive salary
  • annual performance-based bonus
  • additional bonuses for recognition awards
  • Multisport card
  • private medical care
  • life insurance
  • one-time reimbursement of home office set-up (up to 800 PLN)
  • corporate parties & events
  • CSR initiatives
  • nursery discounts
  • Fulltime

Senior Data Engineer

We are seeking a highly skilled and motivated Senior Data Engineer/s to architec...
Location:
India, Hyderabad
Salary:
Not provided
Tech Mahindra
Expiration Date:
January 30, 2026
Requirements:
  • 7-10 years of experience in data engineering with a focus on Microsoft Azure and Fabric technologies
  • Strong expertise in: Microsoft Fabric (Lakehouse, Dataflows Gen2, Pipelines, Notebooks)
  • Strong expertise in: Azure Data Factory, Azure SQL, Azure Data Lake Storage Gen2
  • Strong expertise in: Power BI and/or other visualization tools
  • Strong expertise in: Azure Functions, Logic Apps, and orchestration frameworks
  • Strong expertise in: SQL, Python and PySpark/Scala
  • Experience working with structured and semi-structured data (JSON, XML, CSV, Parquet)
  • Proven ability to build metadata driven architectures and reusable components
  • Strong understanding of data modeling, data governance, and security best practices
Job Responsibility:
  • Design and implement ETL pipelines using Microsoft Fabric (Dataflows, Pipelines, Lakehouse, Warehouse, SQL) and Azure Data Factory
  • Build and maintain a metadata driven Lakehouse architecture with threaded datasets to support multiple consumption patterns
  • Develop agent specific data lakes and an orchestration layer for an uber agent that can query across agents to answer customer questions
  • Enable interactive data consumption via Power BI, Azure OpenAI, and other analytics tools
  • Ensure data quality, lineage, and governance across all ingestion and transformation processes
  • Collaborate with product teams to understand data needs and deliver scalable solutions
  • Optimize performance and cost across storage and compute layers

Senior Data Engineer

As a Data Software Engineer, you will leverage your skills in data science, mach...
Location:
United States, Arlington; Woburn
Salary:
134000.00 - 184000.00 USD / Year
STR
Expiration Date:
Until further notice
Requirements:
  • Ability to obtain a Top Secret (TS) security clearance, for which U.S. citizenship is required by the U.S. government
  • 5+ years of experience in one or more high level programming languages, like Python
  • Experience working with large datasets for machine learning applications
  • Experience in navigating and contributing to complex, large code bases
  • Experience with containerization practices and CI/CD workflows
  • BS, MS, or PhD in a related field or equivalent experience
Job Responsibility:
  • Explore a variety of data sources to build and integrate machine learning models into software
  • Develop machine learning and deep learning algorithms for large-scale multi-modal problems
  • Work with large data sets and develop software solutions for scalable analysis
  • Collaborate to create and maintain software for data pipelines, algorithms, storage, and access
  • Monitor software deployments, create logging frameworks, and design APIs
  • Build analytic tools that utilize data pipelines to provide actionable insights into customer requests
  • Develop and execute plans to improve software robustness and ensure system performance
  • Fulltime

Senior Data Engineer

SoundCloud is looking for a Senior Data Engineer to join our growing Content Pla...
Location:
Germany; United Kingdom, Berlin; London
Salary:
Not provided
SoundCloud
Expiration Date:
Until further notice
Requirements:
  • Proven experience in backend engineering (Scala/Go/Python) with strong design and data modeling skills
  • Hands-on experience building ETL/ELT pipelines and streaming solutions on cloud platforms (GCP preferred)
  • Proficient in SQL and experienced with relational and NoSQL databases
  • Familiarity with event-driven architectures and messaging systems (Pub/Sub, Kafka, etc.)
  • Knowledge of data governance, schema management, and versioning best practices
  • Understanding of observability practices: logging, metrics, tracing, and incident response
  • Experience with containerization and orchestration (Docker, Kubernetes)
  • Experience deploying and managing services in cloud environments, preferably GCP, AWS
  • Strong collaboration skills and ability to work across backend, data, and product teams
Job Responsibility:
  • Design, build, and maintain high-performance services for content modeling, serving, and integration
  • Develop data pipelines (batch & streaming) with cloud native tools
  • Collaborate on rearchitecting the content model to support rich metadata
  • Implement APIs and data services that power internal products, external integrations, and real-time features
  • Ensure data quality, governance, and validation across ingestion, storage, and serving layers
  • Optimize system performance, scalability, and cost efficiency for both backend services and data workflows
  • Work with infrastructure-as-code (Terraform) and CI/CD pipelines for deployment and automation
  • Monitor, debug, and improve reliability using various observability tools (logging, tracing, metrics)
  • Collaborate with product leadership, music industry experts, and engineering teams across SoundCloud
What we offer:
  • Extensive relocation support including allowances, one way flights, temporary accommodation and on the ground support on arrival
  • Creativity and Wellness benefit
  • Employee Equity Plan
  • Generous professional development allowance
  • Flexible vacation and public holiday policy where you can take up to 35 days of PTO annually
  • 16 paid weeks for all parents (birthing and non-birthing), regardless of gender, to welcome newborns, adopted and foster children
  • Free German courses at beginner, intermediate, and advanced levels
  • Various snacks, goodies, and 2 free lunches weekly when at the office
  • Fulltime

Senior Data Engineer

Senior Data Engineer – Dublin (Hybrid) Contract Role | 3 Days Onsite. We are see...
Location:
Ireland, Dublin
Salary:
Not provided
Solas IT Recruitment
Expiration Date:
Until further notice
Requirements:
  • 7+ years of experience as a Data Engineer working with distributed data systems
  • 4+ years of deep Snowflake experience, including performance tuning, SQL optimization, and data modelling
  • Strong hands-on experience with the Hadoop ecosystem: HDFS, Hive, Impala, Spark (PySpark preferred)
  • Oozie, Airflow, or similar orchestration tools
  • Proven expertise with PySpark, Spark SQL, and large-scale data processing patterns
  • Experience with Databricks and Delta Lake (or equivalent big-data platforms)
  • Strong programming background in Python, Scala, or Java
  • Experience with cloud services (AWS preferred): S3, Glue, EMR, Redshift, Lambda, Athena, etc.
Job Responsibility:
  • Build, enhance, and maintain large-scale ETL/ELT pipelines using Hadoop ecosystem tools including HDFS, Hive, Impala, and Oozie/Airflow
  • Develop distributed data processing solutions with PySpark, Spark SQL, Scala, or Python to support complex data transformations
  • Implement scalable and secure data ingestion frameworks to support both batch and streaming workloads
  • Work hands-on with Snowflake to design performant data models, optimize queries, and establish solid data governance practices
  • Collaborate on the migration and modernization of current big-data workloads to cloud-native platforms and Databricks
  • Tune Hadoop, Spark, and Snowflake systems for performance, storage efficiency, and reliability
  • Apply best practices in data modelling, partitioning strategies, and job orchestration for large datasets
  • Integrate metadata management, lineage tracking, and governance standards across the platform
  • Build automated validation frameworks to ensure accuracy, completeness, and reliability of data pipelines
  • Develop unit, integration, and end-to-end testing for ETL workflows using Python, Spark, and dbt testing where applicable

Senior Data Engineer

We are looking for a foundational member of the Data Team to enable Skydio to ma...
Location:
United States, San Mateo
Salary:
170000.00 - 230000.00 USD / Year
Skydio
Expiration Date:
Until further notice
Requirements:
  • 5+ years of professional experience
  • 2+ years in software engineering
  • 2+ years in data engineering with a bias towards getting your hands dirty
  • Deep experience with Databricks or Palantir Foundry, including building pipelines, managing datasets, and developing dashboards or analytical applications
  • Proven track record of operating scalable data platforms, defining company-wide patterns that ensure reliability, performance, and cost effectiveness
  • Proficiency in SQL and at least one modern programming language (for example, Python or Java)
  • Strong communication skills, with the ability to collaborate effectively across all levels and functions
  • Demonstrated ability to lead technical direction, mentor teammates, and promote engineering excellence and best practices across the organization
  • Familiarity with AI-assisted data workflows, including tools that accelerate data transformations or enable natural-language interfaces for analytics
Job Responsibility:
  • Design and scale the data infrastructure that ingests live telemetry from tens of thousands of autonomous drones
  • Build and evolve our Databricks and Palantir Foundry environments
  • Develop data systems that make our products truly data-driven
  • Create and integrate AI-powered tools for data analysis, transformation, and pipeline generation
  • Champion a data-driven culture by defining and enforcing best practices for data quality, lineage, and governance
  • Collaborate with autonomy, manufacturing, and operations teams to unify how data flows across the company
  • Lead and mentor data engineers, analysts, and stakeholders across Skydio
  • Ensure platform reliability by implementing robust monitoring, observability, and contributing to the on-call rotation for critical data systems
What we offer:
  • Equity in the form of stock options
  • Comprehensive benefits packages
  • Relocation assistance may also be provided for eligible roles
  • Group health insurance plans
  • Paid vacation time
  • Sick leave
  • Holiday pay
  • 401K savings plan
  • Fulltime