Senior AI Data Pipeline Engineer

Trimble Inc.

Location:
United States, Lake Oswego

Contract Type:
Employment contract

Salary:

105,600.00 - 145,200.00 USD / Year

Job Description:

Shape the Future of Intelligence as our next Senior AI Data Pipeline Engineer! Are you ready to build the backbone of next-generation AI? Trimble is looking for a visionary AI Data Pipeline Engineer to design and scale the sophisticated infrastructure that powers our global data-driven initiatives. You will play a critical role in transforming how the world moves, builds, and grows by engineering high-performance streaming architectures and production-ready AI workflows that deliver real-world impact.

About Us: Trimble is a global technology company that connects the physical and digital worlds, transforming the way work gets done. With relentless innovation in precise positioning, modeling, and data analytics, Trimble enables essential industries including construction, geospatial, and transportation. Whether it's helping customers build and maintain infrastructure, design and construct buildings, optimize global supply chains, or map the world, Trimble is at the forefront, driving productivity and progress.

AECO: The Trimble AECO segment provides digital construction solutions that increase precision and productivity for Architecture, Engineering, Construction, and Operations.

What Makes This Role Great: In this role, you will be the primary architect of our data evolution within the AECO segment, directly influencing the scalability of AI initiatives and shaping the future of digital construction technology. You will have the unique opportunity to move beyond traditional ETL, owning the deployment of containerized workloads in Kubernetes and integrating emerging AI tooling into production workflows that solve the world's most complex physical challenges.

Job Responsibility:

  • Design, build, and optimize scalable batch and real-time data pipelines
  • Manage and administer Databricks workspaces, clusters, jobs, and performance tuning
  • Develop and maintain streaming architectures using Kafka
  • Implement and manage Change Data Capture (CDC) pipelines using Debezium connectors
  • Deploy, monitor, and manage containerized workloads using Kubernetes
  • Implement CI/CD practices for data engineering workflows
  • Ensure data quality, observability, governance, and security best practices
  • Collaborate with data scientists, ML engineers, and software engineers to deliver production-grade data solutions
  • Support and optimize AI/ML data pipelines and model deployment workflows
  • Troubleshoot production issues and implement performance improvements
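
As a minimal sketch of the Debezium CDC responsibility above: Debezium wraps each database change in a JSON envelope whose payload carries "before"/"after" row images and an op code. The consumer below applies such events to an in-memory replica; the envelope shape follows Debezium's documented format, but the table fields are hypothetical.

```python
import json

# Debezium op codes: c = create, u = update, d = delete, r = snapshot read.
# The "status" field and id-keyed table below are illustrative only.

def apply_change(event_json: str, table: dict) -> None:
    """Apply one CDC event to an in-memory replica keyed by 'id'."""
    payload = json.loads(event_json)["payload"]
    op, before, after = payload["op"], payload["before"], payload["after"]
    if op in ("c", "u", "r"):      # upsert the new row image
        table[after["id"]] = after
    elif op == "d":                # delete: only 'before' is populated
        table.pop(before["id"], None)

create = json.dumps({"payload": {"op": "c", "before": None,
                                 "after": {"id": 1, "status": "open"}}})
update = json.dumps({"payload": {"op": "u", "before": {"id": 1, "status": "open"},
                                 "after": {"id": 1, "status": "closed"}}})
delete = json.dumps({"payload": {"op": "d", "before": {"id": 1, "status": "closed"},
                                 "after": None}})

replica: dict = {}
for event in (create, update, delete):
    apply_change(event, replica)   # replica ends empty after the delete
```

In production the same handler would consume from a Kafka Connect topic rather than a hard-coded list.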

Requirements:

  • 3+ years of experience in data engineering or a related field
  • Strong hands-on experience managing and optimizing Databricks
  • Experience building and maintaining streaming pipelines with Kafka
  • Experience implementing Change Data Capture (CDC) using Debezium connectors
  • Practical experience deploying and operating services in Kubernetes
  • Strong proficiency in Python and/or Scala
  • Experience with SQL and distributed data processing frameworks (e.g., Spark)
  • Familiarity with cloud platforms (AWS, Azure, or GCP)
  • Experience with infrastructure-as-code tools (Terraform, etc.)
  • Strong understanding of distributed systems concepts

Nice to have:

  • Experience integrating AI/ML tooling into data pipelines
  • Familiarity with MLOps practices
  • Experience with LLM-based workflows, vector databases, or model serving platforms
  • Knowledge of data lakehouse architectures
  • Experience with monitoring and observability tools
What we offer:
  • Medical
  • Dental
  • Vision
  • Life
  • Disability
  • Time off plans
  • Retirement plans
  • Tax savings plans for health, dependent care, and commuter expenses
  • Paid Parental Leave
  • Employee Stock Purchase Plan
  • Bonus eligible

Additional Information:

Job Posted:
April 23, 2026

Employment Type:
Full-time
Work Type:
On-site work

Similar Jobs for Senior AI Data Pipeline Engineer

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Cogoport
Expiration Date:
Until further notice
Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems
Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timeline
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
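
As a sketch of the real-time anomaly-detection bullet: one simple approach is a rolling z-score over recent transit times, flagging a shipment leg that deviates sharply from the window baseline. Window size, warm-up length, threshold, and the readings are illustrative choices, not from the posting.

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 20, z_threshold: float = 3.0):
    """Build a streaming detector over a rolling window of transit times."""
    history: deque = deque(maxlen=window)

    def observe(transit_hours: float) -> bool:
        """Return True when the observation deviates sharply from the window."""
        anomalous = False
        if len(history) >= 5:  # wait for a minimal baseline before flagging
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(transit_hours - mu) / sigma > z_threshold
        history.append(transit_hours)
        return anomalous

    return observe

detect = make_detector()
readings = [48, 50, 49, 51, 47, 50, 48, 120]  # final leg is badly delayed
flags = [detect(r) for r in readings]          # only the last flag is True
```

In a streaming deployment the same `observe` logic would run inside a Kafka consumer or Flink operator, keyed per trade lane.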
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI
Employment Type: Full-time

Senior AI Data Engineer

We are looking for a Senior AI Data Engineer to join an exciting project for our...
Location:
Poland, Warsaw
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Degree in Computer Science, Data Science, Artificial Intelligence, or a related field
  • Several years of experience in AI and Machine Learning development, preferably in Customer Care solutions
  • Strong proficiency in Python and NLP frameworks
  • Hands-on experience with Azure AI services (e.g., Azure Machine Learning, Cognitive Services, Bot Services)
  • Solid understanding of cloud architectures and microservices on Azure
  • Experience with CI/CD pipelines and MLOps
  • Excellent leadership and communication skills
  • Analytical mindset with strong problem-solving abilities
  • Polish and English at a minimum B2 level.
Job Responsibility:
  • Lead the development and implementation of AI-powered features for a Customer Care platform
  • Design and deploy Machine Learning and NLP models to automate customer inquiries
  • Collaborate with DevOps and cloud architects to ensure a high-performance, scalable, and secure Azure-based architecture
  • Optimize AI models to enhance customer experience
  • Integrate Conversational AI, chatbots, and language models into the platform
  • Evaluate emerging technologies and best practices in Artificial Intelligence
  • Mentor and guide a team of AI/ML developers.
What we offer:
  • Flexible working hours
  • Hybrid work model, allowing employees to divide their time between home and modern offices in key Polish cities
  • A cafeteria system that allows employees to personalize benefits by choosing from a variety of options
  • Generous referral bonuses, offering up to PLN6,000 for referring specialists
  • Additional revenue sharing opportunities for initiating partnerships with new clients
  • Ongoing guidance from a dedicated Team Manager for each employee
  • Tailored technical mentoring from an assigned technical leader, depending on individual expertise and project needs
  • Dedicated team-building budget for online and on-site team events
  • Opportunities to participate in charitable initiatives and local sports programs
  • A supportive and inclusive work culture with an emphasis on diversity and mutual respect.
Employment Type: Full-time

Data Engineer Senior

We are looking for a highly skilled professional to lead the industrialisation o...
Location:
Portugal, Lisbon
Salary:
Not provided
Inetum
Expiration Date:
Until further notice
Requirements:
  • Minimum of 5 years’ experience in MLOps, data engineering, or DevOps with a focus on ML/DL/LLM/AI agents in production environments
  • Strong proficiency in Python
  • Hands-on experience with CI/CD tools such as GitLab, Docker, Kubernetes, Jenkins
  • Solid understanding of ML, DL, and LLM models
  • Experience with ML lifecycle tools such as MLflow or DVC
  • Good understanding of model lifecycle, data traceability, and governance frameworks
  • Experience with on-premise and hybrid infrastructures
  • Excellent communication skills and ability to collaborate with remote teams
  • Proactive mindset, technical rigour, and engineering mentality
  • Willingness to learn, document, and standardise best practices
Job Responsibility:
  • Analyse, monitor, and optimise ML models, tracking their performance
  • Design and implement CI/CD pipelines for ML models and data flows
  • Containerise and deploy models via APIs, batch processes, and streaming
  • Manage model versioning and traceability
  • Ensure continuous improvement and adaptation of AI use cases and ML models
  • Set up monitoring and alerting for model performance
  • Establish incident response protocols in collaboration with IT
  • Maintain dashboards and automated reports on model health
  • Implement validation frameworks for data and models (e.g., Great Expectations, unit tests, stress tests), in collaboration with Group Governance
  • Contribute to documentation and apply technical best practices
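
The validation-framework bullet can be illustrated with a minimal batch expectation check in the spirit of Great Expectations (this is not its actual API; the rule names and record fields are hypothetical).

```python
def validate_batch(rows: list) -> dict:
    """Summarize which expectations a batch of records satisfies."""
    checks = {
        # Every record must carry an id.
        "no_null_ids": all(r.get("id") is not None for r in rows),
        # Model scores must fall in [0, 1]; a missing score fails the check.
        "score_in_range": all(0.0 <= r.get("score", -1.0) <= 1.0 for r in rows),
        # Ids must be unique within the batch.
        "ids_unique": len({r.get("id") for r in rows}) == len(rows),
    }
    return {"success": all(checks.values()), "checks": checks}

good = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}]
bad = [{"id": 1, "score": 0.9}, {"id": 1, "score": 1.7}]  # duplicate id, bad score
```

In practice such checks would gate a CI/CD stage: a failing summary blocks promotion of the data or model artifact.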
What we offer:
  • Work in a constantly evolving environment
  • Contribute to digital impact
  • Opportunity for growth and development
Employment Type: Full-time

Senior Data Engineer

Senior Data Engineer role at UpGuard supporting analytics teams to extract insig...
Location:
Australia, Sydney; Melbourne; Brisbane; Hobart
Salary:
Not provided
UpGuard
Expiration Date:
Until further notice
Requirements:
  • 5+ years of experience with data sourcing, storage and modelling to effectively deliver business value right through to BI platform
  • AI first mindset and experience scaling an Analytics and BI function at another SaaS business
  • Experience with Looker (Explores, Looks, Dashboards, Developer interface, dimensions and measures, models, raw SQL queries)
  • Experience with CloudSQL (PostgreSQL) and BigQuery (complex queries, indices, materialised views, clustering, partitioning)
  • Experience with Containers, Docker and Kubernetes (GKE)
  • Familiarity with n8n for automation
  • Experience with programming languages (Go for ETL workers)
  • Comfortable interfacing with various APIs (REST+JSON or MCP Server)
  • Experience with version control via GitHub and GitHub Flow
  • Security-first mindset
Job Responsibility:
  • Design, build, and maintain reliable data pipelines to consolidate information from various internal systems and third-party sources
  • Develop and manage comprehensive semantic layer using technologies like LookML, dbt or SQLMesh
  • Implement and enforce data quality checks, validation rules, and governance processes
  • Ensure AI agents have access to necessary structured and unstructured data
  • Create clear, self-maintaining documentation for data models, pipelines, and semantic layer
What we offer:
  • Great Place to Work certified company
  • Equal Employment Opportunity and Affirmative Action employer
Employment Type: Full-time

Senior Data Engineer

Senior Data Engineer to design, develop, and optimize data platforms, pipelines,...
Location:
United States, Chicago
Salary:
160,555.00 - 176,610.00 USD / Year
Adtalem Global Education
Expiration Date:
Until further notice
Requirements:
  • Master's degree in Engineering Management, Software Engineering, Computer Science, or a related technical field
  • 3 years of experience in data engineering
  • Experience building data platforms and pipelines
  • Experience with AWS, GCP or Azure
  • Experience with SQL and Python for data manipulation, transformation, and automation
  • Experience with Apache Airflow for workflow orchestration
  • Experience with data governance, data quality, data lineage and metadata management
  • Experience with real-time data ingestion tools including Pub/Sub, Kafka, or Spark
  • Experience with CI/CD pipelines for continuous deployment and delivery of data products
  • Experience maintaining technical records and system designs
Job Responsibility:
  • Design, develop, and optimize data platforms, pipelines, and governance frameworks
  • Enhance business intelligence, analytics, and AI capabilities
  • Ensure accurate data flows and push data-driven decision-making across teams
  • Write product-grade performant code for data extraction, transformations, and loading (ETL) using SQL/Python
  • Manage workflows and scheduling using Apache Airflow and build custom operators for data ETL
  • Build, deploy and maintain both inbound and outbound data pipelines to integrate diverse data sources
  • Develop and manage CI/CD pipelines to support continuous deployment of data products
  • Utilize Google Cloud Platform (GCP) tools, including BigQuery, Composer, GCS, DataStream, and Dataflow, for building scalable data systems
  • Implement real-time data ingestion solutions using GCP Pub/Sub, Kafka, or Spark
  • Develop and expose REST APIs for sharing data across teams
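
As an illustration of the "expose REST APIs" bullet, here is a stdlib-only WSGI app that serves an in-memory dataset as JSON. A production service would sit behind a framework and a real data backend; the dataset, paths, and field names here are invented.

```python
import json

# Hypothetical dataset keyed by resource name; GET /enrollments returns it.
DATASET = {"enrollments": [{"term": "2026-01", "count": 1200}]}

def app(environ, start_response):
    """WSGI callable: look up the path in DATASET and return JSON."""
    key = environ.get("PATH_INFO", "/").strip("/")
    if key in DATASET:
        body = json.dumps(DATASET[key]).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b'{"error": "not found"}'
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

# To serve for real:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Because WSGI apps are plain callables, the handler can be unit-tested by invoking it directly with a fake `environ`, with no server running.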
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Annual incentive program
Employment Type: Full-time

Senior AI Engineer

As a Senior AI Engineer on our AI Engineering team, you will be responsible for ...
Location:
India
Salary:
Not provided
Apollo.io
Expiration Date:
Until further notice
Requirements:
  • 7+ years of software engineering experience with a focus on production systems
  • 1.5+ years of hands-on LLM experience (2023-present) building real applications with GPT, Claude, Llama, or other modern LLMs
  • Demonstrated experience building customer-facing, scalable LLM-powered products with real user usage (not just POCs or internal tools)
  • Experience building multi-step AI agents, LLM chaining, and complex workflow automation
  • Deep understanding of prompting strategies, few-shot learning, chain-of-thought reasoning, and prompt optimization techniques
  • Expert-level Python skills for production AI systems
  • Strong experience building scalable backend systems, APIs, and distributed architectures
  • Experience with LangChain, LlamaIndex, or other LLM application frameworks
  • Proven ability to integrate multiple APIs and services to create advanced AI capabilities
  • Experience deploying and managing AI models in cloud environments (AWS, GCP, Azure)
Job Responsibility:
  • Design and Deploy Production LLM Systems: Build scalable, reliable AI systems that serve millions of users with high availability and performance requirements
  • Agent Development: Create sophisticated AI agents that can chain multiple LLM calls, integrate with external APIs, and maintain state across complex workflows
  • Prompt Engineering Excellence: Develop and optimize prompting strategies, understand trade-offs between prompt engineering vs fine-tuning, and implement advanced prompting techniques
  • System Integration: Build robust APIs and integrate AI capabilities with existing Apollo infrastructure and external services
  • Evaluation & Quality Assurance: Implement comprehensive evaluation frameworks, A/B testing, and monitoring systems to ensure AI systems meet accuracy, safety, and reliability standards
  • Performance Optimization: Optimize for cost, latency, and scalability across different LLM providers and deployment scenarios
  • Cross-functional Collaboration: Work closely with product teams, backend engineers, and stakeholders to translate business requirements into technical AI solutions
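
The agent-development bullet (chaining multiple LLM calls and carrying state between steps) can be sketched as follows; the client is a deterministic stub standing in for a real provider SDK, and the prompts and step structure are illustrative.

```python
def stub_llm(prompt: str) -> str:
    """Deterministic stand-in for a model call (no real API involved)."""
    return f"[answer to: {prompt}]"

def run_chain(question: str, llm=stub_llm) -> str:
    """Three-step chain: each step's output feeds the next prompt."""
    entities = llm(f"List the entities in: {question}")     # step 1: extract
    draft = llm(f"Answer {question!r} using {entities!r}")  # step 2: draft
    return llm(f"Review and tighten: {draft!r}")            # step 3: self-review

result = run_chain("Which carrier is fastest to Rotterdam?")
```

Injecting the client as a parameter keeps the chain testable offline and makes swapping LLM providers a one-line change.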
What we offer:
  • Invest deeply in your growth, ensuring you have the resources, support, and autonomy to own your role and make a real impact
  • Collaboration is at our core—we’re all for one, meaning you’ll have a team across departments ready to help you succeed
  • We encourage bold ideas and courageous action, giving you the freedom to experiment, take smart risks, and drive big wins

Senior AI Engineer

As a Senior AI Engineer on our AI Engineering team, you will be responsible for ...
Location:
Canada; United States
Salary:
160,000.00 - 260,000.00 USD / Year
Apollo.io
Expiration Date:
Until further notice
Requirements:
  • 8+ years of software engineering experience with a focus on production systems
  • 1.5+ years of hands-on LLM experience (2023-present) building real applications with GPT, Claude, Llama, or other modern LLMs
  • Production LLM Applications: Demonstrated experience building customer-facing, scalable LLM-powered products with real user usage (not just POCs or internal tools)
  • Agent Development: Experience building multi-step AI agents, LLM chaining, and complex workflow automation
  • Prompt Engineering Expertise: Deep understanding of prompting strategies, few-shot learning, chain-of-thought reasoning, and prompt optimization techniques
  • Python Proficiency: Expert-level Python skills for production AI systems
  • Backend Engineering: Strong experience building scalable backend systems, APIs, and distributed architectures
  • LangChain or Similar Frameworks: Experience with LangChain, LlamaIndex, or other LLM application frameworks
  • API Integration: Proven ability to integrate multiple APIs and services to create advanced AI capabilities
  • Production Deployment: Experience deploying and managing AI models in cloud environments (AWS, GCP, Azure)
Job Responsibility:
  • Design and Deploy Production LLM Systems: Build scalable, reliable AI systems that serve millions of users with high availability and performance requirements
  • Agent Development: Create sophisticated AI agents that can chain multiple LLM calls, integrate with external APIs, and maintain state across complex workflows
  • Prompt Engineering Excellence: Develop and optimize prompting strategies, understand trade-offs between prompt engineering vs fine-tuning, and implement advanced prompting techniques
  • System Integration: Build robust APIs and integrate AI capabilities with existing Apollo infrastructure and external services
  • Evaluation & Quality Assurance: Implement comprehensive evaluation frameworks, A/B testing, and monitoring systems to ensure AI systems meet accuracy, safety, and reliability standards
  • Performance Optimization: Optimize for cost, latency, and scalability across different LLM providers and deployment scenarios
  • Cross-functional Collaboration: Work closely with product teams, backend engineers, and stakeholders to translate business requirements into technical AI solutions
What we offer:
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year, flex PTO, and parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits
Employment Type: Full-time

Senior Data Engineer

As a Senior Software Engineer, you will play a key role in designing and buildin...
Location:
United States
Salary:
156,000.00 - 195,000.00 USD / Year
Apollo.io
Expiration Date:
Until further notice
Requirements:
  • 5+ years experience in platform engineering, data engineering or in a data facing role
  • Experience in building data applications
  • Deep knowledge of data eco system with an ability to collaborate cross-functionally
  • Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
  • Excellent communication skills
  • Self-motivated and self-directed
  • Inquisitive, able to ask questions and dig deeper
  • Organized, diligent, and great attention to detail
  • Acts with the utmost integrity
  • Genuinely curious and open
Job Responsibility:
  • Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
  • Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows
  • Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
  • Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
  • Monitor and ensure the health, quality, and reliability of data flows and platform services
  • Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
  • Continuously evaluate and integrate new tools and technologies to improve platform capabilities
  • Contribute to architectural discussions, code reviews, and cross-functional projects
  • Document your work, champion best practices, and help level up the team through knowledge sharing
What we offer:
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year
  • Flex PTO
  • Parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits
Employment Type: Full-time