Shape the Future of Intelligence as our next Senior AI Data Pipeline Engineer! Are you ready to build the backbone of next-generation AI? Trimble is looking for a visionary Senior AI Data Pipeline Engineer to design and scale the sophisticated infrastructure that powers our global data-driven initiatives. You will play a critical role in transforming how the world moves, builds, and grows by engineering high-performance streaming architectures and production-ready AI workflows that deliver real-world impact.

About Us:
Trimble is a global technology company that connects the physical and digital worlds, transforming the way work gets done. With relentless innovation in precise positioning, modeling, and data analytics, Trimble enables essential industries including construction, geospatial, and transportation. Whether it's helping customers build and maintain infrastructure, design and construct buildings, optimize global supply chains, or map the world, Trimble is at the forefront, driving productivity and progress.

AECO:
The Trimble AECO segment provides digital construction solutions that increase precision and productivity for Architecture, Engineering, Construction, and Operations.

What Makes This Role Great:
In this role, you will be the primary architect of our data evolution within the AECO segment, directly influencing the scalability of our AI initiatives and shaping the future of digital construction technology. You will have the unique opportunity to move beyond traditional ETL, owning the deployment of containerized workloads in Kubernetes and integrating emerging AI tooling into production workflows that solve the world's most complex physical challenges.
Job Responsibilities:
Design, build, and optimize scalable batch and real-time data pipelines
Manage and administer Databricks workspaces, clusters, jobs, and performance tuning
Develop and maintain streaming architectures using Kafka
Implement and manage Change Data Capture (CDC) pipelines using Debezium connectors (a minimal sketch of this pattern follows this list)
Deploy, monitor, and manage containerized workloads using Kubernetes
Implement CI/CD practices for data engineering workflows
Ensure data quality, observability, governance, and security best practices
Collaborate with data scientists, ML engineers, and software engineers to deliver production-grade data solutions
Support and optimize AI/ML data pipelines and model deployment workflows
Troubleshoot production issues and implement performance improvements
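The responsibilities above center on Kafka, Debezium, and Spark/Databricks. As a non-authoritative flavor of that work, here is a minimal PySpark Structured Streaming sketch that parses Debezium change events from a Kafka topic and appends them to a Delta table. The broker address, topic name, envelope columns (order_id, status), and storage paths are all illustrative assumptions, not details taken from this posting.

```python
# Minimal sketch, assuming a Debezium topic whose messages carry a flat
# envelope ({"op": ..., "after": {...}}, i.e., no schema wrapper) and a
# Databricks/Spark runtime with Delta Lake available.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("orders-cdc-stream").getOrCreate()

# Debezium's "op" field marks the change type (c=create, u=update, d=delete);
# "after" holds the row state following the change. Columns are assumed.
envelope = StructType([
    StructField("op", StringType()),
    StructField("after", StructType([
        StructField("order_id", StringType()),
        StructField("status", StringType()),
    ])),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "pg.public.orders")           # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

changes = (
    raw.select(from_json(col("value").cast("string"), envelope).alias("evt"))
    .select(col("evt.op").alias("op"), col("evt.after.*"))
)

# Checkpointing gives the sink exactly-once semantics across restarts; a real
# CDC pipeline would MERGE by primary key and handle deletes, not just append.
(
    changes.writeStream.format("delta")
    .option("checkpointLocation", "/chk/orders_cdc")   # assumed path
    .outputMode("append")
    .start("/delta/orders_cdc")                        # assumed table path
    .awaitTermination()
)
```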
Requirements:
3+ years of experience in data engineering or a related field
Strong hands-on experience managing and optimizing Databricks
Experience building and maintaining streaming pipelines with Kafka
Experience implementing Change Data Capture (CDC) using Debezium connectors
Practical experience deploying and operating services in Kubernetes (see the deployment sketch after this list)
Strong proficiency in Python and/or Scala
Experience with SQL and distributed data processing frameworks (e.g., Spark)
Familiarity with cloud platforms (AWS, Azure, or GCP)
Experience with infrastructure-as-code tools (e.g., Terraform)
Strong understanding of distributed systems concepts
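For the Kubernetes requirement above, this is a minimal sketch (not Trimble's actual setup) of deploying a containerized worker with the official `kubernetes` Python client; the image, namespace, labels, and resource sizing are all assumed for illustration.

```python
# Minimal sketch, assuming `pip install kubernetes` and a reachable cluster.
# Image name, namespace, labels, and resource numbers are illustrative only.
from kubernetes import client, config

def deploy_pipeline_worker() -> None:
    # Use local kubeconfig credentials; inside a pod you would call
    # config.load_incluster_config() instead.
    config.load_kube_config()

    container = client.V1Container(
        name="cdc-worker",
        image="registry.example.com/cdc-worker:1.0.0",  # assumed image
        resources=client.V1ResourceRequirements(
            requests={"cpu": "500m", "memory": "1Gi"},
            limits={"cpu": "1", "memory": "2Gi"},
        ),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="cdc-worker"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "cdc-worker"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "cdc-worker"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    # Create the Deployment; in practice this step would usually live in
    # CI/CD (or Terraform/Helm) rather than an ad hoc script.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="data-pipelines", body=deployment  # assumed namespace
    )

if __name__ == "__main__":
    deploy_pipeline_worker()
```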
Nice to have:
Experience integrating AI/ML tooling into data pipelines
Familiarity with MLOps practices
Experience with LLM-based workflows, vector databases, or model serving platforms
Knowledge of data lakehouse architectures
Experience with monitoring and observability tools
What we offer:
Medical, dental, and vision insurance
Life and disability insurance
Time off plans
Retirement plans
Tax savings plans for health, dependent care, and commuter expenses