Senior AI Platform Engineer - Data and Knowledge


MaintainX

Location:
Not provided

Contract Type:
Not provided

Salary:
Not provided

Job Description:

We’re hiring a Senior AI Platform Engineer (Knowledge & Data) to build and own the LLM knowledge and retrieval layer that powers AI features across MaintainX. You’ll build backend services and pipelines that transform raw documents, APIs, and semi-structured data into production-grade knowledge systems, with clear patterns for ingestion, retrieval, ranking, and caching. Your work enables product teams to ship AI features quickly without re-solving knowledge preparation, retrieval quality, or reliability problems.
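
For a concrete flavor of the ingestion side this description implies, here is a minimal sketch in Python: split a raw document into overlapping chunks, embed them in a batch, and upsert into a vector store. Every name here (Chunk, embed_texts, the store interface) is a hypothetical stand-in for illustration, not MaintainX's actual stack.

from dataclasses import dataclass

# Hypothetical ingestion flow: chunk -> embed -> index.
# embed_texts() and the store object are stand-ins, not a specific vendor API.

@dataclass
class Chunk:
    doc_id: str
    text: str
    vector: list[float] | None = None

def chunk_document(doc_id: str, text: str, size: int = 800, overlap: int = 100) -> list[Chunk]:
    """Fixed-size character chunking with overlap, the simplest viable strategy."""
    chunks, step = [], size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece.strip():
            chunks.append(Chunk(doc_id=doc_id, text=piece))
    return chunks

def ingest(doc_id: str, text: str, store, embed_texts) -> int:
    """Chunk a raw document, embed the chunks in one batch, and upsert them."""
    chunks = chunk_document(doc_id, text)
    for chunk, vec in zip(chunks, embed_texts([c.text for c in chunks])):
        chunk.vector = vec
    store.upsert(chunks)
    return len(chunks)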

Job Responsibility:

  • Build scalable backend services and internal APIs for the AI platform
  • Integrate LLMs and retrieval into reliable, production-ready workflows
  • Build knowledge ingestion pipelines for LLMs (documents, APIs, semi-structured data)
  • Design chunking and embedding approaches together with vector DB data models and indexing strategies
  • Implement retrieval pipelines (semantic, keyword, hybrid) and caching (see the retrieval sketch after this list)
  • Contribute to shared infrastructure: CI/CD, observability, deployments
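
The hybrid-retrieval bullet above, sketched in miniature: run semantic and keyword retrieval side by side, fuse the two rankings with Reciprocal Rank Fusion (a standard fusion technique), and memoize results. The two retriever callables are hypothetical stand-ins.

from functools import lru_cache

# Hybrid retrieval sketch: fuse semantic and keyword rankings with
# Reciprocal Rank Fusion (RRF). semantic_search/keyword_search are
# hypothetical callables returning ranked lists of document IDs.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Score each doc by the sum of 1/(k + rank) across rankings; higher is better."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

def make_hybrid_retriever(semantic_search, keyword_search, top_k: int = 10):
    @lru_cache(maxsize=4096)  # naive in-process cache; production would use Redis or similar
    def retrieve(query: str) -> tuple[str, ...]:
        fused = rrf_fuse([semantic_search(query), keyword_search(query)])
        return tuple(fused[:top_k])
    return retrieve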

Requirements:

  • 5+ years of Python backend engineering and systems design experience
  • Experience shipping AI-powered or LLM-integrated backend systems
  • Experience with vector DBs (Qdrant/Pinecone/Chroma/etc.)
  • Understanding of embeddings, chunking, and retrieval strategies
  • Experience building search or retrieval systems over unstructured data
  • Comfort working across multiple layers (services, data, infra, AI tooling)

What we offer:
  • Competitive salary and meaningful equity opportunities
  • Healthcare, dental, and vision coverage
  • 401(k) / RRSP enrollment program
  • Take-what-you-need PTO

Additional Information:

Job Posted:
February 18, 2026

Work Type:
Remote work

Similar Jobs for Senior AI Platform Engineer - Data and Knowledge

Senior Data Engineer

We are looking for a Senior Data Engineer (SDE 3) to build scalable, high-perfor...
Location:
India, Mumbai
Salary:
Not provided
Company:
Cogoport
Expiration Date:
Until further notice

Requirements:
  • 6+ years of experience in data engineering, working with large-scale distributed systems
  • Strong proficiency in Python, Java, or Scala for data processing
  • Expertise in SQL and NoSQL databases (PostgreSQL, Cassandra, Snowflake, Apache Hive, Redshift)
  • Experience with big data processing frameworks (Apache Spark, Flink, Hadoop)
  • Hands-on experience with real-time data streaming (Kafka, Kinesis, Pulsar) for logistics use cases
  • Deep knowledge of AWS/GCP/Azure cloud data services like S3, Glue, EMR, Databricks, or equivalent
  • Familiarity with Airflow, Prefect, or Dagster for workflow orchestration
  • Strong understanding of logistics and supply chain data structures, including freight pricing models, carrier APIs, and shipment tracking systems

Job Responsibility:
  • Design and develop real-time and batch ETL/ELT pipelines for structured and unstructured logistics data (freight rates, shipping schedules, tracking events, etc.)
  • Optimize data ingestion, transformation, and storage for high availability and cost efficiency
  • Ensure seamless integration of data from global trade platforms, carrier APIs, and operational databases
  • Architect scalable, cloud-native data platforms using AWS (S3, Glue, EMR, Redshift), GCP (BigQuery, Dataflow), or Azure
  • Build and manage data lakes, warehouses, and real-time processing frameworks to support analytics, machine learning, and reporting needs
  • Optimize distributed databases (Snowflake, Redshift, BigQuery, Apache Hive) for logistics analytics
  • Develop streaming data solutions using Apache Kafka, Pulsar, or Kinesis to power real-time shipment tracking, anomaly detection, and dynamic pricing (see the consumer sketch after this list)
  • Enable AI-driven freight rate predictions, demand forecasting, and shipment delay analytics
  • Improve customer experience by providing real-time visibility into supply chain disruptions and delivery timeline
  • Ensure high availability, fault tolerance, and data security compliance (GDPR, CCPA) across the platform
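
The streaming bullet flagged above, reduced to a toy kafka-python consumer that flags late shipment events. The topic name, event schema, and threshold are all invented for the example.

import json
from kafka import KafkaConsumer  # pip install kafka-python

DELAY_THRESHOLD_HOURS = 24

consumer = KafkaConsumer(
    "shipment-tracking-events",          # invented topic name
    bootstrap_servers="localhost:9092",
    group_id="delay-detector",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    if event.get("delay_hours", 0) > DELAY_THRESHOLD_HOURS:
        # A real pipeline would publish to an alerts topic or a metrics sink.
        print(f"Shipment {event.get('shipment_id')} delayed {event['delay_hours']}h")
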
What we offer:
  • Work with some of the brightest minds in the industry
  • Entrepreneurial culture fostering innovation, impact, and career growth
  • Opportunity to work on real-world logistics challenges
  • Collaborate with cross-functional teams across data science, engineering, and product
  • Be part of a fast-growing company scaling next-gen logistics platforms using advanced data engineering and AI

Work Type:
Fulltime

Senior Data Engineer

As a Senior Software Engineer, you will play a key role in designing and buildin...
Location:
United States
Salary:
156000.00 - 195000.00 USD / Year
Company:
Apollo.io
Expiration Date:
Until further notice

Requirements:
  • 5+ years of experience in platform engineering, data engineering, or a data-facing role
  • Experience in building data applications
  • Deep knowledge of the data ecosystem, with an ability to collaborate cross-functionally
  • Bachelor's degree in a quantitative field (Physical / Computer Science, Engineering or Mathematics / Statistics)
  • Excellent communication skills
  • Self-motivated and self-directed
  • Inquisitive, able to ask questions and dig deeper
  • Organized, diligent, and great attention to detail
  • Acts with the utmost integrity
  • Genuinely curious and open

Job Responsibility:
  • Architect and build robust, scalable data pipelines (batch and streaming) to support a variety of internal and external use cases
  • Develop and maintain high-performance APIs using FastAPI to expose data services and automate data workflows (see the endpoint sketch after this list)
  • Design and manage cloud-based data infrastructure, optimizing for cost, performance, and reliability
  • Collaborate closely with software engineers, data scientists, analysts, and product teams to translate requirements into engineering solutions
  • Monitor and ensure the health, quality, and reliability of data flows and platform services
  • Implement observability and alerting for data services and APIs (think logs, metrics, dashboards)
  • Continuously evaluate and integrate new tools and technologies to improve platform capabilities
  • Contribute to architectural discussions, code reviews, and cross-functional projects
  • Document your work, champion best practices, and help level up the team through knowledge sharing
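
The FastAPI bullet flagged above, as a toy endpoint. The route, response model, and backing lookup are invented; a real service would query the warehouse and emit the logs and metrics the observability bullet describes.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class AccountMetrics(BaseModel):
    account_id: str
    events_last_30d: int

# Stand-in for a real query against the data platform.
_FAKE_DB = {"acme": AccountMetrics(account_id="acme", events_last_30d=1234)}

@app.get("/accounts/{account_id}/metrics", response_model=AccountMetrics)
def get_account_metrics(account_id: str) -> AccountMetrics:
    metrics = _FAKE_DB.get(account_id)
    if metrics is None:
        raise HTTPException(status_code=404, detail="account not found")
    return metrics
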
What we offer:
  • Equity
  • Company bonus or sales commissions/bonuses
  • 401(k) plan
  • At least 10 paid holidays per year
  • Flex PTO
  • Parental leave
  • Employee assistance program and wellbeing benefits
  • Global travel coverage
  • Life/AD&D/STD/LTD insurance
  • FSA/HSA and medical, dental, and vision benefits

Work Type:
Fulltime

Senior AI Engineer

Elsewhen, a London-based consultancy, designs and builds technology solutions fo...
Location:
United Kingdom, London
Salary:
Not provided
Company:
Elsewhen
Expiration Date:
Until further notice

Requirements:
  • Professional AI engineering experience
  • Background in Software Engineering with Python
  • Solid understanding of the Python standard library and modern Python coding, testing, debugging and automation techniques
  • Hands-on experience building solutions using LLMs and Agentic architectures with ADK, LlamaIndex, or LangGraph
  • Working with vector databases for embedding and indexing
  • Strong experience with cloud platforms
  • Strong experience with API design and frameworks like FastAPI or Flask
  • Solid experience with relational databases and SQL
  • Interest in expanding your knowledge into GenAI and machine learning
  • Excellent communication skills and the ability to work well in a collaborative team environment

Job Responsibility:
  • Experiment with POCs to find solutions for real-world problems using Large Language Models
  • Collaborate on AI-driven projects, working alongside engineers, product managers and AI specialists while maintaining clear documentation
  • Build and deploy Agentic LLM-based solutions with LangGraph
  • Familiar with different multi-agent system patterns
  • Build and deploy LLM-based solutions using RAG (see the sketch after this list)
  • Familiar with different types of databases: relational, graph, etc.
  • Design and optimise APIs using Python and FastAPI to serve AI solutions
  • Familiar with the GCP ecosystem and Cloud Run
  • Build and optimise data pipelines for vector search and knowledge retrieval using Vector databases and embedding models
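
The RAG bullet flagged above, as a bare-bones loop: retrieve top chunks from a vector store, assemble a grounded prompt, call the model. All three callables are hypothetical stand-ins rather than any specific framework's API.

# Bare-bones RAG sketch: retrieve -> assemble prompt -> generate.
# embed(), vector_store.search(), and llm_complete() are hypothetical.

def answer_with_rag(question: str, embed, vector_store, llm_complete, top_k: int = 5) -> str:
    query_vec = embed(question)
    hits = vector_store.search(query_vec, limit=top_k)  # ranked text chunks
    context = "\n\n".join(hit.text for hit in hits)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm_complete(prompt)
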
What we offer:
  • Private Health Insurance: Comprehensive coverage for both physical and mental health
  • Flexible and Remote-First Work Environment: Choose how and where you work, with the option for weekly team meet-ups in central London
  • Generous Leave Policy: 27 days of holiday plus bank holidays
  • Family-friendly policies, including enhanced maternity, paternity, and shared parental leave
  • Learning and Development: Individual annual budget of £2,000 for learning and development, with dedicated learning days
  • Feel Better Fund: £500 to help set up your remote office
  • Social Events: Monthly and quarterly team events, an annual team trip, and half-yearly social events
  • Gym Membership Contribution: Support for maintaining your physical health
  • Pension Contribution: Enhanced employer pension contribution of 6%
  • Bonus Opportunities: Potential to receive a discretionary (non-contractual) bonus based on business and personal achievements

Senior Data Engineer

Kiddom is redefining how technology powers learning. We combine world-class curr...
Location:
United States, San Francisco
Salary:
150000.00 - 220000.00 USD / Year
Company:
Kiddom
Expiration Date:
Until further notice

Requirements:
  • 3+ years of experience as a data engineer
  • 8+ years of software engineering experience (including data engineering)
  • Proven experience as a Data Engineer or in a similar role with strong data modeling, architecture, and design skills
  • Strong understanding of data engineering principles including infrastructure deployment, governance and security
  • Experience with MySQL, Snowflake, and Cassandra, plus familiarity with graph databases (Neptune or Neo4j)
  • Proficiency in SQL and Python (Golang a plus)
  • Proficient with AWS offerings such as AWS Glue, EKS, ECS and Lambda
  • Excellent communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders
  • Strong understanding of PII compliance and best practices in data handling and storage
  • Strong problem-solving skills, with a knack for optimizing performance and ensuring data integrity and accuracy

Job Responsibility:
  • Design, implement, and maintain the organization’s data infrastructure, ensuring it meets business requirements and technical standards
  • Deploy data pipelines to AWS infrastructure such as EKS, ECS, Lambdas and AWS Glue
  • Develop and deploy data pipelines to clean and transform data to support other engineering teams, analytics, and AI applications (see the sketch after this list)
  • Extract and deploy reusable features to Feature stores such as Feast or equivalent
  • Evaluate and select appropriate database technologies, tools, and platforms, both on-premises and in the cloud
  • Monitor data systems and troubleshoot issues related to data quality, performance, and integrity
  • Work closely with other departments, including Product, Engineering, and Analytics, to understand and cater to their data needs
  • Define and document data workflows, pipelines, and transformation processes for clear understanding and knowledge sharing
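
The clean-and-transform bullet flagged above, as a small PySpark job of the kind that could be packaged for AWS Glue or EKS. The S3 paths and column names are invented for the sketch.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clean-events").getOrCreate()

# Invented paths and columns; a Glue deployment would wrap this same logic.
raw = spark.read.json("s3://example-bucket/raw/events/")

clean = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
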
What we offer:
  • Meaningful equity
  • Health insurance benefits: medical (various PPO/HMO/HSA plans), dental, vision, disability and life insurance
  • One Medical membership (in participating locations)
  • Flexible vacation time policy (subject to internal approval); average use is 4 weeks off per year
  • 10 paid sick days per year (prorated depending on start date)
  • Paid holidays
  • Paid bereavement leave
  • Paid family leave after birth/adoption. Minimum of 16 paid weeks for birthing parents, 10 weeks for caretaker parents. Meant to supplement benefits offered by State
  • Commuter and FSA plans

Work Type:
Fulltime

Senior Azure Data Engineer

Seeking a Lead AI DevOps Engineer to oversee design and delivery of advanced AI/...
Location:
Poland
Salary:
Not provided
Company:
Lingaro
Expiration Date:
Until further notice

Requirements:
  • At least 6 years of professional experience in the Data & Analytics area
  • 1+ years of experience in (or acting in) a Senior Consultant or above role, with a strong focus on data solutions built in Azure with Databricks/Synapse (MS Fabric is nice to have)
  • Proven experience with Azure cloud-based infrastructure, Databricks, and at least one SQL implementation (e.g., Oracle, T-SQL, MySQL)
  • Proficiency in SQL, Python, and PySpark is essential (R or Scala nice to have)
  • Very good communication skills, including the ability to convey information clearly and specifically to co-workers and business stakeholders
  • Working experience with agile methodologies and supporting tools (JIRA, Azure DevOps)
  • Experience in leading and managing a team of data engineers, providing guidance, mentorship, and technical support
  • Knowledge of data management principles and best practices, including data governance, data quality, and data integration
  • Good project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines
  • Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues

Job Responsibility:
  • Act as a senior member of the Data Science & AI Competency Center, AI Engineering team, guiding delivery and coordinating workstreams
  • Develop and execute a cloud data strategy aligned with organizational goals
  • Lead data integration efforts, including ETL processes, to ensure seamless data flow
  • Implement security measures and compliance standards in cloud environments
  • Continuously monitor and optimize data solutions for cost-efficiency
  • Establish and enforce data governance and quality standards (see the sketch after this list)
  • Leverage Azure services, as well as tools like dbt and Databricks, for efficient data pipelines and analytics solutions
  • Work with cross-functional teams to understand requirements and provide data solutions
  • Maintain comprehensive documentation for data architecture and solutions
  • Mentor junior team members in cloud data architecture best practices
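
One concrete shape the data-quality bullet flagged above can take: lightweight PySpark assertions that fail a pipeline run when a table violates basic expectations. The table and column names are invented.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
orders = spark.read.table("curated.orders")  # invented table

checks = {
    "null_keys": orders.filter(F.col("order_id").isNull()).count(),
    "duplicate_ids": orders.count() - orders.dropDuplicates(["order_id"]).count(),
    "negative_amounts": orders.filter(F.col("amount") < 0).count(),
}
failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Failing loudly keeps bad data from propagating downstream.
    raise RuntimeError(f"Data quality checks failed: {failed}")
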
What we offer:
  • Stable employment
  • “Office as an option” model
  • Workation
  • Great Place to Work® certified employer
  • Flexibility regarding working hours and your preferred form of contract
  • Comprehensive online onboarding program with a “Buddy” from day 1
  • Cooperation with top-tier engineers and experts
  • Unlimited access to the Udemy learning platform from day 1
  • Certificate training programs
  • Upskilling support

Senior Data Engineer

A VC-backed conversational AI scale-up is expanding its engineering team and is ...
Location:
United States
Salary:
Not provided
Company:
Orbis Consultants
Expiration Date:
Until further notice

Requirements:
  • 4+ years in software development and data engineering with ownership of production-grade systems
  • Proven expertise in Spark/PySpark
  • Strong knowledge of distributed computing and modern data modeling approaches
  • Solid programming skills in Python, with an emphasis on clean, maintainable code
  • Hands-on experience with SQL and NoSQL databases (e.g., PostgreSQL, DynamoDB, Cassandra)
  • Excellent communicator who can influence and partner across teams

Job Responsibility:
  • Design and evolve distributed, cloud-based data infrastructure that supports both real-time and batch processing at scale (see the streaming sketch after this list)
  • Build high-performance data pipelines that power analytics, AI/ML workloads, and integrations with third-party platforms
  • Champion data reliability, quality, and observability, introducing automation and monitoring across pipelines
  • Collaborate closely with engineering, product, and AI teams to deliver data solutions for business-critical initiatives
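
The real-time-plus-batch bullet flagged above, reduced to a minimal Spark Structured Streaming job: read a Kafka topic and count events per minute. Broker address and topic are invented placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-counts").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # invented broker
    .option("subscribe", "events")                        # invented topic
    .load()
)

# The Kafka source exposes a `timestamp` column; window it per minute.
counts = events.groupBy(F.window("timestamp", "1 minute")).count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
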
What we offer:
  • Great equity

Work Type:
Fulltime

Senior Data Engineer

Adtalem is a data driven organization. The Data Engineering team builds data sol...
Location:
United States, Lisle
Salary:
84835.61 - 149076.17 USD / Year
Company:
Adtalem Global Education
Expiration Date:
Until further notice

Requirements:
  • Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field
  • Master's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field
  • Two (2)+ years of experience in Google Cloud with services like BigQuery, Composer, GCS, Datastream, Dataflow, BQML, and Vertex AI
  • Six (6)+ years of experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics
  • Hands-on experience working with real-time, unstructured, and synthetic data.
  • Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar
  • Expert knowledge of Python programming and SQL
  • Experience with cloud platforms (AWS, GCP, Azure) and their data services
  • Experience working with Airflow as a workflow management tool, building operators to connect, extract, and ingest data as needed
  • Familiarity with synthetic data generation and unstructured data processing

Job Responsibility:
  • Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets (orchestration sketched after this list)
  • Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
  • Design, develop, deploy and support high performance data pipelines both inbound and outbound.
  • Model data platform by applying the business logic and building objects in the semantic layer of the data platform.
  • Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics
  • Optimize data pipelines for performance, scalability, and reliability.
  • Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
  • Ensure quality of critical data elements, prepare data quality remediation plans and collaborate with business and system owners to fix the quality issues at its root.
  • Document the design and support strategy of the data pipelines
  • Capture, store and socialize data lineage and operational metadata
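
Given the Airflow requirement above, a skeletal DAG showing how the pipeline responsibilities flagged in this list might be orchestrated. The dag_id, schedule, and task bodies are invented.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source")            # placeholder task body

def transform():
    print("apply business logic")        # placeholder task body

def publish():
    print("write to the semantic layer") # placeholder task body

with DAG(
    dag_id="example_ingestion",  # invented
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # Airflow 2.4+ spelling; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    t_extract >> t_transform >> t_publish
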
What we offer:
  • Health, dental, vision, life and disability insurance
  • 401k Retirement Program + 6% employer match
  • Participation in Adtalem’s Flexible Time Off (FTO) Policy
  • 12 Paid Holidays
  • Eligible to participate in an annual incentive program

Work Type:
Fulltime