As a Full Stack Developer on our Data & AI team, you will build and maintain scalable data pipelines in Palantir Foundry for advanced reporting and analytics while collaborating with cross-functional teams. You will work closely with key stakeholders in Engineering, Product, GTM, and other groups to build scalable data solutions that support key metrics, reporting, and insights. You will spend approximately 70% of your time designing, building, and optimizing backend data pipelines and AI/ML solutions on Palantir Foundry; the remaining 30% will focus on developing frontend features and dashboards using TypeScript, ensuring an integrated experience for analytics, reporting, and end users. Your work will support enterprise-scale data engineering, analytics, and AI-driven initiatives.
Job Responsibilities:
Develop and maintain scalable backend data pipelines and transformation workflows using PySpark in Palantir Foundry
Design and implement robust feature engineering processes and ML model integration, enabling clean, reliable, and auditable datasets for data science and AI applications
Implement MLOps and DataOps best practices for model lifecycle management, monitoring, deployment, and governance
Collaborate closely with engineering, product, and business teams to translate complex analytics and AI requirements into backend processes and data products
Develop self-serve analytics solutions and interactive dashboards using TypeScript (and frameworks such as React), translating backend data architecture into accessible insights for business and data users
Contribute to prompt engineering and operationalizing generative AI/LLM-driven features within backend and frontend workflows
Continuously expand your expertise in cloud data infrastructure, Foundry's ontology-driven approach, and best-practice enterprise data management
Requirements:
2+ years of experience writing clear and reliable PySpark code
Proficiency with React, JavaScript, TypeScript, or comparable frontend technologies, plus backend expertise in distributed computing frameworks
2+ years of experience in product design, UI/UX optimization, end-to-end feature scoping, and system architecture, including design patterns, reusability, reliability, and scaling of existing applications/systems
Adept at implementing solutions using popular analytics tools (such as pandas, NumPy, Matplotlib, MLflow) and agent orchestration frameworks such as LangChain, LangSmith, LlamaIndex, etc.
Deep understanding of operationalizing GenAI and agentic frameworks across backend and frontend
MLOps and LLMOps experience is a plus
2+ years of experience building and deploying full-stack, data-centric apps (preferably in Palantir Foundry)
Experience with Python/PySpark for backend work (70%) and TypeScript, React, and similar technologies for frontend work (30%)
Experience translating backend outputs into usable interfaces
Experience in LLM orchestration/agentic frameworks
DevOps/DataOps: CI/CD (GitHub Actions, GitLab CI, or similar), version control (Git)
Cloud: Palantir Foundry, Azure / AWS
Product: UI/UX optimization, feature scoping, CI/CD, business alignment
No visa sponsorship; ILR or citizenship required
Nice to have:
2+ years of work experience in commercial analytics
Experience working on Palantir Foundry
Experience with Generative AI (GenAI) and agentic systems will be considered a strong plus
A proactive and adaptable mindset: a self-starter willing to take initiative, learn new skills, and contribute to different aspects of a project as needed to drive solutions from start to finish, even beyond the formal job description