Building Open Superintelligence Infrastructure. Prime Intellect is building the open superintelligence stack - from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts. As a Research Engineer on our Reasoning team, you'll play a crucial role in shaping our technological direction, focusing on our test-time compute scaling research ideas. If you love working with synthetic data and teaching LLMs reasoning abilities, this role is for you.
Job Responsibilities:
Lead and participate in novel research to build a massive-scale synthetic data generation pipeline and orchestration solution
Optimize the performance, cost, and resource utilization of AI inference workloads by leveraging the most recent advances in compute and memory optimization techniques
Contribute to the development of our open-source libraries and frameworks for synthetic data generation and distributed RL
Publish research in top-tier AI conferences such as ICML & NeurIPS
Distill highly technical project outcomes into approachable technical blog posts for our customers and developers
Stay up to date with the latest advancements in AI/ML infrastructure, tools, and synthetic data generation research, and proactively identify opportunities to enhance our platform's capabilities and user experience
Requirements:
Strong background in AI/ML engineering, with extensive experience in designing and implementing end-to-end pipelines for the inference or training of large-scale AI models
Deep expertise in distributed inference techniques and frameworks (e.g., vLLM, SGLang) for optimizing the performance and scalability of AI workloads
Solid understanding of MLOps best practices, including model versioning, experiment tracking, and continuous integration/deployment (CI/CD) pipelines
Passion for advancing the state-of-the-art in reasoning and democratizing access to AI capabilities for researchers, developers, and businesses worldwide
What we offer:
Competitive compensation, including equity incentives, aligning your success with the growth and impact of Prime Intellect
Flexible work arrangements, with the option to work remotely or in-person at our offices in San Francisco
Visa sponsorship and relocation assistance for international candidates
Quarterly team off-sites, hackathons, conferences, and learning opportunities
Opportunity to work with a talented, hard-working and mission-driven team, united by a shared passion for leveraging technology to accelerate science and AI