Member of Technical Staff - Inference at Prime Intellect

Work type: remote

Location: Remote

Salary: $150,000 – $300,000/yr

Type: Full-time

Summary

**Who this is for**

This position is for experienced engineers who want to build and optimize the infrastructure for serving Large Language Models at scale. If you have a deep understanding of inference systems and a passion for cloud-native ML platforms, this role is for you.

**Key highlights**

You will play a critical role in advancing our LLM serving capabilities and optimizing inference systems for our RL training stack. This involves building multi-tenant serving platforms, implementing GPU-aware scheduling, and contributing to inference frameworks.

**You might be a good fit if you...**

- Have 3+ years building and running large-scale ML/LLM services with clear latency/availability SLOs.
- Have hands-on experience with LLM inference frameworks like vLLM or SGLang.
- Possess a deep understanding of inference internals, including KV-cache, batching, and parallelism strategies.
- Are comfortable debugging across the full stack, including CUDA/NCCL and containerization.

Job Description

## Building Open Superintelligence Infrastructure

Prime Intellect is building the open superintelligence stack - from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.

We recently raised $15M in funding ($20M raised in total), led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka AI, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI), and many others.

## Role Impact

This is a hybrid position spanning cloud LLM serving, LLM inference optimization, and RL systems. You will work on advancing our ability to evaluate and serve models trained with our RL Lab at scale. The two key areas are:

1. Building the infrastructure to serve LLMs efficiently at scale.

2. Optimization and integration of inference systems into our RL training stack.

## Core Technical Responsibilities

LLM Serving

Inference Optimization & Performance

Platform & Tooling

## Technical Requirements

Required Experience

Infrastructure Skills

Nice to Have

## What We Offer

## Growth Opportunity

You'll join a team of experienced engineers and researchers working on cutting-edge problems in AI infrastructure. We believe in open development and encourage team members to contribute to the broader AI community through research and open-source work.

We value potential over perfection. If you're passionate about democratizing AI development, we want to talk to you.

Ready to help shape the future of AI? Apply now and join us in our mission to make powerful AI models accessible to everyone.

View this job on nocollar jobs