Work type: remote
Location: US, CA, Santa Clara | US, CA, Remote
Type: Full-time
**Who this is for**

A senior-level software engineer with a specialized background in distributed systems and deep learning infrastructure, specifically focused on the challenges of reinforcement learning for large-scale AI models.

**Key highlights**

You will build the infrastructure required to scale RL training, inference, and rollout loops, contributing to open-source frameworks and working closely with researchers to push the boundaries of agentic AI.

**You might be a good fit if you...**

- Have 5+ years of experience in distributed systems, HPC, or ML infrastructure.
- Demonstrate deep expertise in Python, C++, and PyTorch internals (FSDP, tensor parallelism).
- Have successfully deployed and maintained large-scale training systems at a major tech company or AI lab.
- Understand the unique systems-level challenges of RLHF, DPO, and reward modeling at scale.
Reinforcement learning post-training is driving some of the most significant capability gains in AI today. It is the process that teaches a model to reason through hard problems, follow complex instructions, and act as an autonomous agent. It is also one of the hardest infrastructure challenges in the field. RL requires inference, rollout generation, and training running in a continuous loop. The rollout step is what makes it hard: the model must interact with environments, tools, and other models to produce the signal that drives learning. Coordinating actor, critic, and reward models across heterogeneous hardware at scale pushes the limits of what distributed systems can do.
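The loop described above can be sketched in a few lines. This is a toy illustration of the rollout-score-train cycle, not code from any real framework: the policy, reward model, and update rule are all hypothetical stand-ins.

```python
# Toy sketch of the RL post-training loop: rollout -> reward scoring -> policy
# update. All names here are illustrative stand-ins, not a real framework API.
import random

class ToyPolicy:
    """Stand-in for an LLM policy: samples an action for a prompt."""
    def __init__(self):
        self.weights = {"good": 0.5, "bad": 0.5}

    def act(self, prompt):
        # Sample an action proportional to the current weights.
        actions, probs = zip(*self.weights.items())
        return random.choices(actions, weights=probs)[0]

    def update(self, action, reward, lr=0.1):
        # Crude policy-gradient-style nudge toward rewarded actions.
        self.weights[action] = max(1e-3, self.weights[action] + lr * reward)

def reward_model(action):
    """Stand-in reward model: prefers the 'good' action."""
    return 1.0 if action == "good" else -1.0

def rl_step(policy, prompts):
    # 1. Rollout: the policy interacts with each prompt/environment.
    rollouts = [(p, policy.act(p)) for p in prompts]
    # 2. Scoring: the reward model produces the learning signal.
    scored = [(action, reward_model(action)) for _, action in rollouts]
    # 3. Training: update the policy from the scored rollouts.
    for action, reward in scored:
        policy.update(action, reward)
    return scored

random.seed(0)
policy = ToyPolicy()
for _ in range(50):
    rl_step(policy, prompts=["solve this problem"] * 4)
```

In a production system each of these three stages runs as its own distributed service (inference engines for rollout, reward/critic models for scoring, a training cluster for updates), which is exactly the coordination problem the paragraph above describes.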
NVIDIA is building an RL Frameworks engineering team to develop the open-source tools and infrastructure that AI researchers and post-training teams depend on. The team spans the full software stack, from collaborating closely with the researchers and labs pushing the frontier, to contributing to RL frameworks like [VeRL](https://github.com/volcengine/verl), [Miles](https://github.com/radixark/miles), and [TorchTitan](https://github.com/pytorch/torchtitan), to improving the distributed runtimes they depend on, including [Ray](https://github.com/ray-project/ray) and [Monarch](https://github.com/meta-pytorch/monarch). Whether your strength is working with researchers to understand and address their needs, optimizing deep learning frameworks, or building distributed infrastructure, we want to hear from you. Come join us to build the systems that enable the next generation of AI.
What you will be doing:
You will architect and build RL post-training infrastructure that scales efficiently from experimentation on a single GPU to production across thousands of nodes. This means tuning RL training-inference-rollout loops on GPUs, CPUs, and LPUs for performance where it matters, contributing to and improving the performance and usability of open-source RL frameworks, and partnering with the teams who own them. The role also spans fault tolerance, elastic scaling, and fast restarts so long-running distributed training jobs survive failures, stragglers, and resource contention.
Beyond GPU-accelerated training, this work includes partnering with teams building CPU-driven rollout workloads, including tool-use, code execution, and agentic environments, supplying the systems and framework engineering needed to run them efficiently alongside GPU- or LPU-accelerated generation and GPU-accelerated training. It also means advocating for researcher and partner needs with NVIDIA's networking, math library, and compiler teams so the capabilities RL workloads require get prioritized and delivered, and working with hardware teams to take advantage of next-generation hardware capabilities in post-training workloads.
What we need to see:
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
You will also be eligible for equity and [benefits](https://www.nvidia.com/en-us/benefits/).
Applications for this job will be accepted at least until April 27, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.