We are looking for a Senior HPC and AI Network Software Architect to help build the next generation of scalable AI infrastructure. The role emphasizes distributed training, real‑time inference, and communication efficiency across large systems. You will develop new software and hardware approaches, shape platform evolution through hands‑on innovation, and contribute to designing systems that power the fastest AI workloads in the world, collaborating with a distinguished team of researchers and engineers building software and hardware for AI at extraordinary scale.
What You Will Be Doing
* Build and evolve the architecture of scalable software systems for distributed AI training and inference, focusing on throughput, latency, resiliency, and memory efficiency across cluster‑scale deployments.
* Develop and evaluate next‑generation communication and runtime capabilities in libraries such as NCCL, UCX, and UCC, tailored to the evolving demands of frontier AI workloads.
* Partner with AI framework teams (e.g., TensorFlow, PyTorch, JAX) and internal platform teams to build integrations, explore new approaches, and improve end‑to‑end performance and reliability.
* Collaborate on hardware and system‑level features across GPUs, DPUs, and interconnects to speed up data movement and enable new capabilities for training, inference, and model serving at scale.
* Drive innovation across runtime systems, communication libraries, and AI‑specific protocol layers, helping turn new ideas into practical capabilities and robust implementations.
What We Need To See
* Ph.D., or equivalent industry experience, in computer science, computer engineering, or a closely related field.
* 5+ years of experience in systems programming, parallel or distributed computing, high‑performance networking, or large‑scale data movement, including experience designing and building complex systems.
* Strong programming background in C++, Python, and ideally CUDA or other GPU programming models, with a track record of building production‑quality performance‑critical software.
* Extensive hands‑on experience with AI frameworks (e.g., PyTorch, TensorFlow, JAX) and a solid grasp of how communication libraries and runtime systems facilitate large‑scale training and inference.
* Demonstrated success in developing and refining high‑throughput, low‑latency systems, including the ability to reason across software stacks, hardware capabilities, and system bottlenecks.
* Strong collaboration skills in a multi‑national, interdisciplinary setting, with the ability to contribute ideas, build momentum, and work effectively with senior engineers, researchers, and partner teams.
Ways To Stand Out From The Crowd
* Deep expertise with NCCL, UCX, UCC, or similar communication libraries used in large‑scale AI and HPC workloads.
* Strong background in networking and communication protocols, including RDMA, collective communications, congestion‑aware transport, or accelerator‑aware networking.
* Comprehensive knowledge of large model training and inference serving at scale, including communication bottlenecks, scheduling challenges, and system‑level tradeoffs across compute, memory, and fabric.
* Experience crafting hardware‑software co‑design for distributed AI systems, including contributions that advanced GPU, DPU, interconnect, or runtime capabilities.
* Familiarity with infrastructure for deploying LLMs or other transformer‑based models, including sharding, pipelining, expert parallelism, or hybrid parallelism.