MTS - Distributed Inferencing Software Engineer - AI Models
The Person

Strong technical and analytical skills in C++/Python AI development, solving performance problems and investigating scalability on multi-GPU, multi-node clusters.

Key Responsibilities

- Enable and benchmark AI models on distributed systems
- Work in a distributed computing setting to optimize for scale-up (multi-GPU), scale-out (multi-node), and scale-across systems
- Collaborate with internal GPU library teams to analyze and optimize distributed workloads for high throughput and low latency
- Apply expertise in parallelization strategies for AI workloads to achieve the best performance for each configuration
- Contribute to distributed model management, model zoos, monitoring, benchmarking, and documentation

Preferred Experience

- Knowledge of GPU computing (HIP, CUDA, OpenCL)
- AI framework engineering experience (vLLM, SGLang, Llama.cpp)
- Understanding of KV cache transfer mechanisms and options (Mooncake, NIXL/RIXL) and Expert Parallelism (DeepEP/MORI/PPLX-Garden)
- Excellent C/C++/Python programming and software design skills, including debugging, performance analysis, and test design