Principal Machine Learning Engineer, Distributed vLLM Inference
Job Summary
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and llm-d projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.
Join us in shaping the future of AI!
What You Will Do
Develop and maintain distributed inference infrastructure leveraging Kubernetes APIs, operators, and the Gateway API Inference Extension for scalable LLM deployments.
Create system components in Go and/or Rust to integrate with the vLLM project and manage distributed inference workloads.
Design and implement KV cache-aware routing and scoring algorithms to optimize memory utilization and request distribution in large-scale inference deployments (see the illustrative sketch after this list).
Enhance the resource utilization, fault tolerance, and stability of the inference stack.
Contribute to the design, development, and testing of various inference optimization algorithms.
Actively participate in technical design discussions and propose innovative solutions to complex challenges.
Provide timely and constructive code reviews.
Mentor and guide fellow engineers, fostering a culture of continuous learning and innovation.
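To make the KV cache-aware routing responsibility concrete, here is a minimal sketch in Go of the general idea: score each vLLM replica by how much of a request's prompt prefix it already holds in KV cache, penalized by current load, and route to the best scorer. This is a hypothetical illustration, not the llm-d or Gateway API Inference Extension implementation; all names, weights, and data structures are assumptions.

```go
// Illustrative KV cache-aware routing sketch. Hypothetical types and
// weights only; a real router would consume vLLM prefix-cache telemetry.
package main

import "fmt"

// replica models one vLLM pod the router can choose from.
type replica struct {
	name           string
	cachedBlocks   map[uint64]bool // hashes of KV-cache blocks this pod holds
	queuedRequests int             // current queue depth (load signal)
}

// score favors replicas that can reuse more cached prefix blocks and
// disfavors replicas that are already busy.
func score(r replica, prefixBlocks []uint64) float64 {
	hits := 0
	for _, h := range prefixBlocks {
		if !r.cachedBlocks[h] {
			break // prefix reuse stops at the first non-cached block
		}
		hits++
	}
	cacheScore := 0.0
	if len(prefixBlocks) > 0 {
		cacheScore = float64(hits) / float64(len(prefixBlocks))
	}
	return cacheScore - 0.1*float64(r.queuedRequests) // 0.1: hypothetical load weight
}

// route picks the best-scoring replica for a request's prefix-block hashes.
func route(replicas []replica, prefixBlocks []uint64) replica {
	best := replicas[0]
	bestScore := score(best, prefixBlocks)
	for _, r := range replicas[1:] {
		if s := score(r, prefixBlocks); s > bestScore {
			best, bestScore = r, s
		}
	}
	return best
}

func main() {
	req := []uint64{0xa1, 0xb2, 0xc3, 0xd4} // block hashes of the prompt prefix
	pods := []replica{
		{"pod-a", map[uint64]bool{0xa1: true, 0xb2: true}, 3},
		{"pod-b", map[uint64]bool{0xa1: true}, 0},
	}
	fmt.Println("routing to:", route(pods, req).name) // pod-b: less cache, but idle
}
```

The example deliberately shows the core trade-off this role works on: a replica with a warmer cache is not always the right target if it is heavily loaded, so the scoring function must balance cache reuse against queueing delay.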
What You Will Bring
Strong proficiency in Python and Go, plus at least one of the following: Rust or C++.
Experience with cloud-native Kubernetes service mesh technologies and stacks such as Istio, Cilium, Envoy (WASM filters), and CNI.
A solid understanding of Layer 7 networking, HTTP/2, gRPC, and the fundamentals of API gateways and reverse proxies.
Working knowledge of high-performance networking protocols and technologies, including UCX, RoCE, InfiniBand, and RDMA, is a plus.
Excellent communication skills, capable of interacting effectively with both technical and non‑technical team members.
A Bachelor's or Master's degree in computer science, computer engineering, or a related field.
Preferred Qualifications
Experience with the Kubernetes ecosystem, including core concepts, custom APIs, operators, and the Gateway API Inference Extension for GenAI workloads.
Experience with GPU performance benchmarking and profiling tools like NVIDIA Nsight, or distributed tracing libraries and techniques like OpenTelemetry.
A Ph.D. in an ML-related domain is a significant advantage.
The salary range for this position is $189,600.00 – $312,730.00. The actual offer will be based on your qualifications.
Benefits
Comprehensive medical, dental, and vision coverage
Flexible Spending Account – healthcare and dependent care
Health Savings Account – high deductible medical plan
Retirement 401(k) with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, marital status, or any other basis prohibited by law.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com.
Location: Boston, Massachusetts, United States