Principal Machine Learning Engineer, vLLM and llama.cpp Inference
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, we invent state-of-the-art techniques for model quantization and sparsification, providing a stable platform for enterprises to build, optimize, and scale LLM deployments. You will be joining the core team behind 2025's most popular open-source project on GitHub.
What You Will Do
Build and maintain distributed inference infrastructure using Kubernetes APIs, operators, and the Gateway API Inference Extension for scalable LLM deployments
Develop systems components in Go and/or Rust to integrate with the vLLM project and manage distributed inference workloads
Design and implement KV‑cache aware routing and scoring algorithms to optimize memory utilization and request distribution across large‑scale inference deployments
Improve the resource utilization, fault tolerance, and stability of the inference stack
Contribute to the design, development, and testing of various inference optimization algorithms
Participate in technical design discussions and provide innovative solutions to complex problems
Give thoughtful and prompt code reviews
Mentor and guide other engineers and foster a culture of continuous learning and innovation
What You Will Bring
Strong proficiency in Python and one or more systems programming languages (Go, Rust, C++)
Strong understanding of computer architecture, parallel processing, and distributed computing concepts
Experience with the Kubernetes ecosystem, including custom APIs, operators, and the Gateway API Inference Extension for GenAI workloads (nice to have)
Experience with cloud-native Kubernetes service mesh technologies and stacks such as Istio, Cilium, Envoy (including WASM filters), and CNI
Experience with tensor math libraries such as PyTorch
Working understanding of high‑performance networking protocols and technologies including UCX, RoCE, InfiniBand, and RDMA
Experience with mathematical software, especially linear algebra or signal processing libraries
Deep understanding of and hands-on experience with GPU performance optimization
Experience optimizing kernels for deep neural networks
Experience with profiling tools such as NVIDIA Nsight, or with distributed tracing libraries and techniques such as OpenTelemetry, is a plus
Strong communication skills with both technical and non‑technical team members
BS or MS in computer science, computer engineering, or a related field; a PhD in an ML-related domain is considered a plus
Compensation
The salary range for this position is $189,600.00 – $312,730.00. The actual offer will be based on your qualifications. Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. The position may also be eligible for bonus, commission, and/or equity. For Remote-US locations, the actual salary range may differ based on location but will be commensurate with job duties and relevant work experience.
Benefits
Comprehensive medical, dental, and vision coverage
Flexible Spending Account – healthcare and dependent care
Health Savings Account – high deductible medical plan
Retirement 401(k) with employer match
Paid time off and holidays
Paid parental leave plans for all new parents
Leave benefits including disability, paid family medical leave, and paid military leave
Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more
Note: These benefits are only applicable to full-time, permanent associates at Red Hat located in the United States.
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application‑assistance@redhat.com.
Location: Boston, Massachusetts, United States