Job Description

Job Summary
At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM and LLM-D projects, and inventors of state-of-the-art techniques for model quantization and sparsification, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

As a Principal Machine Learning Engineer focused on distributed infrastructure, you will collaborate with our team to tackle the most pressing challenges in scalable inference systems and Kubernetes-native deployments. Your work with distributed systems and cloud infrastructure will directly impact enterprise AI deployments. You would be joining the core team behind vLLM! If you want to solve challenging technical problems in distributed systems, we would love to hear from you.

Ready to Apply?

Take the next step in your AI career. Submit your application to Red Hat, LLC today.

Submit Application