Job Description

AWS Neuron is the software stack powering AWS Inferentia and Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure to serve modern machine learning models—including large language models (LLMs) and multimodal workloads—reliably and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model serving infrastructure, with a particular focus on large-scale generative AI applications.

Key job responsibilities
* Architect and lead the design of distributed ML serving systems optimized for generative AI workloads
* Drive technical excellence in performance optimization and system reliability across the Neuron ecosystem
* Design and implement scalable solutions for both offline and online inference workloads
* Lead integration efforts with frameworks such as vLLM, SGLang, Torch XLA, TensorRT, and ...

Ready to Apply?

Take the next step in your AI career. Submit your application to Amazon today.
