Job Description

AWS Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators. This role is for a senior software engineer on the Machine Learning Inference Applications team, responsible for developing and optimizing the performance of the core building blocks of LLM inference: Attention, MLP, Quantization, Speculative Decoding, Mixture of Experts, and more.

The team works side by side with chip architects, compiler engineers, and runtime engineers to deliver performance and accuracy on Neuron devices across a range of models, such as Llama 3.3 70B, Llama 3.1 405B, DBRX, and Mixtral.

Key job responsibilities
Responsibilities of this role include adapting the latest research in LLM optimization to Neuron chips to extract the best performance from both open-source and internally developed models. Working across teams and organizations is key.

About the team
Our team is dedicated to supporting new members....

Ready to Apply?

Take the next step in your AI career. Submit your application to Amazon Development Center U.S., Inc. today.
