The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation. The Inferentia chip delivers best-in-class ML inference performance at the lowest cost in the cloud.
Trainium will deliver best-in-class ML training performance, with the most teraflops (TFLOPS) of ML compute power in the cloud.
All of this is enabled by a cutting-edge software stack, the AWS Neuron Software Development Kit (SDK), which includes an ML compiler and runtime and integrates natively with popular ML frameworks such as PyTorch, TensorFlow, and MXNet.
The Neuron SDK optimizes the performance of complex neural network models executed on AWS Inferentia and Trainium. AWS Neuron is used at scale by customers and partners such as PyTorch, Epic Games, Snap, Airbnb, Autodesk, Amazon Alexa, and Amazon Rekognition, as well as customers across many other segments.
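As a rough illustration of that native framework integration, the sketch below shows how a user might compile a small PyTorch model for Inferentia with the torch-neuron package. This is a minimal, illustrative example only; the exact package and API names (here assumed to be torch_neuron and torch.neuron.trace) vary across Neuron SDK releases and target devices.

import torch
import torch_neuron  # assumed Inf1-era Neuron package; registers the torch.neuron namespace

# Any traceable torch.nn.Module would do; a tiny classifier is used purely for illustration.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# torch.neuron.trace invokes the Neuron compiler on the traced graph and returns a module
# whose supported operators execute on Inferentia NeuronCores.
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])

# The compiled artifact behaves like a TorchScript module and can be saved for deployment.
output = model_neuron(example_input)
model_neuron.save("model_neuron.pt")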
The Team: The Amazon Annapurna Labs team is responsible for building innovation in silicon and software for AWS customers.
We are at the forefront of innovation, combining cloud scale with the world's most talented engineers. Our team covers multiple disciplines, including silicon engineering, hardware design and verification, software, and operations.
With such breadth of talent, there is always an opportunity to learn. We operate in spaces that are very large, yet our teams remain small and agile.
There is no blueprint. We're inventing. We're experimenting. Couple that with the ability to work on so many different products and services, and you get a truly unique learning culture.
You: As a Manager III on the AWS Neuron team, you'll lead a team of compiler engineers developing, deploying, and scaling a compiler that targets AWS Inferentia and Trainium.
You'll need to be technically capable, credible, and curious in your own right as a trusted AWS Neuron manager, innovating on behalf of our customers.
You'll leverage your vision and technical communication skills as a hands-on partner to AWS ML services teams: contributing to pre-silicon design, bringing new products, optimizations, and features to market, and driving other projects that ensure the Neuron SDK exceeds our customers' needs for high performance, low cost, and ease of use.
You will have deep knowledge of resource management, scheduling, code generation, optimization, and new instruction set architectures, including CPU, NPU, GPU, and novel forms of compute.
To be considered for this role, candidates must be located in Toronto or willing to relocate there.
Key job responsibilities
- Lead and grow a team of compiler engineers developing, deploying, and scaling the Neuron compiler for AWS Inferentia and Trainium.
- Partner hands-on with AWS ML services teams, from pre-silicon design through bringing new products, optimizations, and features to market.
- Ensure the Neuron SDK exceeds customers' needs for high performance, low cost, and ease of use.
BASIC QUALIFICATIONS
- 3+ years of engineering team management experience
- 6+ years of experience working directly within engineering teams
- 4+ years of experience designing or architecting (design patterns, reliability, and scaling) new and existing systems
- Experience partnering with product or program management teams
- Excellent software design fundamentals, knowledge of software engineering principles, and a deep understanding of compilers (resource management, instruction scheduling, code generation, and compute graph optimization)
PREFERRED QUALIFICATIONS
- M.S. or Ph.D. in Computer Science or related technical field
- Experience with toolchains (LLVM, GCC) and code generation techniques for new hardware
- Knowledge of compiler internals, from the front end to the run-time environment, with an emphasis on AI acceleration