Talent.com

AI Systems Engineer – AI Model (Training & Inference)

Advanced Micro Devices, Inc • MARKHAM, Ontario, Canada
17 days ago
Job type
  • Full-time
Job description


WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.




THE ROLE/PERSON:

The AMD AI Group is looking for a Senior Software Development Engineer to own the end-to-end model execution stack on AMD Instinct GPUs, spanning training infrastructure at scale and high-performance inference serving. This role demands someone who has shipped LLMs on real hardware, written GPU kernels that moved production metrics, and built the systems infrastructure (orchestration, storage, monitoring) that keeps thousands of GPUs productive. You will be instrumental in ensuring AMD GPUs are first-class citizens for frontier model training and inference across current and next-generation Instinct accelerators.

KEY RESPONSIBILITIES:

Training Infrastructure & Enablement

  • Enable and optimize large-scale model training (LLMs, VLMs, MoE architectures) on AMD Instinct GPU clusters, ensuring correctness, reproducibility, and competitive throughput.
  • Build and maintain training infrastructure: job orchestration, distributed checkpointing, data loading pipelines, and storage optimization for multi-thousand GPU clusters on Kubernetes.
  • Debug and resolve training-specific issues including gradient norm explosions, non-deterministic behavior across GPU generations, and compute-communication overlap in distributed training (FSDP, DeepSpeed, Megatron-LM).
  • Optimize RCCL collective communication patterns for training workloads, including all-reduce, all-gather, and reduce-scatter across multi-node topologies.
  • Develop monitoring, alerting, and compliance infrastructure to ensure training cluster health, data security, and SLA adherence at scale.
  • Design and build end-to-end validation and testing infrastructure using proxy workloads, synthetic benchmarks, and configurable workload generators to systematically validate platform readiness across AMD Instinct GPU generations.
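The RCCL collectives above (all-reduce, all-gather, reduce-scatter) all build on the same ring pattern. The following is an illustrative, pure-Python simulation of ring all-reduce, not RCCL itself: "ranks" are plain lists in one process rather than GPUs, and the two phases (reduce-scatter, then all-gather) mirror the classic bandwidth-optimal algorithm.

```python
def ring_all_reduce(buffers):
    """Sum-reduce equal-length buffers across all simulated ranks, in place.

    Phase 1 (reduce-scatter): after n-1 steps, rank r holds the fully
    reduced chunk (r + 1) % n. Phase 2 (all-gather): each rank's reduced
    chunk circulates around the ring until every rank has every chunk.
    """
    n = len(buffers)                      # number of simulated ranks
    size = len(buffers[0])
    assert size % n == 0, "buffer must split evenly into n chunks"
    chunk = size // n

    # Phase 1: reduce-scatter. At step s, rank r sends chunk (r - s) % n
    # to its ring neighbor, which accumulates it.
    for step in range(n - 1):
        for r in range(n):
            src = (r - step) % n          # chunk index rank r forwards
            dst = (r + 1) % n             # ring neighbor
            lo, hi = src * chunk, (src + 1) * chunk
            for i in range(lo, hi):
                buffers[dst][i] += buffers[r][i]

    # Phase 2: all-gather. Fully reduced chunks are copied around the ring.
    for step in range(n - 1):
        for r in range(n):
            src = (r + 1 - step) % n
            dst = (r + 1) % n
            lo, hi = src * chunk, (src + 1) * chunk
            buffers[dst][lo:hi] = buffers[r][lo:hi]
    return buffers
```

Each rank sends and receives only 2·(n−1)/n of the buffer size in total, which is why the ring pattern remains the baseline for bandwidth-bound collectives on multi-node topologies.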

Inference Optimization & Serving

  • Write and optimize high-performance GPU kernels (GEMM, attention, quantized matmul, GPTQ/AWQ) in HIP, Triton, and MLIR targeting AMD Instinct architectures, with demonstrated ability to outperform open-source baselines.
  • Drive end-to-end inference enablement on new AMD GPU silicon: be among the first to get frontier models running on each new Instinct generation, creating reproducible guides and reference implementations.
  • Optimize inference serving frameworks (vLLM, SGLang, TorchServe) for AMD GPUs: batching strategies, KV-cache management, speculative decoding, and continuous batching for production throughput/latency targets.
  • Develop novel approaches to inference acceleration, including bio-inspired algorithms, SLM-assisted batching, and custom scheduling strategies that exploit AMD hardware characteristics.
  • Build quantization pipelines (FP8, FP6, FP4, GPTQ, AWQ) for production model deployment, ensuring quality-performance tradeoffs are well-characterized across AMD GPU generations.
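As a minimal sketch of what "well-characterized quality-performance tradeoffs" means in a quantization pipeline: per-tensor symmetric quantization with a round-trip error measurement. Int8 stands in here for any low-precision format; real pipelines (FP8, GPTQ, AWQ) add calibration data and per-channel or per-group scales, so this is illustrative only.

```python
def quantize_symmetric(weights, num_bits=8):
    """Map floats to signed integers using a single per-tensor scale."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from quantized integers."""
    return [x * scale for x in q]

def max_abs_error(weights, num_bits):
    """One simple way to characterize the quality cost of a bit-width."""
    q, scale = quantize_symmetric(weights, num_bits)
    deq = dequantize(q, scale)
    return max(abs(w - d) for w, d in zip(weights, deq))
```

Sweeping `max_abs_error` (or a task-level metric like perplexity) across bit-widths and model layers is the kind of characterization the bullet above refers to: lower precision buys memory bandwidth and throughput at a measurable, layer-dependent accuracy cost.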

Cross-Cutting

  • Collaborate with AMD silicon architecture and pre-silicon teams to provide software feedback and validate software stack integration on next-generation Instinct GPU designs for both training and inference workloads.
  • Build observability and automated analysis tooling: log analysis pipelines, anomaly detection, performance baselining, regression detection, and diagnostic workflows for large-scale GPU clusters.
  • Contribute to the open ROCm ecosystem and AMD's developer experience — SDKs, CI dashboards, documentation, and developer cloud enablement.
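One small piece of the observability tooling described above, sketched as threshold-based regression detection against a rolling baseline. The window size and tolerance are arbitrary assumptions, and production systems would use robust statistics (median, MAD) and per-workload baselines rather than a plain mean.

```python
from collections import deque

class RegressionDetector:
    """Flag throughput samples that drop below a rolling baseline."""

    def __init__(self, window=20, tolerance=0.05):
        self.window = deque(maxlen=window)   # recent healthy samples
        self.tolerance = tolerance           # allowed fractional drop

    def observe(self, throughput):
        """Return True if this sample regresses vs. the rolling baseline."""
        if len(self.window) < self.window.maxlen:
            self.window.append(throughput)
            return False                     # still warming up
        baseline = sum(self.window) / len(self.window)
        if throughput < baseline * (1 - self.tolerance):
            return True                      # flag; keep baseline clean
        self.window.append(throughput)       # healthy sample updates baseline
        return False
```

Excluding flagged samples from the window is a deliberate choice: it keeps a sustained regression from dragging the baseline down and masking itself.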

REQUIRED EXPERIENCE:

Industry experience shipping production AI/ML infrastructure, with hands-on work spanning both training and inference.

PREFERRED EXPERIENCE:

  • Direct experience enabling frontier models (GPT-4 class) on AMD Instinct hardware end-to-end.
  • Background in building anomaly detection, log analysis, or observability systems for large-scale distributed GPU infrastructure.
  • Familiarity with AMD Instinct MI-series architectures (MI300X, MI350X, MI355X) and RCCL communication library.
  • Contributions to open-source AI frameworks (PyTorch, vLLM, SGLang, DeepSpeed, Megatron-LM).
  • Experience designing validation frameworks, proxy benchmarks, or synthetic workload suites for GPU infrastructure at scale.
  • Experience with pre-silicon software validation or hardware-software co-verification workflows.
  • Publications or patents in HPC, ML systems, or GPU kernel optimization.

PREFERRED ACADEMIC CREDENTIALS:

  • Bachelor’s, Master’s, or Ph.D. degree in Computer/Software Engineering, Computer Science, or a related technical discipline

This role is not eligible for visa sponsorship.

#LI-G11

#LI-HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here.

This posting is for an existing vacancy.
