About Cerebras Systems
Cerebras Systems builds the world's largest AI chip, 56 times larger than the largest GPU. Our novel wafer‑scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of one device. This enables industry‑leading training and inference speeds and lets machine learning practitioners run large‑scale ML applications without the hassle of managing hundreds of GPUs or TPUs.
Our customers include global corporations, national labs and top‑tier healthcare systems. In 2024 we launched Cerebras Inference, the fastest generative AI inference solution, over 10 times faster than GPU‑based hyperscale cloud inference services.
About The Role
We are seeking a highly skilled Deployment Engineer to build and operate cutting‑edge inference clusters on the world's largest computer chip, the Wafer‑Scale Engine (WSE). You will play a critical role in ensuring reliable, efficient, and scalable deployment of AI inference workloads across our global infrastructure. On the operational side, you'll own the rollout of new software versions, AI replica updates, and capacity reallocations across our custom‑built, high‑capacity datacenters. Beyond operations, you'll drive improvements to telemetry, observability, and fully automated pipelines using advanced allocation strategies to maximize utilization of large‑scale compute fleets.
The ideal candidate combines hands‑on operational rigor with strong systems engineering skills and thrives on building resilient pipelines that keep pace with cutting‑edge AI models.
This role does not require 24/7 on‑call rotations.
Responsibilities
Deploy AI inference replicas and cluster software across multiple datacenters
Maximize capacity allocation and optimize replica placement using constraint‑solver algorithms
Operate bare‑metal inference infrastructure while supporting the transition to a Kubernetes‑based platform
Develop and extend telemetry, observability and alerting solutions to ensure deployment reliability at scale
Develop and extend a fully automated deployment pipeline to support fast software updates and capacity reallocation at scale
Translate technical and customer needs into actionable requirements for the Dev Infra, Cluster, Platform and Core teams
Stay up to date with the latest advancements in AI compute infrastructure and related technologies
Skills And Requirements
2–5 years of experience operating on‑prem compute infrastructure (ideally in Machine Learning or High‑Performance Computing) or developing and managing complex AWS‑based infrastructure for hybrid deployments
Strong proficiency in Python for automation, orchestration, and deployment tooling
Solid understanding of Linux‑based systems and command‑line tools
Extensive knowledge of Docker containers and container orchestration platforms such as Kubernetes
Familiarity with spine‑leaf (Clos) networking architecture
Proficiency with telemetry and observability stacks such as Prometheus, InfluxDB and Grafana
Strong ownership mindset and accountability for complex deployments
Ability to work effectively in a fast‑paced environment
Location
SF Bay Area
Toronto
Why Join Cerebras
Build a breakthrough AI platform beyond the constraints of the GPU
Publish and open‑source cutting‑edge AI research
Work on one of the fastest AI supercomputers in the world
Enjoy job stability with startup vitality
Our simple, non‑corporate work culture respects individual beliefs
Apply today and join us at the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. Inclusive teams build better products and companies, and we empower people to do their best work through continuous learning, growth, and support of those around them.