We bring together leading minds in communications technology to unify the telecom industry with simplicity, scalability, and efficiency.
This is more than a job—it’s a chance to revolutionize global connectivity.
Aduna, founded by global leaders, opens mobile networks worldwide to developers, fueling innovation and a programmable ecosystem.
We are a scale‑up: agile, fast, and collaborative. Our breakthroughs impact people, businesses, and societies worldwide.
Ready to create what’s next?
As an MLOps Engineer in Aduna’s Cortex team, you’ll design and own the pipelines, infrastructure, and processes that bring AI models from research to secure, scalable production. You’ll work closely with the Head of AI & Innovation in a lean team where decisions move quickly. If you want to shape AI infrastructure powering mission‑critical applications in automation, secure networks, and large‑scale platforms, this is your role.
Responsibilities
- Design, build, and maintain scalable MLOps pipelines across software, network security, and operational AI use cases.
- Implement automated CI/CD for training, evaluation, deployment, and rollback of ML models.
- Develop robust monitoring for drift, performance, latency, and data quality (Grafana, Prometheus).
- Ensure compliance with security and privacy regulations (ISO, GDPR, and AI Act readiness).
- Automate data prep, model training, evaluation, and retraining pipelines.
- Drive best practices for reproducibility, versioning, and governance.
- Continuously evaluate and integrate cutting‑edge tools and frameworks.
Cross‑Functional Collaboration
- Partner with AI scientists, SW engineers, and DevSecOps for seamless model integration.
- Build and maintain GPU and multi‑cloud inference infrastructure.
Must‑Have Qualifications
- 5+ years in MLOps/ML infrastructure with ownership of production lifecycles.
- Strong Python skills and experience with ML frameworks (TensorFlow, PyTorch, Scikit‑learn).
- Expertise in Docker, Kubernetes, and Terraform for scalable AI workloads.
- Experience building CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and ML orchestration (MLflow, Kubeflow).
- Experience deploying models on AWS or Azure with GPU workflows.
- Strong background in observability, monitoring, drift detection, and retraining.
- Familiarity with distributed training/inference and optimization for large‑scale AI.
Nice‑to‑Have
- Experience with LLMOps, federated learning, privacy‑preserving ML, and agent‑based AI (LangChain, Autogen).
- Familiarity with multi‑cloud or edge‑cloud deployments.
- Knowledge of secure AI practices and compliance frameworks.
- Open‑source contributions in ML/MLOps tooling.
Why Aduna?
- Work directly with the Head of AI & Innovation on high‑impact projects.
- Lean team: rapid execution from design to production.
- Shape AI for mission‑critical, secure, real‑time systems.
- Balance remote work with in‑office collaboration.
- Strong culture of technical growth, mentorship, and autonomy.
- Competitive salary, full benefits (health, RRSP/DPSP), and a hybrid model (3 days in Montreal).
- Unlimited PTO: we trust you to manage your time.
Seniority level
Mid‑Senior level
Employment type
Full‑time
Job function
Engineering and Information Technology
Industries
Computer Networking Products, IT System Custom Software Development, and Software Development
Aduna is committed to diversity and inclusion. We welcome applicants from all backgrounds who are passionate about driving the next wave of telecom innovation.