OptiReduce: Resilient and Tail-Optimal AllReduce for Distributed Deep Learning in the Cloud

Authors: 

Ertza Warraich, Purdue University; Omer Shabtai and Khalid Manaa, Nvidia; Shay Vargaftik, VMware Research; Yonatan Piasetzky and Matty Kadosh, Nvidia; Lalith Suresh, Feldera; Muhammad Shahbaz, University of Michigan

Abstract: 

We present OptiReduce, a new collective-communication system for the cloud with bounded, predictable completion times for deep-learning jobs in the presence of computation variability (stragglers) and communication variability (congestion and gradient drops). OptiReduce exploits the inherent resiliency and stochastic nature of distributed deep-learning (DDL) training and fine-tuning to work with approximated (or lost) gradients, providing an efficient balance between (tail) performance and the resulting accuracy of the trained models.

Exploiting this domain-specific characteristic of DDL, OptiReduce introduces (1) mechanisms (e.g., an unreliable bounded transport with adaptive timeouts) to improve the tail execution time of DDL jobs, and (2) strategies (e.g., Transpose AllReduce and the Hadamard Transform) to mitigate the impact of gradient drops on model accuracy. Our evaluation shows that OptiReduce achieves 70% and 30% faster time-to-accuracy (TTA), on average, than Gloo and NCCL, respectively, when operating in shared cloud environments (e.g., CloudLab).
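To make the gradient-drop mitigation concrete, below is a minimal NumPy sketch of the randomized Hadamard transform idea: rotating the gradient with an orthogonal Hadamard matrix (after random sign flips) so that each transmitted coordinate mixes all gradient entries, which spreads the error from any dropped coordinates across the whole vector. The hadamard/encode/decode helpers and the drop simulation are illustrative assumptions, not OptiReduce's actual implementation, which the paper describes in full.

import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n must be a power of 2).
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def encode(grad, seed=0):
    # Randomize signs, then rotate with a normalized Hadamard matrix so every
    # transmitted coordinate carries a mix of all gradient entries.
    n = grad.size
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n)
    H = hadamard(n) / np.sqrt(n)
    return H @ (signs * grad), signs

def decode(coded, signs):
    # The normalized Hadamard matrix is orthogonal and symmetric, hence its own
    # inverse; undo the rotation, then undo the sign flips.
    n = coded.size
    H = hadamard(n) / np.sqrt(n)
    return signs * (H @ coded)

# Simulate drops: zero out some coded coordinates, as if those packets were lost.
g = np.arange(8, dtype=float)
coded, signs = encode(g)
coded[[2, 5]] = 0.0                      # two lost "packets"
approx = decode(coded, signs)
print(np.linalg.norm(approx - g) / np.linalg.norm(g))  # small relative error

Because the rotation mixes every gradient entry into each transmitted coordinate, zeroing a few coordinates perturbs all entries slightly instead of wiping out a few of them entirely, which is what lets stochastic training tolerate the loss.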


BibTeX
@inproceedings{305995,
  author    = {Ertza Warraich and Omer Shabtai and Khalid Manaa and Shay Vargaftik and Yonatan Piasetzky and Matty Kadosh and Lalith Suresh and Muhammad Shahbaz},
  title     = {{OptiReduce}: Resilient and {Tail-Optimal} {AllReduce} for Distributed Deep Learning in the Cloud},
  booktitle = {22nd USENIX Symposium on Networked Systems Design and Implementation (NSDI 25)},
  year      = {2025},
  isbn      = {978-1-939133-46-5},
  address   = {Philadelphia, PA},
  pages     = {685--703},
  url       = {https://www.usenix.org/conference/nsdi25/presentation/warraich},
  publisher = {USENIX Association},
  month     = apr
}
