Abstract

Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced by asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted waiting for straggling workers. We revisit these conventional beliefs in this paper and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating the effect of the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies.
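To make the backup-worker idea concrete, the following is a minimal Python sketch of synchronous SGD with backup workers on a toy quadratic loss: each step launches N + b workers from the same parameters, the update averages the first N gradients to finish, and the slowest b results are discarded. The simulation, the delay model, and names such as N_WORKERS, N_BACKUP, and compute_gradient are illustrative assumptions, not the paper's implementation.

    # Sketch: synchronous SGD with backup workers (illustrative, not the paper's code)
    import heapq
    import numpy as np

    N_WORKERS = 10   # gradients required per step
    N_BACKUP = 2     # extra backup workers; the slowest N_BACKUP results are dropped
    STEPS = 100
    LR = 0.1

    rng = np.random.default_rng(0)
    w = rng.normal(size=5)     # model parameters
    w_star = np.zeros(5)       # optimum of the toy quadratic loss

    def compute_gradient(w, worker_id):
        """Toy gradient of 0.5*||w - w_star||^2 plus noise, with a simulated delay."""
        delay = rng.exponential(1.0)                      # straggler-prone completion time
        grad = (w - w_star) + 0.01 * rng.normal(size=w.shape)
        return delay, grad

    for step in range(STEPS):
        # All N_WORKERS + N_BACKUP workers start from the same parameters.
        results = [compute_gradient(w, i) for i in range(N_WORKERS + N_BACKUP)]
        # Apply the update as soon as the first N_WORKERS gradients arrive;
        # the slowest N_BACKUP gradients are simply discarded.
        fastest = heapq.nsmallest(N_WORKERS, results, key=lambda r: r[0])
        avg_grad = np.mean([g for _, g in fastest], axis=0)
        w -= LR * avg_grad

    print("final distance to optimum:", np.linalg.norm(w - w_star))

Because every applied gradient was computed from the current parameters, the update is free of the staleness noise of asynchronous SGD, while dropping the slowest b workers bounds the per-step wait.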

Keywords

Computer science, Distributed computing

Publication Info

Year: 2017
Type: Preprint
Citations: 609
Access: Closed

Citation Metrics

Citations: 609 (OpenAlex)

Cite This

Jianmin Chen, Rajat Monga, Samy Bengio et al. (2017). Revisiting Distributed Synchronous SGD. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1702.05800

Identifiers

DOI: 10.48550/arxiv.1702.05800