ICLR 2018 | Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Tags: Paper Reading, Gradient Compression
EMNLP 2017 | Sparse Communication for Distributed Gradient Descent
Tags: Paper Reading, Gradient Compression
INTERSPEECH 2015 | Scalable Distributed DNN Training Using Commodity GPU Cloud Computing
Tags: Paper Reading, Gradient Compression
INTERSPEECH 2014 | 1-Bit Stochastic Gradient Descent and its Application to Data-Parallel Distributed Training of Speech DNNs
Tags: Paper Reading, Machine Learning, Gradient Compression
NeurIPS 2017 | QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
Tags: Paper Reading, Gradient Compression
MLHPC 2016 | Communication Quantization for Data-parallel Training of Deep Neural Networks
Tags: Paper Reading, Gradient Compression
NeurIPS 2017 | TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
Tags: Paper Reading, Gradient Compression
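Every paper listed above attacks the same bottleneck: shrinking the gradient traffic exchanged between workers in distributed training, either by sending only a sparse subset of values or by quantizing them to a few bits. As a rough illustration of the sparsification branch (a minimal sketch, not the exact method of any paper above; `topk_sparsify` and the 1% ratio are illustrative choices), here is top-k gradient selection with local residual accumulation in NumPy:

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Illustrative sketch: keep only the largest-magnitude `ratio`
    fraction of gradient entries for transmission; everything else
    stays behind as a residual to be accumulated into the next step."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k largest-magnitude entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    values = flat[idx]
    # Untransmitted entries form the local residual.
    residual = flat.copy()
    residual[idx] = 0.0
    return idx, values, residual.reshape(grad.shape)

# Usage: compress a dummy gradient and rebuild the sparse update.
grad = np.random.randn(4, 256).astype(np.float32)
idx, vals, residual = topk_sparsify(grad, ratio=0.01)
sparse = np.zeros(grad.size, dtype=np.float32)
sparse[idx] = vals
sparse = sparse.reshape(grad.shape)
print(f"sent {idx.size} of {grad.size} values ({idx.size / grad.size:.1%})")
```

Deep Gradient Compression builds on this kind of top-k selection (adding momentum correction and gradient clipping), while QSGD, TernGrad, and 1-bit SGD take the quantization route instead.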