How to Read Papers?
Paper Reading
ICLR 2018 | Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Paper Reading
Gradient Compression
Large Scale Distributed Deep Networks
Paper Reading
Parameter Server
Aluminum: An Asynchronous, GPU-Aware Communication Library Optimized for Large-Scale Training of Deep Neural Networks on HPC Systems
Paper Reading
mpi
CoRR 2015 | MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems
Paper Reading
KVStore
mxnet
EMNLP 2017 | Sparse Communication for Distributed Gradient Descent
Paper Reading
Gradient Compression
CoRR 2018 | Horovod: Fast and Easy Distributed Deep Learning in TensorFlow
Paper Reading
tensorflow
Horovod
mxnet
INTERSPEECH 2015 | Scalable Distributed DNN Training Using Commodity GPU Cloud Computing
Paper Reading
Gradient Compression
INTERSPEECH 2014 | 1-Bit Stochastic Gradient Descent and its Application to Data-Parallel Distributed Training of Speech DNNs
Paper Reading
Machine Learning
Gradient Compression
NeurIPS 2017 | QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
Paper Reading
Gradient Compression
MLHPC 2016 | Communication Quantization for Data-parallel Training of Deep Neural Networks
Paper Reading
Gradient Compression
NeurIPS 2017 | TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning
Paper Reading
Gradient Compression
Tags