Shaoduo Gan
Verified email at inf.ethz.ch
Communication Compression for Decentralized Training
H Tang, S Gan, C Zhang, T Zhang, J Liu
NeurIPS 2018, 7652-7662, 2018
Cited by 110
1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
H Tang, S Gan, AA Awan, S Rajbhandari, C Li, X Lian, J Liu, C Zhang, ...
arXiv preprint arXiv:2102.02888, 2021
Ease.ML: A Lifecycle Management System for Machine Learning
L Aguilar Melgar, D Dao, S Gan, NM Gürel, N Hollenstein, J Jiang, ...
11th Annual Conference on Innovative Data Systems Research (CIDR 2021), virtual, 2021
APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm
H Tang, S Gan, S Rajbhandari, X Lian, C Zhang, J Liu, Y He
arXiv preprint arXiv:2008.11343, 2020
Multi-Step Decentralized Domain Adaptation
A Mathur, S Gan, A Isopoussu, F Kawsar, N Berthouze, ND Lane
2019
Distributed Asynchronous Domain Adaptation: Towards Making Domain Adaptation More Practical in Real-World Systems
S Gan, A Mathur, A Isopoussu, N Berthouze, ND Lane, F Kawsar
SysML Workshop at NeurIPS 2019, 2019