Hongyi Wang
Verified email at andrew.cmu.edu - Homepage
Title | Cited by | Year
Atomo: Communication-efficient learning via atomic sparsification
H Wang, S Sievert, Z Charles, S Liu, S Wright, D Papailiopoulos
arXiv preprint arXiv:1806.04090, 2018
161 | 2018
Federated Learning with Matched Averaging
H Wang, M Yurochkin, Y Sun, D Papailiopoulos, Y Khazaeni
ICLR 2020 - International Conference on Learning Representations, 2019
159 | 2019
Draco: Byzantine-resilient distributed training via redundant gradients
L Chen, H Wang, Z Charles, D Papailiopoulos
International Conference on Machine Learning, 903-912, 2018
152* | 2018
Attack of the tails: Yes, you really can backdoor federated learning
H Wang, K Sreenivasan, S Rajput, H Vishwakarma, S Agarwal, J Sohn, ...
arXiv preprint arXiv:2007.05084, 2020
52 | 2020
DETOX: A redundancy-based framework for faster and more robust gradient aggregation
S Rajput, H Wang, Z Charles, D Papailiopoulos
arXiv preprint arXiv:1907.12205, 2019
46 | 2019
Fedml: A research library and benchmark for federated machine learning
C He, S Li, J So, X Zeng, M Zhang, H Wang, X Wang, P Vepakomma, ...
arXiv preprint arXiv:2007.13518, 2020
39 | 2020
ErasureHead: Distributed Gradient Descent without Delays Using Approximate Gradient Coding
H Wang, Z Charles, D Papailiopoulos
arXiv preprint arXiv:1901.09671, 2019
36* | 2019
The effect of network width on the performance of large-batch training
L Chen, H Wang, J Zhao, D Papailiopoulos, P Koutris
arXiv preprint arXiv:1806.03791, 2018
16 | 2018
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
7 | 2021
Accordion: Adaptive gradient communication via critical learning regime identification
S Agarwal, H Wang, K Lee, S Venkataraman, D Papailiopoulos
arXiv preprint arXiv:2010.16248, 2020
7 | 2020
Recognizing actions during tactile manipulations through force sensing
G Subramani, D Rakita, H Wang, J Black, M Zinn, M Gleicher
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems …, 2017
5 | 2017
Pufferfish: Communication-efficient Models At No Extra Cost
H Wang, S Agarwal, D Papailiopoulos
arXiv preprint arXiv:2103.03936, 2021
3 | 2021
On the Utility of Gradient Compression in Distributed Training Systems
S Agarwal, H Wang, S Venkataraman, D Papailiopoulos
arXiv preprint arXiv:2103.00543, 2021
3 | 2021
Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients
L Chen, L Chen, H Wang, S Davidson, E Dobriban
arXiv preprint arXiv:2110.01595, 2021
2021
Demonstration of Nimbus: Model-based Pricing for Machine Learning in a Data Marketplace
L Chen, H Wang, L Chen, P Koutris, A Kumar
Proceedings of the 2019 International Conference on Management of Data, 1885 …, 2019
2019
Avoiding Negative Transfer on a Focused Task with Deep Multi-Task Reinforcement Learning
S Liu, H Wang, Y Liang, A Gitter