Gang Niu
RIKEN Center for Advanced Intelligence Project
Verified email at postman.riken.jp
Title · Cited by · Year
Co-teaching: Robust training of deep neural networks with extremely noisy labels
B Han, Q Yao, X Yu, G Niu, M Xu, W Hu, IW Tsang, M Sugiyama
NeurIPS 2018, 2018
Cited by 625 · 2018
Positive-unlabeled learning with non-negative risk estimator
R Kiryo, G Niu, MC Plessis, M Sugiyama
NeurIPS 2017 (oral), 2017
Cited by 234 · 2017
Analysis of learning from positive and unlabeled data
MC du Plessis, G Niu, M Sugiyama
NeurIPS 2014, 2014
Cited by 230 · 2014
How does disagreement help generalization against label corruption?
X Yu, B Han, J Yao, G Niu, IW Tsang, M Sugiyama
ICML 2019, 2019
Cited by 205* · 2019
Convex formulation for learning from positive and unlabeled data
MC du Plessis, G Niu, M Sugiyama
ICML 2015, 2015
Cited by 201 · 2015
Class-prior estimation for learning from positive and unlabeled data
MC du Plessis, G Niu, M Sugiyama
Machine Learning 106 (4), 463--492, 2017
Cited by 164* · 2017
Analysis and improvement of policy gradient estimation
T Zhao, H Hachiya, G Niu, M Sugiyama
NeurIPS 2011, 2011
Cited by 132 · 2011
Masking: A new perspective of noisy supervision
B Han, J Yao, G Niu, M Zhou, IW Tsang, Y Zhang, M Sugiyama
NeurIPS 2018, 2018
Cited by 114 · 2018
Are anchor points really indispensable in label-noise learning?
X Xia, T Liu, N Wang, B Han, C Gong, G Niu, M Sugiyama
NeurIPS 2019, 2019
Cited by 96 · 2019
Semi-supervised classification based on classification from positive and unlabeled data
T Sakai, MC du Plessis, G Niu, M Sugiyama
ICML 2017, 2017
Cited by 91 · 2017
Does distributionally robust supervised learning give robust classifiers?
W Hu, G Niu, I Sato, M Sugiyama
ICML 2018, 2018
Cited by 90 · 2018
Theoretical comparisons of positive-unlabeled learning against positive-negative learning
G Niu, MC du Plessis, T Sakai, Y Ma, M Sugiyama
NeurIPS 2016, 2016
Cited by 87 · 2016
Information-theoretic semi-supervised metric learning via entropy regularization
G Niu, B Dai, M Yamada, M Sugiyama
ICML 2012, 2012
Cited by 87 · 2012
Information-maximization clustering based on squared-loss mutual information
M Sugiyama, G Niu, M Yamada, M Kimura, H Hachiya
Neural Computation 26 (1), 84--131, 2014
Cited by 73* · 2014
Learning from complementary labels
T Ishida, G Niu, W Hu, M Sugiyama
NeurIPS 2017, 2017
Cited by 67 · 2017
Attacks which do not kill training make adversarial learning stronger
J Zhang, X Xu, B Han, G Niu, L Cui, M Sugiyama, M Kankanhalli
ICML 2020, 2020
Cited by 62 · 2020
Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning
G Niu, W Jitkrittum, B Dai, H Hachiya, M Sugiyama
ICML 2013, 2013
Cited by 57 · 2013
On the minimal supervision for training any binary classifier from only unlabeled data
N Lu, G Niu, AK Menon, M Sugiyama
ICLR 2019, 2019
Cited by 53 · 2019
SIGUA: Forgetting may make learning with noisy labels more robust
B Han, G Niu, X Yu, Q Yao, M Xu, IW Tsang, M Sugiyama
ICML 2020, 2020
Cited by 51* · 2020
Classification from pairwise similarity and unlabeled data
H Bao, G Niu, M Sugiyama
ICML 2018, 2018
Cited by 49 · 2018