Tianxing He
Verified email at csail.mit.edu - Homepage
Title · Cited by · Year
Reshaping deep neural network for fast decoding by node-pruning
T He, Y Fan, Y Qian, T Tan, K Yu
2014 IEEE International Conference on Acoustics, Speech and Signal …, 2014
Cited by 82 · 2014
On training bi-directional neural network language model with noise contrastive estimation
T He, Y Zhang, J Droppo, K Yu
2016 10th International Symposium on Chinese Spoken Language Processing …, 2016
Cited by 10 · 2016
Detecting egregious responses in neural sequence-to-sequence models
T He, J Glass
arXiv preprint arXiv:1809.04113, 2018
Cited by 9 · 2018
Recurrent neural network language model with structured word embeddings for speech recognition
T He, X Xiang, Y Qian, K Yu
2015 IEEE International Conference on Acoustics, Speech and Signal …, 2015
Cited by 8 · 2015
Exploiting LSTM structure in deep neural networks for speech recognition
T He, J Droppo
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 6 · 2016
Automatic model redundancy reduction for fast back-propagation for deep neural networks in speech recognition
Y Qian, T He, W Deng, K Yu
2015 International Joint Conference on Neural Networks (IJCNN), 1-6, 2015
Cited by 6 · 2015
Paragraph vector based topic model for language model adaptation
W Jin, T He, Y Qian, K Yu
Sixteenth Annual Conference of the International Speech Communication …, 2015
Cited by 5 · 2015
Multi-view LSTM language model with word-synchronized auxiliary feature for LVCSR
Y Wu, T He, Z Chen, Y Qian, K Yu
Chinese Computational Linguistics and Natural Language Processing Based on …, 2017
Cited by 4 · 2017
Quantifying exposure bias for neural language generation
T He, J Zhang, Z Zhou, J Glass
arXiv preprint arXiv:1905.10617, 2019
Cited by 2 · 2019
Negative training for neural dialogue response generation
T He, J Glass
arXiv preprint arXiv:1903.02134, 2019
Cited by 1 · 2019
An investigation on DNN-derived bottleneck features for GMM-HMM based robust speech recognition
Y You, Y Qian, T He, K Yu
2015 IEEE China Summit and International Conference on Signal and …, 2015
Cited by 1 · 2015
Mix-review: Alleviating Forgetting in the Pretrain-Finetune Framework for Neural Language Generation Models
T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng
arXiv preprint arXiv:1910.07117, 2019
2019
Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity
J Zhang, T He, S Sra, A Jadbabaie
International Conference on Learning Representations, 2019
2019
Analysis of Gradient Clipping and Adaptive Scaling with a Relaxed Smoothness Condition
J Zhang, T He, S Sra, A Jadbabaie
arXiv preprint arXiv:1905.11881, 2019
2019
Articles 1–14