Jing Liu
PhD Candidate, Monash University
Verified email at monash.edu
Title / Cited by / Year
Discrimination-aware channel pruning for deep neural networks
Z Zhuang, M Tan, B Zhuang, J Liu, Y Guo, Q Wu, J Huang, J Zhu
Advances in Neural Information Processing Systems 31, 2018
Cited by 684, 2018
Scalable vision transformers with hierarchical pooling
Z Pan, B Zhuang, J Liu, H He, J Cai
Proceedings of the IEEE/CVF International Conference on Computer Vision, 377-386, 2021
Cited by 123, 2021
Generative low-bitwidth data free quantization
S Xu, H Li, B Zhuang, J Liu, J Cao, C Liang, M Tan
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 106, 2020
Discrimination-aware Network Pruning for Deep Model Compression
J Liu, B Zhuang, Z Zhuang, Y Guo, J Huang, J Zhu, M Tan
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 1, 1-15, 2021
Cited by 96, 2021
Less is more: Pay less attention in vision transformers
Z Pan, B Zhuang, H He, J Liu, J Cai
Proceedings of the AAAI Conference on Artificial Intelligence 36 (2), 2035-2043, 2022
Cited by 52, 2022
Effective training of convolutional neural networks with low-bitwidth weights and activations
B Zhuang, M Tan, J Liu, L Liu, I Reid, C Shen
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (10), 6140 …, 2021
Cited by 37, 2021
AQD: Towards accurate quantized object detection
P Chen, J Liu, B Zhuang, M Tan, C Shen
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 28, 2021
Pruning self-attentions into convolutional layers in single path
H He, J Cai, J Liu, Z Pan, J Zhang, D Tao, B Zhuang
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Cited by 27, 2024
A survey on efficient training of transformers
B Zhuang, J Liu, Z Pan, H He, Y Weng, C Shen
arXiv preprint arXiv:2302.01107, 2023
Cited by 21, 2023
EcoFormer: Energy-saving attention with linear complexity
J Liu, Z Pan, H He, J Cai, B Zhuang
NeurIPS Spotlight, 2022
Cited by 17, 2022
Deep Transferring Quantization
Z Xie, Z Wen, J Liu, Z Liu, X Wu, M Tan
European Conference on Computer Vision (ECCV) 2020, 2020
Cited by 15*, 2020
Conditional automated channel pruning for deep neural networks
Y Liu, Y Guo, J Guo, L Jiang, J Chen
IEEE Signal Processing Letters 28, 1275-1279, 2021
Cited by 14, 2021
Mesa: A memory-saving training framework for transformers
Z Pan, P Chen, H He, J Liu, J Cai, B Zhuang
arXiv preprint arXiv:2111.11124, 2021
Cited by 13, 2021
Sharpness-aware quantization for deep neural networks
J Liu, J Cai, B Zhuang
arXiv preprint arXiv:2111.12273, 2021
Cited by 12, 2021
PTQD: Accurate post-training quantization for diffusion models
Y He, L Liu, J Liu, W Wu, H Zhou, B Zhuang
Advances in Neural Information Processing Systems 36, 2024
Cited by 10, 2024
QLLM: Accurate and efficient low-bitwidth quantization for large language models
J Liu, R Gong, X Wei, Z Dong, J Cai, B Zhuang
arXiv preprint arXiv:2310.08041, 2023
Cited by 7, 2023
Single-path bit sharing for automatic loss-aware model compression
J Liu, B Zhuang, P Chen, C Shen, J Cai, M Tan
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
Cited by 6*, 2023
Dynamic Focus-aware Positional Queries for Semantic Segmentation
H He, J Cai, Z Pan, J Liu, J Zhang, D Tao, B Zhuang
CVPR 2023, 2022
Cited by 4, 2022
FocusFormer: Focusing on What We Need via Architecture Sampler
J Liu, J Cai, B Zhuang
arXiv preprint arXiv:2208.10861, 2022
Cited by 3, 2022
Downscaling and Overflow-aware Model Compression for Efficient Vision Processors
H Li, J Liu, L Jia, Y Liang, Y Wang, M Tan
2022 IEEE 42nd International Conference on Distributed Computing Systems …, 2022
Cited by 3, 2022