Yifei Huang
The University of Tokyo
Verified email at ut-vision.org
Title
Cited by
Year
Ego4d: Around the world in 3,000 hours of egocentric video
K Grauman, A Westbury, E Byrne, Z Chavis, A Furnari, R Girdhar, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 517, 2022
Semantic aware attention based deep object co-segmentation
H Chen, Y Huang, H Nakayama
Asian Conference on Computer Vision, 435-450, 2018
Cited by 150, 2018
Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition
Y Huang, M Cai, Z Li, Y Sato
Oral presentation, European Conference on Computer Vision (ECCV), 789-804, 2018
Cited by 123, 2018
Clrnet: Cross layer refinement network for lane detection
T Zheng, Y Huang, Y Liu, W Tang, Z Yang, D Cai, X He
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Cited by 111, 2022
Improving action segmentation via graph-based temporal reasoning
Y Huang, Y Sugano, Y Sato
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
Cited by 111, 2020
Goal-oriented gaze estimation for zero-shot learning
Y Liu, L Zhou, X Bai, Y Huang, L Gu, J Zhou, T Harada
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021
Cited by 100, 2021
Mutual context network for jointly estimating egocentric gaze and action
Y Huang, M Cai, Z Li, F Lu, Y Sato
IEEE Transactions on Image Processing 29, 7795-7806, 2020
Cited by 62, 2020
Manipulation-skill assessment from videos with spatial attention network
Z Li, Y Huang, M Cai, Y Sato
Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019
Cited by 62, 2019
Commonsense knowledge aware concept selection for diverse and informative visual storytelling
H Chen, Y Huang, H Takamura, H Nakayama
Proceedings of the AAAI Conference on Artificial Intelligence 35 (2), 999-1008, 2021
Cited by 36, 2021
Videollm: Modeling video sequence with large language models
G Chen, YD Zheng, J Wang, J Xu, Y Huang, J Pan, Y Wang, Y Wang, ...
arXiv preprint arXiv:2305.13292, 2023
Cited by 35, 2023
Towards visually explaining video understanding networks with perturbation
Z Li, W Wang, Z Li, Y Huang, Y Sato
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2021
Cited by 30, 2021
Interact before align: Leveraging cross-modal knowledge for domain adaptive action recognition
L Yang, Y Huang, Y Sugano, Y Sato
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Cited by 29, 2022
Internvideo-ego4d: A pack of champion solutions to ego4d challenges
G Chen, S Xing, Z Chen, Y Wang, K Li, Y Li, Y Liu, J Wang, YD Zheng, ...
arXiv preprint arXiv:2211.09529, 2022
Cited by 28, 2022
Precise multi-modal in-hand pose estimation using low-precision sensors for robotic assembly
F von Drigalski, K Hayashi, Y Huang, R Yonetani, M Hamaya, K Tanaka, ...
2021 IEEE International Conference on Robotics and Automation (ICRA), 968-974, 2021
Cited by 23, 2021
Compound Prototype Matching for Few-Shot Action Recognition
Y Huang, L Yang, Y Sato
European Conference on Computer Vision, 351-368, 2022
Cited by 21, 2022
Ego-exo4d: Understanding skilled human activity from first-and third-person perspectives
K Grauman, A Westbury, L Torresani, K Kitani, J Malik, T Afouras, ...
arXiv preprint arXiv:2311.18259, 2023
Cited by 13, 2023
An ego-vision system for discovering human joint attention
Y Huang, M Cai, Y Sato
IEEE Transactions on Human-Machine Systems 50 (4), 306-316, 2020
Cited by 13, 2020
Temporal localization and spatial segmentation of joint attention in multiple first-person videos
Y Huang, M Cai, H Kera, R Yonetani, K Higuchi, Y Sato
Proceedings of the IEEE International Conference on Computer Vision, 2313-2321, 2017
Cited by 13, 2017
Learn to recover visible color for video surveillance in a day
G Wu, Y Zheng, Z Guo, Z Cai, X Shi, X Ding, Y Huang, Y Guo, R Shibasaki
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 12, 2020
Weakly supervised temporal sentence grounding with uncertainty-guided self-training
Y Huang, L Yang, Y Sato
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 10, 2023
Articles 1–20