Nicholas Rhinehart
PRECOG: PREdiction Conditioned On Goals in Visual Multi-Agent Settings
N Rhinehart, R McAllister, K Kitani, S Levine
Proceedings of the IEEE International Conference on Computer Vision, 2019
R2P2: A Reparameterized Pushforward Policy for Diverse, Precise Generative Path Forecasting
N Rhinehart, KM Kitani, P Vernaza
Proceedings of the European Conference on Computer Vision (ECCV), 772-788, 2018
First-Person Activity Forecasting with Online Inverse Reinforcement Learning
N Rhinehart, KM Kitani
The IEEE International Conference on Computer Vision (ICCV), 3716-3725, 2017
N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning
A Ashok, N Rhinehart, F Beainy, KM Kitani
International Conference on Learning Representations (ICLR), 2018
Deep Imitative Models for Flexible Inference, Planning, and Control
N Rhinehart, R McAllister, S Levine
International Conference on Learning Representations (ICLR), 2020
Learning Action Maps of Large Environments Via First-Person Vision
N Rhinehart, KM Kitani
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
Predictive-state decoders: Encoding the future into recurrent networks
A Venkatraman*, N Rhinehart*, W Sun, L Pinto, M Hebert, B Boots, ...
Advances in Neural Information Processing Systems, 1172-1183, 2017
Can autonomous vehicles identify, recover from, and adapt to distribution shifts?
A Filos, P Tigkas, R McAllister, N Rhinehart, S Levine, Y Gal
International Conference on Machine Learning, 3145-3153, 2020
Directed-Info GAIL: Learning Hierarchical Policies from Unsegmented Demonstrations using Directed Information
A Sharma, M Sharma, N Rhinehart, KM Kitani
International Conference on Learning Representations (ICLR), 2019
Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting
X Weng, J Wang, S Levine, K Kitani, N Rhinehart
arXiv preprint arXiv:2003.08376, 2020
First-Person Activity Forecasting from Video with Online Inverse Reinforcement Learning
N Rhinehart, K Kitani
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018
Parrot: Data-driven behavioral priors for reinforcement learning
A Singh, H Liu, G Zhou, A Yu, N Rhinehart, S Levine
arXiv preprint arXiv:2011.10024, 2020
SMiRL: Surprise Minimizing RL in Dynamic Environments
G Berseth, D Geng, C Devin, N Rhinehart, C Finn, D Jayaraman, S Levine
arXiv preprint arXiv:1912.05510, 2019
Generative Hybrid Representations for Activity Forecasting with No-Regret Learning
J Guan, Y Yuan, KM Kitani, N Rhinehart
arXiv preprint arXiv:1904.06250, 2019
Conservative safety critics for exploration
H Bharadhwaj, A Kumar, N Rhinehart, S Levine, F Shkurti, A Garg
arXiv preprint arXiv:2010.14497, 2020
Human-Interactive Subgoal Supervision for Efficient Inverse Reinforcement Learning
X Pan, E Ohn-Bar, N Rhinehart, Y Xu, Y Shen, KM Kitani
Proceedings of the 17th International Conference on Autonomous Agents and …, 2018
Visual chunking: A list prediction framework for region-based object detection
N Rhinehart, J Zhou, M Hebert, JA Bagnell
2015 IEEE International Conference on Robotics and Automation (ICRA), 5448-5454, 2015
ViNG: Learning Open-World Navigation with Visual Goals
D Shah, B Eysenbach, G Kahn, N Rhinehart, S Levine
arXiv preprint arXiv:2012.09812, 2020
Traffic prediction with reparameterized pushforward policy for autonomous vehicles
P Vernaza, N Rhinehart
US Patent App. 16/266,713, 2019
Explore and Control with Adversarial Surprise
A Fickinger, N Jaques, S Parajuli, M Chang, N Rhinehart, G Berseth, ...
arXiv preprint arXiv:2107.07394, 2021