Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages. E Aghazadeh, M Fayyaz, Y Yaghoobzadeh. ACL 2022. Cited by 59.
DecompX: Explaining Transformers Decisions by Propagating Token Decomposition. A Modarressi, M Fayyaz, E Aghazadeh, Y Yaghoobzadeh, MT Pilehvar. ACL 2023. Cited by 23.
Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. M Fayyaz*, E Aghazadeh*, A Modarressi, H Mohebbi, MT Pilehvar. BlackboxNLP @ EMNLP 2021. Cited by 18.
BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning. M Fayyaz*, E Aghazadeh*, A Modarressi*, MT Pilehvar, Y Yaghoobzadeh, ... ENLSP @ NeurIPS 2022. Cited by 14.
From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries. H Wadhwa, R Seetharaman, S Aggarwal, R Ghosh, S Basu, S Srinivasan, ... arXiv preprint arXiv:2406.12824, 2024. Cited by 6.
Quantifying reliance on external information over parametric knowledge during Retrieval Augmented Generation (RAG) using mechanistic analysis. R Ghosh, R Seetharaman, H Wadhwa, S Aggarwal, S Basu, S Srinivasan, ... arXiv preprint arXiv:2410.00857, 2024.