AdapLeR: Speeding up Inference by Adaptive Length Reduction. A Modarressi, H Mohebbi, MT Pilehvar. ACL 2022, pp. 1–15. Cited by 29.
Quantifying Context Mixing in Transformers. H Mohebbi, W Zuidema, G Chrupała, A Alishahi. EACL 2023, pp. 3378–3400. Cited by 26.
Exploring the Role of BERT Token Representations to Explain Sentence Probing Results. H Mohebbi, A Modarressi, MT Pilehvar. EMNLP 2021, pp. 792–806. Cited by 26.
Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations. M Fayyaz, E Aghazadeh, A Modarressi, H Mohebbi, MT Pilehvar. BlackboxNLP 2021, pp. 375–388. Cited by 18.
Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers. H Mohebbi, G Chrupała, W Zuidema, A Alishahi. EMNLP 2023, pp. 8249–8260. Cited by 11.
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers. A Langedijk, H Mohebbi, G Sarti, W Zuidema, J Jumelet. Findings of NAACL 2024, pp. 4764–4780. Cited by 9.
Transformer-specific Interpretability. H Mohebbi, J Jumelet, M Hanna, A Alishahi, W Zuidema. Proceedings of the 18th Conference of the European Chapter of the …, 2024. Cited by 6.
Nexus2D Team Description Paper. MA Esfahani, M Ghafouri, M Jamili, S Askari, R Etemadi, H Mohebbi, ... RoboCup 2017 Symposium and Competition. Cited by 5.
The Convexity of BERT: From Cause to Solution. H Mohebbi, SA Modarressi. 2020. Cited by 1.
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP. Y Belinkov, N Kim, J Jumelet, H Mohebbi, A Mueller, H Chen. 2024.
How Language Models Prioritize Contextual Grammatical Cues? H Amirzadeh, A Alishahi, H Mohebbi. BlackboxNLP 2024, pp. 315–336.
Disentangling Textual and Acoustic Features of Neural Speech Representations. H Mohebbi, G Chrupała, W Zuidema, A Alishahi, I Titov. arXiv preprint arXiv:2410.03037, 2024.