Hosein Mohebbi
AdapLeR: Speeding up Inference by Adaptive Length Reduction
A Modarressi, H Mohebbi, MT Pilehvar
ACL 2022, 1–15, 2022
Cited by 29
Quantifying Context Mixing in Transformers
H Mohebbi, W Zuidema, G Chrupała, A Alishahi
EACL 2023, 3378–3400, 2023
Cited by 26
Exploring the Role of BERT Token Representations to Explain Sentence Probing Results
H Mohebbi, A Modarressi, MT Pilehvar
EMNLP 2021, 792–806, 2021
Cited by 26
Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations
M Fayyaz, E Aghazadeh, A Modarressi, H Mohebbi, MT Pilehvar
BlackboxNLP 2021, 375–388, 2021
Cited by 18
Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers
H Mohebbi, G Chrupała, W Zuidema, A Alishahi
EMNLP 2023, 8249–8260, 2023
Cited by 11
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
A Langedijk, H Mohebbi, G Sarti, W Zuidema, J Jumelet
Findings of NAACL 2024, 4764–4780, 2024
Cited by 9
Transformer-specific Interpretability
H Mohebbi, J Jumelet, M Hanna, A Alishahi, W Zuidema
Proceedings of the 18th Conference of the European Chapter of the …, 2024
Cited by 6
Nexus2D Team Description Paper
MA Esfahani, M Ghafouri, M Jamili, S Askari, R Etemadi, H Mohebbi, ...
RoboCup 2017 Symposium and Competition, 2017
Cited by 5
The Convexity of BERT: From Cause to Solution
H Mohebbi, SA Modarressi
2020
Cited by 1
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Y Belinkov, N Kim, J Jumelet, H Mohebbi, A Mueller, H Chen
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting …, 2024
How Language Models Prioritize Contextual Grammatical Cues?
H Amirzadeh, A Alishahi, H Mohebbi
BlackboxNLP 2024, 315–336, 2024
Disentangling Textual and Acoustic Features of Neural Speech Representations
H Mohebbi, G Chrupała, W Zuidema, A Alishahi, I Titov
arXiv preprint arXiv:2410.03037, 2024