Jaap Jumelet
According to our database, Jaap Jumelet authored at least 21 papers between 2018 and 2024.
Bibliography
2024
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence.
CoRR, 2024
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2024, 2024
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
Proceedings of the Findings of the Association for Computational Linguistics, 2024
2023
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models.
Trans. Mach. Learn. Res., 2023
ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation.
CoRR, 2023
Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2023, 2023
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue.
Proceedings of the 27th Conference on Computational Natural Language Learning, 2023
Proceedings of the Findings of the Association for Computational Linguistics: ACL 2023, 2023
2022
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations.
Trans. Assoc. Comput. Linguistics, 2022
The Birth of Bias: A case study on the evolution of gender bias in an English language model.
CoRR, 2022
2021
Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations.
CoRR, 2021
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021
Proceedings of the Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021
Proceedings of Deep Learning Inside Out: The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, 2021
2020
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020
2019
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment.
Proceedings of the 23rd Conference on Computational Natural Language Learning, 2019
2018
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items.
Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, 2018