James A. Michaelov

Orcid: 0000-0003-2913-1103

According to our database, James A. Michaelov authored at least 13 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of five.

Bibliography

2024
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics.
CoRR, 2024

2023
So Cloze Yet So Far: N400 Amplitude Is Better Predicted by Distributional Information Than Human Predictability Judgements.
IEEE Trans. Cogn. Dev. Syst., September, 2023

Do Large Language Models Know What Humans Know?
Cogn. Sci., July, 2023

Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models.
CoRR, 2023

Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

Emergent Inabilities? Inverse Scaling Over the Course of Pretraining.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Can Peanuts Fall in Love with Distributional Semantics?
Proceedings of the 45th Annual Meeting of the Cognitive Science Society, 2023

Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Collateral facilitation in humans and language models.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

Do Language Models Make Human-like Predictions about the Coreferents of Italian Anaphoric Zero Pronouns?
Proceedings of the 29th International Conference on Computational Linguistics, 2022

Distributional Semantics Still Can't Account for Affordances.
Proceedings of the 44th Annual Meeting of the Cognitive Science Society, 2022

2021
Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?
Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, 2021

2020
How well does surprisal explain N400 amplitude under different experimental conditions?
Proceedings of the 24th Conference on Computational Natural Language Learning, 2020
