Tom McCoy

Affiliations:
  • Princeton University, NJ, USA
  • Johns Hopkins University, Baltimore, MD, USA (former)
  • Yale University, New Haven, CT, USA (former)


According to our database, Tom McCoy authored at least 34 papers between 2017 and 2024.

Timeline

[Chart: number of publications per year, 2017–2024, broken down by type: book, in proceedings, article, PhD thesis, dataset, other]


Bibliography

2024
Minimization of Boolean Complexity in In-Context Concept Learning.
CoRR, 2024

When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1.
CoRR, 2024

Is In-Context Learning a Type of Gradient-Based Learning? Evidence from the Inverse Frequency Effect in Structural Priming.
CoRR, 2024

modeLing: A Novel Dataset for Testing Linguistic Reasoning in Language Models.
CoRR, 2024

Distilling Symbolic Priors for Concept Learning into Neural Networks.
CoRR, 2024

ModeLing: A Novel Dataset for Testing Linguistic Reasoning in Language Models.
Proceedings of the 6th Workshop on Research in Computational Linguistic Typology and Multilingual NLP, 2024

Deciphering the Factors Influencing the Efficacy of Chain-of-Thought: Probability, Memorization, and Noisy Reasoning.
Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

2023
How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN.
Trans. Assoc. Comput. Linguistics, 2023

Deep de Finetti: Recovering Topic Distributions from Large Language Models.
CoRR, 2023

Bayes in the age of intelligent machines.
CoRR, 2023

Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve.
CoRR, 2023

Modeling rapid language learning by distilling Bayesian priors into artificial neural networks.
CoRR, 2023

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages.
CoRR, 2022

Neurocompositional Computing: From the Central Paradox of Cognition to a New Generation of AI Systems.
AI Mag., 2022

2021
Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar.
Proceedings of the 43rd Annual Meeting of the Cognitive Science Society, 2021

2020
Does Syntax Need to Grow on Trees? Sources of Hierarchical Inductive Bias in Sequence-to-Sequence Networks.
Trans. Assoc. Comput. Linguistics, 2020

Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis.
Proceedings of the 28th International Conference on Computational Linguistics, 2020

Universal linguistic inductive biases via meta-learning.
Proceedings of the 42nd Annual Meeting of the Cognitive Science Society, 2020

Discovering the Compositional Structure of Vector Representations with Role Learning Networks.
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020

BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance.
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2020

Syntactic Data Augmentation Increases Robustness to Inference Heuristics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Probing What Different NLP Tasks Teach Machines about Function Word Comprehension.
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics, 2019

What do you learn from context? Probing for sentence structure in contextualized word representations.
Proceedings of the 7th International Conference on Learning Representations, 2019

RNNs implicitly implement tensor-product representations.
Proceedings of the 7th International Conference on Learning Representations, 2019

Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019

Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference.
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019

2018
Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling.
CoRR, 2018

Non-entailed subsequences as a challenge for natural language inference.
CoRR, 2018

Parser combinators for Tigrinya and Oromo morphology.
Proceedings of the Eleventh International Conference on Language Resources and Evaluation, 2018

Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

2017
Linguistically Rich Vector Representations of Supertags for TAG Parsing.
Proceedings of the 13th International Workshop on Tree Adjoining Grammars and Related Formalisms, 2017

TAG Parsing with Neural Networks and Vector Representations of Supertags.
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017

