William Merrill

According to our database, William Merrill authored at least 42 papers between 2001 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
What Formal Languages Can Transformers Express? A Survey.
Trans. Assoc. Comput. Linguistics, 2024

Let's Think Dot by Dot: Hidden Computation in Transformer Language Models.
CoRR, 2024

OLMo: Accelerating the Science of Language Models.
CoRR, 2024

How Language Model Hallucinations Can Snowball.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

The Illusion of State in State-Space Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

The Expressive Power of Transformers with Chain of Thought.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Evaluating n-Gram Novelty of Language Models Using Rusty-DAWG.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Can You Learn Semantics Through Next-Word Prediction? The Case of Entailment.
Findings of the Association for Computational Linguistics: ACL 2024, 2024
2023
Transparency Helps Reveal When Language Models Learn Meaning.
Trans. Assoc. Comput. Linguistics, 2023

The Parallelism Tradeoff: Limitations of Log-Precision Transformers.
Trans. Assoc. Comput. Linguistics, 2023

Transformers as Recognizers of Formal Languages: A Survey on Expressivity.
CoRR, 2023

A Tale of Two Circuits: Grokking as Competition of Sparse and Dense Subnetworks.
CoRR, 2023

A Logic for Expressing Log-Precision Transformers.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Formal languages and neural models for learning on sequences.
Proceedings of the International Conference on Grammatical Inference, 2023

Formal Languages and the NLP Black Box.
Developments in Language Theory - 27th International Conference, 2023

2022
Saturated Transformers are Constant-Depth Threshold Circuits.
Trans. Assoc. Comput. Linguistics, 2022

Transformers Implement First-Order Logic with Majority Quantifiers.
CoRR, 2022

Log-Precision Transformers are Constant-Depth Uniform Threshold Circuits.
CoRR, 2022

Extracting Finite Automata from RNNs Using State Merging.
CoRR, 2022

Entailment Semantics Can Be Extracted from an Ideal Language Model.
Proceedings of the 26th Conference on Computational Natural Language Learning, 2022

ReCLIP: A Strong Zero-Shot Baseline for Referring Expression Comprehension.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand?
Trans. Assoc. Comput. Linguistics, 2021

On the Power of Saturated Transformers: A View from Circuit Complexity.
CoRR, 2021

Formal Language Theory Meets Modern NLP.
CoRR, 2021

Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Competency Problems: On Finding and Removing Artifacts in Language Data.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

2020
Parameter Norm Growth During Training of Transformers.
CoRR, 2020

CORD-19: The COVID-19 Open Research Dataset.
CoRR, 2020

On the Linguistic Capacity of Real-Time Counter Automata.
CoRR, 2020

A Formal Hierarchy of RNN Architectures.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020

2019
Sequential Neural Networks as Automata.
CoRR, 2019

Finding Syntactic Representations in Neural Stacks.
CoRR, 2019

Finding Hierarchical Structure in Neural Stacks Using Unsupervised Parsing.
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2019

Detecting Syntactic Change Using a Neural Part-of-Speech Tagger.
Proceedings of the 1st International Workshop on Computational Approaches to Historical Language Change, 2019

2018
End-to-End Graph-Based TAG Parsing with Neural Networks.
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018

Context-Free Transductions with Neural Stacks.
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Using Machine Learning to Understand Transfer from First Language to Second Language.
Proceedings of the 40th Annual Meeting of the Cognitive Science Society, 2018

2010
Where is the return on investment in wireless sensor networks?
IEEE Wirel. Commun., 2010

2004
Methods for Scalable Self-Assembly of Ad Hoc Wireless Sensor Networks.
IEEE Trans. Mob. Comput., 2004

Dynamic Networking and Smart Sensing Enable Next-Generation Landmines.
IEEE Pervasive Comput., 2004

2001
Preserving and Protecting the Freedom to Learn Online.
Proceedings of WebNet 2001, 2001