Jackson Petty

ORCID: 0000-0002-9492-0144

According to our database, Jackson Petty authored at least 13 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
How Does Code Pretraining Affect Language Model Task Performance?
CoRR, 2024

The Impact of Depth on Compositional Generalization in Transformer Language Models.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

In-context Learning Generalizes, But Not Always Robustly: The Case of Syntax.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

The Illusion of State in State-Space Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
How Abstract Is Linguistic Generalization in Large Language Models? Experiments with Argument Structure.
Trans. Assoc. Comput. Linguistics, 2023

GPQA: A Graduate-Level Google-Proof Q&A Benchmark.
CoRR, 2023

Debate Helps Supervise Unreliable Experts.
CoRR, 2023

The Impact of Depth and Width on Transformer Language Model Generalization.
CoRR, 2023

(QA)²: Question Answering with Questionable Assumptions.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
(QA)²: Question Answering with Questionable Assumptions.
CoRR, 2022

Do Language Models Learn Position-Role Mappings?
CoRR, 2022

2021
Transformers Generalize Linearly.
CoRR, 2021

2020
Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora.
CoRR, 2020
