Anej Svete

According to our database, Anej Svete authored at least 21 papers between 2020 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Can Transformers Learn n-gram Language Models?
CoRR, 2024

A Fundamental Trade-off in Aligned Language Models and its Relation to Sampling Adaptors.
CoRR, 2024

What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages.
CoRR, 2024

On Affine Homotopy between Language Encoders.
CoRR, 2024

Transformers Can Represent n-gram Language Models.
CoRR, 2024

A Theoretical Result on the Inductive Bias of RNN Language Models.
CoRR, 2024

Lower Bounds on the Expressivity of Recurrent Neural Language Models.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

Transformers Can Represent n-gram Language Models.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

The Role of n-gram Smoothing in the Age of Neural Networks.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

A Probability-Quality Trade-off in Aligned Language Models and its Relation to Sampling Adaptors.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Can Transformers Learn n-gram Language Models?
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

An L* Algorithm for Deterministic Weighted Regular Languages.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

On Efficiently Representing Regular Languages as RNNs.
Findings of the Association for Computational Linguistics, 2024

On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Formal Aspects of Language Modeling.
CoRR, 2023

A Geometric Notion of Causal Probing.
CoRR, 2023

Recurrent Neural Language Models as Probabilistic Finite-state Automata.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

On the Representational Capacity of Recurrent Neural Language Models.
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023

2022
Algorithms for Acyclic Weighted Finite-State Automata with Failure Arcs.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2020
It is not just about the Melody: How Europe Votes for its Favorite Songs.
CoRR, 2020
