Marzieh S. Tahaei

According to our database, Marzieh S. Tahaei authored at least 17 papers between 2012 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of five.

Bibliography

2024
Efficient Citer: Tuning Large Language Models for Enhanced Answer Quality and Verification.
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2024, 2024

QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: EMNLP 2024, 2024

Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2024, 2024

2023
Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT).
CoRR, 2023

SortedNet, a Place for Every Network and Every Network in its Place: Towards a Generalized Solution for Training Many-in-One Neural Networks.
CoRR, 2023

On the Transferability of Whisper-based Representations for "In-the-Wild" Cross-Task Downstream Speech Applications.
CoRR, 2023

Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation.
Proceedings of the Findings of the Association for Computational Linguistics: EACL 2023, 2023

Towards Low-Cost Learning-based Camera ISP via Unrolled Optimization.
Proceedings of the 20th Conference on Robots and Vision, 2023

2022
KronA: Parameter Efficient Tuning with Kronecker Adapter.
CoRR, 2022

SeKron: A Decomposition Method Supporting Many Factorization Structures.
CoRR, 2022

Integer Fine-tuning of Transformer-based Models.
CoRR, 2022

Is Integer Arithmetic Enough for Deep Learning Training?
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

KroneckerBERT: Significant Compression of Pre-trained Language Models Through Kronecker Decomposition and Knowledge Distillation.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Kronecker Decomposition for GPT Compression.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2022

Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
KroneckerBERT: Learning Kronecker Decomposition for Pre-trained Language Models via Knowledge Distillation.
CoRR, 2021

2012
Nonlinear Unsupervised Feature Learning: How Local Similarities Lead to Global Coding.
Proceedings of the 12th IEEE International Conference on Data Mining Workshops, 2012
