Weilin Zhao

ORCID: 0000-0001-8016-1952

According to our database, Weilin Zhao authored at least 29 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Ultra-Broadband Ultraviolet-Visible Light-Short Wavelength Infrared InGaAs Focal Plane Arrays via n-InP Contact Layer Removal.
Sensors, March, 2024

Enabling Real-Time Conversations with Minimal Training Costs.
CoRR, 2024

Configurable Foundation Models: Building LLMs from a Modular Perspective.
CoRR, 2024

MiniCPM-V: A GPT-4V Level MLLM on Your Phone.
CoRR, 2024

Seq1F1B: Efficient Sequence-Level Pipeline Parallelism for Large Language Model Training.
CoRR, 2024

MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies.
CoRR, 2024

BurstAttention: An Efficient Distributed Attention Framework for Extremely Long Sequences.
CoRR, 2024

Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models.
CoRR, 2024

Ouroboros: Speculative Decoding with Large Model Enhanced Drafting.
CoRR, 2024

Predicting Emergent Abilities with Infinite Resolution Evaluation.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Ouroboros: Generating Longer Drafts Phrase by Phrase for Faster Speculative Decoding.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

Beyond the Turn-Based Game: Enabling Real-Time Conversations with Duplex Models.
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024

2023
Parameter-efficient fine-tuning of large-scale pre-trained language models.
Nat. Mac. Intell., March, 2023

Unlock Predictable Scaling from Emergent Abilities.
CoRR, 2023

CPET: Effective Parameter-Efficient Tuning for Compressed Large Language Models.
CoRR, 2023

Tool Learning with Foundation Models.
CoRR, 2023

H3T: Efficient Integration of Memory Optimization and Parallelism for Large-scale Transformer Training.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2023

2022
PTR: Prompt Tuning with Rules for Text Classification.
AI Open, January, 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models.
CoRR, 2022

Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

BMCook: A Task-agnostic Compression Toolkit for Big Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

BMInf: An Efficient Toolkit for Big Model Inference and Tuning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022

OpenPrompt: An Open-source Framework for Prompt-learning.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022

2019
Half-Duplex Two-Way AF Relaying Network with Energy Harvesting.
Proceedings of the Wireless and Satellite Systems, 2019

Energy Efficiency Optimization in OFDM based Two-Way DF Relaying Networks with Energy Harvesting.
Proceedings of the 15th International Wireless Communications & Mobile Computing Conference, 2019

2018
OFDM Based SWIPT for Two-Way AF Relaying Network.
IEEE Access, 2018

OFDM Based SWIPT in a Two-Way Relaying Network.
Proceedings of the Machine Learning and Intelligent Communications, 2018

2015
Service innovation development through China's new urbanization: An economic policy perspective.
Proceedings of the 2015 IEEE International Conference on Industrial Engineering and Engineering Management, 2015

