Hongcheng Gao

According to our database, Hongcheng Gao authored at least 14 papers between 2022 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?
CoRR, 2024

AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models.
CoRR, 2024

Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities.
CoRR, 2024

Universal Prompt Optimizer for Safe Text-to-Image Generation.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

Emu: Generative Pretraining in Multimodality.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Efficient Detection of LLM-generated Texts with a Bayesian Surrogate Model.
Findings of the Association for Computational Linguistics, 2024

2023
Generative Pretraining in Multimodality.
CoRR, 2023

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks.
CoRR, 2023

Revisiting Out-of-distribution Robustness in NLP: Benchmark, Analysis, and LLMs Evaluations.
CoRR, 2023

Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Exploring the Universal Vulnerability of Prompt-based Learning Paradigm.
Findings of the Association for Computational Linguistics: NAACL 2022, 2022

Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022
