Chaofan Tao

ORCID: 0000-0002-6093-0854

According to our database, Chaofan Tao authored at least 29 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., May, 2024

FAT: Frequency-Aware Transformation for Bridging Full-Precision and Low-Precision Deep Representations.
IEEE Trans. Neural Networks Learn. Syst., February, 2024

Source-free domain adaptation with unrestricted source hypothesis.
Pattern Recognit., 2024

UNComp: Uncertainty-Aware Long-Context Compressor for Efficient Large Language Model Inference.
CoRR, 2024

NAVERO: Unlocking Fine-Grained Semantics for Video-Language Compositionality.
CoRR, 2024

Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies.
CoRR, 2024

D2O: Dynamic Discriminative Operations for Efficient Generative Inference of Large Language Models.
CoRR, 2024

Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models.
CoRR, 2024

Electrocardiogram Instruction Tuning for Report Generation.
CoRR, 2024

CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

RoboCodeX: Multimodal Code Generation for Robotic Behavior Synthesis.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

2023
A Spectral Perspective towards Understanding and Improving Adversarial Robustness.
CoRR, 2023

DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference.
CoRR, 2023

UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers.
Proceedings of the International Conference on Machine Learning, 2023

Structured Pruning for Efficient Generative Pre-trained Language Models.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Frequency Regularization for Improving Adversarial Robustness.
CoRR, 2022

What Do Adversarially trained Neural Networks Focus: A Fourier Domain-based Study.
CoRR, 2022

ODG-Q: Robust Quantization via Online Domain Generalization.
Proceedings of the 26th International Conference on Pattern Recognition, 2022

LiteVL: Efficient Video-Language Learning with Enhanced Spatial-Temporal Modeling.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Compression of Generative Pre-trained Language Models via Quantization.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
A case-based interpretable deep learning model for classification of mass lesions in digital mammography.
Nat. Mach. Intell., 2021

Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning.
CoRR, 2021

IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography.
CoRR, 2021

FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation.
CoRR, 2021

LiteGT: Efficient and Lightweight Graph Transformers.
Proceedings of CIKM '21: The 30th ACM International Conference on Information and Knowledge Management, Virtual Event, Queensland, Australia, November 1, 2021

BATMANN: A Binarized-All-Through Memory-Augmented Neural Network for Efficient In-Memory Computing.
Proceedings of the 14th IEEE International Conference on ASIC, 2021

2020
Dynamic and Static Context-Aware LSTM for Multi-agent Motion Prediction.
Proceedings of the Computer Vision - ECCV 2020, 2020

2019
MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation.
CoRR, 2019

MR-NET: Exploiting Mutual Relation for Visual Relationship Detection.
Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, 2019
