Yonggan Fu

ORCID: 0000-0002-7483-2921

According to our database, Yonggan Fu authored at least 55 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture.
CoRR, 2024

MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation.
CoRR, 2024

Omni-Recon: Towards General-Purpose Neural Radiance Fields for Versatile 3D Applications.
CoRR, 2024

Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI.
CoRR, 2024

Towards Cognitive AI Systems: Workload and Characterization of Neuro-Symbolic AI.
Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, 2024

Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Omni-Recon: Harnessing Image-Based Rendering for General-Purpose Neural Radiance Fields.
Proceedings of the Computer Vision - ECCV 2024, 2024

2023
SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
IEEE Trans. Neural Networks Learn. Syst., October 2023

NetDistiller: Empowering Tiny Deep Learning via In Situ Distillation.
IEEE Micro, 2023

EyeCoD: Eye Tracking System Acceleration via FlatCam-Based Algorithm and Hardware Co-Design.
IEEE Micro, 2023

Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning.
CoRR, 2023

Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design.
Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023

Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning.
Proceedings of the International Conference on Machine Learning, 2023

NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations.
Proceedings of the International Conference on Machine Learning, 2023

GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models.
Proceedings of the IEEE/ACM International Conference on Computer Aided Design, 2023

NetBooster: Empowering Tiny Deep Learning By Standing on the Shoulders of Deep Giants.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

Hint-Aug: Drawing Hints from Foundation Vision Transformers towards Boosted Few-shot Parameter-Efficient Tuning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-time Mobile Telepresence.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
DANCE: DAta-Network Co-optimization for Efficient Segmentation Model Training and Inference.
ACM Trans. Design Autom. Electr. Syst., 2022

RT-RCG: Neural Network and Accelerator Search Towards Effective and Real-time ECG Reconstruction from Intracardiac Electrograms.
ACM J. Emerg. Technol. Comput. Syst., 2022

LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference.
CoRR, 2022

e-G2C: A 0.14-to-8.31 µJ/Inference NN-based Processor with Continuous On-chip Adaptation for Anomaly Detection and ECG Conversion from EGM.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits 2022), 2022

i-FlatCam: A 253 FPS, 91.49 µJ/Frame Ultra-Compact Intelligent Lensless Camera for Real-Time and Efficient Eye Tracking in VR/AR.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits 2022), 2022

Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

EyeCoD: eye tracking system acceleration via flatcam-based algorithm & accelerator co-design.
Proceedings of the ISCA '22: The 49th Annual International Symposium on Computer Architecture, New York, New York, USA, June 18, 2022

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks.
Proceedings of the International Conference on Machine Learning, 2022

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks.
Proceedings of the International Conference on Machine Learning, 2022

Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Proceedings of the Tenth International Conference on Learning Representations, 2022

Contrastive quant: quantization makes stronger contrastive learning.
Proceedings of the DAC '22: 59th ACM/IEEE Design Automation Conference, San Francisco, California, USA, July 10, 2022

MIA-Former: Efficient and Robust Vision Transformers via Multi-Grained Input-Adaptation.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

Early-Bird GCNs: Graph-Network Co-optimization towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark.
CoRR, 2021

SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training.
CoRR, 2021

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency.
Proceedings of the MICRO '21: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021

DIAN: Differentiable Accelerator-Network Co-Search Towards Maximal DNN Efficiency.
Proceedings of the IEEE/ACM International Symposium on Low Power Electronics and Design, 2021

Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators.
Proceedings of the 38th International Conference on Machine Learning, 2021

Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference.
Proceedings of the 38th International Conference on Machine Learning, 2021

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark.
Proceedings of the 9th International Conference on Learning Representations, 2021

CPT: Efficient Deep Neural Network Training via Cyclic Precision.
Proceedings of the 9th International Conference on Learning Representations, 2021

SACoD: Sensor Algorithm Co-Design Towards Efficient CNN-powered Intelligent PhlatCam.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency.
Proceedings of the IEEE/ACM International Conference On Computer Aided Design, 2021

O-HAS: Optical Hardware Accelerator Search for Boosting Both Acceleration Performance and Development Speed.
Proceedings of the IEEE/ACM International Conference On Computer Aided Design, 2021

A3C-S: Automated Agent Accelerator Co-Search towards Efficient Deep Reinforcement Learning.
Proceedings of the 58th ACM/IEEE Design Automation Conference, 2021

InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks.
Proceedings of the 58th ACM/IEEE Design Automation Conference, 2021

2020
Auto-Agent-Distiller: Towards Efficient Deep Reinforcement Learning Agents via Neural Architecture Search.
CoRR, 2020

DNA: Differentiable Network-Accelerator Co-Search.
CoRR, 2020

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation.
Proceedings of the 47th ACM/IEEE Annual International Symposium on Computer Architecture, 2020

AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks.
Proceedings of the 37th International Conference on Machine Learning, 2020

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks.
Proceedings of the 8th International Conference on Learning Representations, 2020

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Drawing early-bird tickets: Towards more efficient training of deep networks.
CoRR, 2019

Integrating Facial Images, Speeches and Time for Empathy Prediction.
Proceedings of the 14th IEEE International Conference on Automatic Face & Gesture Recognition, 2019

