Huihong Shi

Orcid: 0000-0002-7845-0154

According to our database, Huihong Shi authored at least 21 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
P²-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer.
IEEE Trans. Very Large Scale Integr. Syst., September, 2024

NASA-F: FPGA-Oriented Search and Acceleration for Multiplication-Reduced Hybrid Networks.
IEEE Trans. Circuits Syst. I Regul. Pap., January, 2024

NASH: Neural Architecture and Accelerator Search for Multiplication-Reduced Hybrid Models.
CoRR, 2024

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization.
CoRR, 2024

Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer.
CoRR, 2024

An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2024

Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

A Computationally Efficient Neural Video Compression Accelerator Based on a Sparse CNN-Transformer Hybrid Network.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2024

2023
Intelligent Typography: Artistic Text Style Transfer for Complex Texture and Structure.
IEEE Trans. Multim., 2023

NASA+: Neural Architecture Search and Acceleration for Multiplication-Reduced Hybrid Networks.
IEEE Trans. Circuits Syst. I Regul. Pap., 2023

S²R: Exploring a Double-Win Transformer-Based Framework for Ideal and Blind Super-Resolution.
CoRR, 2023

ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Instant-3D: Instant Neural Radiance Field Training Towards On-Device AR/VR 3D Reconstruction.
Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023

S²R: Exploring a Double-Win Transformer-Based Framework for Ideal and Blind Super-Resolution.
Proceedings of the Artificial Neural Networks and Machine Learning, 2023

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2023

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2023

2022
Max-Affine Spline Insights Into Deep Network Pruning.
Trans. Mach. Learn. Res., 2022

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks.
Proceedings of the International Conference on Machine Learning, 2022

NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks.
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, 2022

2021
LITNet: A Light-weight Image Transform Net for Image Style Transfer.
Proceedings of the International Joint Conference on Neural Networks, 2021

2019
Passive Source Localization Using Compressive Sensing.
Sensors, 2019
