Sung-En Chang

According to our database, Sung-En Chang authored at least 16 papers between 2017 and 2024.

Bibliography

2024
Digital Avatars: Framework Development and Their Evaluation.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

SDA: Low-Bit Stable Diffusion Acceleration on Edge FPGAs.
Proceedings of the 34th International Conference on Field-Programmable Logic and Applications, 2024

SuperFlow: A Fully-Customized RTL-to-GDS Design Automation Flow for Adiabatic Quantum-Flux-Parametron Superconducting Circuits.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2024

2023
ESRU: Extremely Low-Bit and Hardware-Efficient Stochastic Rounding Unit Design for Low-Bit DNN Training.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2023

2022
FILM-QNN: Efficient FPGA Acceleration of Deep Neural Networks with Intra-Layer, Mixed-Precision Quantization.
Proceedings of the 2022 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '22), 2022

You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding.
Proceedings of the Computer Vision - ECCV 2022, 2022

Hardware-efficient stochastic rounding unit design for DNN training: late breaking results.
Proceedings of the 59th ACM/IEEE Design Automation Conference (DAC '22), 2022

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization framework for FPGA.
CoRR, 2021

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm.
CoRR, 2021

RMSMP: A Novel Deep Neural Network Quantization Framework with Row-wise Mixed Schemes and Multiple Precisions.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Mix and Match: A Novel FPGA-Centric Deep Neural Network Quantization Framework.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2021

2020
MSP: An FPGA-Specific Mixed-Scheme, Multi-Precision Deep Neural Network Quantization Framework.
CoRR, 2020

2018
Learning Tensor Latent Features.
CoRR, 2018

MixLasso: Generalized Mixed Regression via Convex Atomic-Norm Regularization.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

2017
Latent Feature Lasso.
Proceedings of the 34th International Conference on Machine Learning, 2017

