En-Yu Yang

ORCID: 0000-0003-0281-4086

According to our database, En-Yu Yang authored at least 14 papers between 2018 and 2024.

Bibliography

2024

2023
A 16-nm SoC for Noise-Robust Speech and NLP Edge AI Inference With Bayesian Sound Source Separation and Attention-Based DNNs.
IEEE J. Solid-State Circuits, February 2023

Trireme: Exploration of Hierarchical Multi-level Parallelism for Hardware Acceleration.
ACM Trans. Embed. Comput. Syst., 2023

2022
Trireme: Exploring Hierarchical Multi-Level Parallelism for Domain Specific Hardware Acceleration.
CoRR, 2022

OMU: A Probabilistic 3D Occupancy Mapping Accelerator for Real-time OctoMap at the Edge.
Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, 2022

2021
EdgeBERT: Sentence-Level Energy Optimizations for Latency-Aware Multi-Task NLP Inference.
Proceedings of the MICRO '21: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021

9.8 A 25mm² SoC for IoT Devices with 18ms Noise-Robust Speech-to-Text Latency via Bayesian Speech Denoising and Attention-Based Sequence-to-Sequence DNN Speech Recognition in 16nm FinFET.
Proceedings of the IEEE International Solid-State Circuits Conference, 2021

SM6: A 16nm System-on-Chip for Accurate and Noise-Robust Attention-Based NLP Applications: The 33rd Hot Chips Symposium - August 22-24, 2021.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021

FlexACC: A Programmable Accelerator with Application-Specific ISA for Flexible Deep Neural Network Inference.
Proceedings of the 32nd IEEE International Conference on Application-specific Systems, Architectures and Processors, 2021

2020
EdgeBERT: Optimizing On-Chip Inference for Multi-Task NLP.
CoRR, 2020

Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

2019
AdaptivFloat: A Floating-point based Data Type for Resilient Deep Learning Inference.
CoRR, 2019

2018
A 65nm 4Kb algorithm-dependent computing-in-memory SRAM unit-macro with 2.3ns and 55.8TOPS/W fully parallel product-sum operation for binary DNN edge processors.
Proceedings of the 2018 IEEE International Solid-State Circuits Conference, 2018

A 65nm 1Mb nonvolatile computing-in-memory ReRAM macro with sub-16ns multiply-and-accumulate for binary DNN AI edge processors.
Proceedings of the 2018 IEEE International Solid-State Circuits Conference, 2018
