Hsinyu Tsai

According to our database, Hsinyu Tsai authored at least 21 papers between 2017 and 2024.

Bibliography

2024
Pipeline Gradient-based Model Training on Analog In-memory Accelerators.
CoRR, 2024

2023
A Heterogeneous and Programmable Compute-In-Memory Accelerator Architecture for Analog-AI Using Dense 2-D Mesh.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2023

An analog-AI chip for energy-efficient speech recognition and transcription.
Nature, 2023

Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference.
CoRR, 2023

Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators.
CoRR, 2023

Phase Change Memory-based Hardware Accelerators for Deep Neural Networks (Invited).
Proceedings of the IEEE Symposium on VLSI Technology and Circuits, 2023

Architectures and Circuits for Analog-memory-based Hardware Accelerators for Deep Neural Networks (Invited).
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

Impact of Phase-Change Memory Drift on Energy Efficiency and Accuracy of Analog Compute-in-Memory Deep Learning Inference (Invited).
Proceedings of the IEEE International Reliability Physics Symposium, 2023

AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing.
Proceedings of the IEEE International Conference on Edge Computing and Communications, 2023

2022
Analog-memory-based 14nm Hardware Accelerator for Dense Deep Neural Networks including Transformers.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022

2021
Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices.
Frontiers in Computational Neuroscience, 2021

Circuit Techniques for Efficient Acceleration of Deep Neural Network Inference with Analog-AI (Invited).
Proceedings of the IEEE International Symposium on Circuits and Systems, 2021

2020
Optimization of Analog Accelerators for Deep Neural Networks Inference.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2020

Neuromorphic Computing with Phase Change, Device Reliability, and Variability Challenges.
Proceedings of the IEEE International Reliability Physics Symposium, 2020

Accelerating Deep Neural Networks with Analog Memory Devices.
Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems, 2020

2019
AI hardware acceleration with analog memory: Microarchitectures for low energy at high speed.
IBM Journal of Research and Development, 2019

Analog-to-Digital Conversion With Reconfigurable Function Mapping for Neural Networks Activation Function Acceleration.
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019

2018
Equivalent-accuracy accelerated neural-network training using analogue memory.
Nature, 2018

2017
Neuromorphic devices and architectures for next-generation cognitive computing.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2017

Improved Deep Neural Network Hardware-Accelerators Based on Non-Volatile-Memory: The Local Gains Technique.
Proceedings of the IEEE International Conference on Rebooting Computing, 2017