Jaehoon Heo

Orcid: 0000-0003-1742-4275

According to our database, Jaehoon Heo authored at least 10 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Timeline

[Chart: publications per year — 2021: 1, 2022: 2, 2023: 3, 2024: 3, 2025: 1]

Bibliography

2025
EXION: Exploiting Inter- and Intra-Iteration Output Sparsity for Diffusion Models.
CoRR, January 2025

2024
SP-PIM: A Super-Pipelined Processing-In-Memory Accelerator With Local Error Prediction for Area/Energy-Efficient On-Device Learning.
IEEE J. Solid State Circuits, August 2024

BLESS: Bandwidth and Locality Enhanced SMEM Seeding Acceleration for DNA Sequencing.
Proceedings of the 51st ACM/IEEE Annual International Symposium on Computer Architecture, 2024

A 38.5TOPS/W Point Cloud Neural Network Processor with Virtual Pillar and Quadtree-based Workload Management for Real-Time Outdoor BEV Detection.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2024

2023
T-PIM: An Energy-Efficient Processing-in-Memory Accelerator for End-to-End On-Device Training.
IEEE J. Solid State Circuits, March 2023

SP-PIM: A 22.41TFLOPS/W, 8.81Epochs/Sec Super-Pipelined Processing-In-Memory Accelerator with Local Error Prediction for On-Device Learning.
Proceedings of the 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023

PRIMO: A Full-Stack Processing-in-DRAM Emulation Framework for Machine Learning Workloads.
Proceedings of the IEEE/ACM International Conference on Computer Aided Design, 2023

2022
Design of Processing-in-Memory With Triple Computational Path and Sparsity Handling for Energy-Efficient DNN Training.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2022

T-PIM: A 2.21-to-161.08TOPS/W Processing-In-Memory Accelerator for End-to-End On-Device Training.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2022

2021
Z-PIM: A Sparsity-Aware Processing-in-Memory Architecture With Fully Variable Weight Bit-Precision for Energy-Efficient Deep Neural Networks.
IEEE J. Solid State Circuits, 2021

