Shinhaeng Kang

ORCID: 0000-0002-0888-4547

According to our database, Shinhaeng Kang authored at least 10 papers between 2021 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
The Breakthrough Memory Solutions for Improved Performance on LLM Inference.
IEEE Micro, 2024

MPC-Wrapper: Fully Harnessing the Potential of Samsung Aquabolt-XL HBM2-PIM on FPGAs.
Proceedings of the 32nd IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, 2024

2023
Samsung PIM/PNM for Transformer Based AI: Energy Efficiency on PIM/PNM Cluster.
Proceedings of the 35th IEEE Hot Chips Symposium, 2023

2022
Near-Memory Processing in Action: Accelerating Personalized Recommendation With AxDIMM.
IEEE Micro, 2022

Aquabolt-XL HBM2-PIM, LPDDR5-PIM With In-Memory Processing, and AXDIMM With Acceleration Buffer.
IEEE Micro, 2022

An FPGA-based RNN-T Inference Accelerator with PIM-HBM.
Proceedings of the 2022 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA '22), 2022

An Architecture of Sparse Length Sum Accelerator in AxDIMM.
Proceedings of the 4th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2022

2021
25.4 A 20nm 6GB Function-In-Memory DRAM, Based on HBM2 with a 1.2TFLOPS Programmable Computing Unit Using Bank-Level Parallelism, for Machine Learning Applications.
Proceedings of the IEEE International Solid-State Circuits Conference, 2021

Hardware Architecture and Software Stack for PIM Based on Commercial DRAM Technology: Industrial Product.
Proceedings of the 48th ACM/IEEE Annual International Symposium on Computer Architecture, 2021

Aquabolt-XL: Samsung HBM2-PIM with in-memory processing for ML accelerators and beyond.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021
