Yulhwa Kim
ORCID: 0000-0003-3735-821X
Affiliations:
- Pohang University of Science and Technology, South Korea
According to our database, Yulhwa Kim authored at least 24 papers between 2018 and 2024.
Bibliography
2024
Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for Large Language Models.
CoRR, 2024
L4Q: Parameter Efficient Quantization-Aware Training on Large Language Models via LoRA-wise LSQ.
CoRR, 2024
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks.
Proceedings of the Forty-first International Conference on Machine Learning, 2024
FIGNA: Integer Unit-Based Accelerator Design for FP-INT GEMM Preserving Numerical Accuracy.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2024
2023
Leveraging Early-Stage Robustness in Diffusion Models for Efficient and High-Quality Image Synthesis.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Winning Both the Accuracy of Floating Point Activation and the Simplicity of Integer Arithmetic.
Proceedings of the Eleventh International Conference on Learning Representations, 2023
2022
BitBlade: Energy-Efficient Variable Bit-Precision Hardware Accelerator for Quantized Neural Networks.
IEEE J. Solid State Circuits, 2022
Extreme Partial-Sum Quantization for Analog Computing-In-Memory Neural Network Accelerators.
ACM J. Emerg. Technol. Comput. Syst., 2022
2021
Maximizing Parallel Activation of Word-Lines in MRAM-Based Binary Neural Network Accelerators.
IEEE Access, 2021
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2021
Energy-efficient charge sharing-based 8T2C SRAM in-memory accelerator for binary neural networks in 28nm CMOS.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2021
Single RRAM Cell-based In-Memory Accelerator Architecture for Binary Neural Networks.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021
2020
Proceedings of the ISLPED '20: ACM/IEEE International Symposium on Low Power Electronics and Design, 2020
Algorithm/Hardware Co-Design for In-Memory Neural Network Computing with Minimal Peripheral Circuit Overhead.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020
A 44.1TOPS/W Precision-Scalable Accelerator for Quantized Neural Networks in 28nm CMOS.
Proceedings of the 2020 IEEE Custom Integrated Circuits Conference, 2020
2019
Monolithically Integrated RRAM- and CMOS-Based In-Memory Computing Optimizations for Efficient Deep Learning.
IEEE Micro, 2019
CoRR, 2019
Proceedings of the 2019 Symposium on VLSI Circuits, Kyoto, Japan, June 9-14, 2019
Effect of Device Variation on Mapping Binary Neural Network to Memristor Crossbar Array.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2019
In-memory batch-normalization for resistive memory based binary neural network hardware.
Proceedings of the 24th Asia and South Pacific Design Automation Conference, 2019
2018
CoRR, 2018
Proceedings of the International Symposium on Low Power Electronics and Design, 2018
Input-Splitting of Large Neural Networks for Power-Efficient Accelerator with Resistive Crossbar Memory Array.
Proceedings of the International Symposium on Low Power Electronics and Design, 2018