Jinsu Lee
ORCID: 0000-0002-4335-9166
According to our database, Jinsu Lee authored at least 36 papers between 2016 and 2024.
Timeline: publication counts per year, 2016–2024.
Bibliography
2024
Proceedings of the SIGGRAPH Asia 2024 Posters, SA 2024, Tokyo, Japan, December 3-6, 2024
2023
A Mobile 3-D Object Recognition Processor With Deep-Learning-Based Monocular Depth Estimation.
IEEE Micro, 2023
DSPU: An Efficient Deep Learning-Based Dense RGB-D Data Acquisition With Sensor Fusion and 3-D Perception SoC.
IEEE J. Solid State Circuits, 2023
2022
DSPU: A 281.6mW Real-Time Depth Signal Processing Unit for Deep Learning-Based Dense RGB-D Data Acquisition with Depth Fusion and 3D Bounding Box Extraction in Mobile Platforms.
Proceedings of the IEEE International Solid-State Circuits Conference, 2022
DSPU: A 281.6mW Real-Time Deep Learning-Based Dense RGB-D Data Acquisition with Sensor Fusion and 3D Perception System-on-Chip.
Proceedings of the 2022 IEEE Hot Chips 34 Symposium, 2022
A Low-power and Real-time 3D Object Recognition Processor with Dense RGB-D Data Acquisition in Mobile Platforms.
Proceedings of the IEEE Symposium in Low-Power and High-Speed Chips, 2022
2021
Z-PIM: A Sparsity-Aware Processing-in-Memory Architecture With Fully Variable Weight Bit-Precision for Energy-Efficient Deep Neural Networks.
IEEE J. Solid State Circuits, 2021
DF-LNPU: A Pipelined Direct Feedback Alignment-Based Deep Neural Network Learning Processor for Fast Online Learning.
IEEE J. Solid State Circuits, 2021
OmniDRL: A 29.3 TFLOPS/W Deep Reinforcement Learning Processor with Dual-mode Weight Compression and On-chip Sparse Weight Transposer.
Proceedings of the 2021 Symposium on VLSI Circuits, Kyoto, Japan, June 13-19, 2021
A 13.7 TFLOPS/W Floating-point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory.
Proceedings of the 2021 Symposium on VLSI Circuits, Kyoto, Japan, June 13-19, 2021
A 3.6 TOPS/W Hybrid FP-FXP Deep Learning Processor with Outlier Compensation for Image-to-Image Application.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2021
An Energy-efficient Floating-Point DNN Processor using Heterogeneous Computing Architecture with Exponent-Computing-in-Memory.
Proceedings of the IEEE Hot Chips 33 Symposium, 2021
2020
The Hardware and Algorithm Co-Design for Energy-Efficient DNN Processor on Edge/Mobile Devices.
IEEE Trans. Circuits Syst., 2020
A Power-Efficient CNN Accelerator With Similar Feature Skipping for Face Recognition in Mobile Devices.
IEEE Trans. Circuits Syst. I Fundam. Theory Appl., 2020
SRNPU: An Energy-Efficient CNN-Based Super-Resolution Processor With Tile-Based Selective Super-Resolution in Mobile Devices.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2020
Z-PIM: An Energy-Efficient Sparsity Aware Processing-In-Memory Architecture with Fully-Variable Weight Precision.
Proceedings of the IEEE Symposium on VLSI Circuits, 2020
2019
A Low-Power Deep Neural Network Online Learning Processor for Real-Time Object Tracking Application.
IEEE Trans. Circuits Syst. I Regul. Pap., 2019
A Full HD 60 fps CNN Super Resolution Processor with Selective Caching based Layer Fusion for Mobile Devices.
Proceedings of the 2019 Symposium on VLSI Circuits, Kyoto, Japan, June 9-14, 2019
A 1.32 TOPS/W Energy Efficient Deep Neural Network Learning Processor with Direct Feedback Alignment based Heterogeneous Core Architecture.
Proceedings of the 2019 Symposium on VLSI Circuits, Kyoto, Japan, June 9-14, 2019
LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16.
Proceedings of the IEEE International Solid-State Circuits Conference, 2019
A 15.2 TOPS/W CNN Accelerator with Similar Feature Skipping for Face Recognition in Mobile Devices.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2019
2018
A 0.55 V 1.1 mW Artificial Intelligence Processor With On-Chip PVT Compensation for Autonomous Mobile Robots.
IEEE Trans. Circuits Syst. I Regul. Pap., 2018
DNPU: An Energy-Efficient Deep-Learning Processor with Heterogeneous Multi-Core Architecture.
IEEE Micro, 2018
A 9.02mW CNN-stereo-based real-time 3D hand-gesture recognition processor for smart mobile devices.
Proceedings of the 2018 IEEE International Solid-State Circuits Conference, 2018
A 141.4 mW Low-Power Online Deep Neural Network Training Processor for Real-time Object Tracking in Mobile Devices.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2018
Proceedings of the International Conference on Information and Communication Technology Convergence, 2018
Proceedings of the International Conference on Information and Communication Technology Convergence, 2018
2017
IEEE Trans. Very Large Scale Integr. Syst., 2017
14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks.
Proceedings of the 2017 IEEE International Solid-State Circuits Conference, 2017
Proceedings of the International Conference on Information and Communication Technology Convergence, 2017
An energy-efficient deep learning processor with heterogeneous multi-core architecture for convolutional neural networks and recurrent neural networks.
Proceedings of the 2017 IEEE Symposium in Low-Power and High-Speed Chips, 2017
2016
14.3 A 0.55V 1.1mW artificial-intelligence processor with PVT compensation for micro robots.
Proceedings of the 2016 IEEE International Solid-State Circuits Conference, 2016
Practical Residual Error of Interference Cancellation for Spread MSK with a Pseudo-Noise Preamble.
Proceedings of the 2016 IEEE Global Communications Conference, 2016
A 1.1 mW 32-thread artificial intelligence processor with 3-level transposition table and on-chip PVT compensation for autonomous mobile robots.
Proceedings of the 2016 IEEE Symposium in Low-Power and High-Speed Chips, 2016