Jinshan Yue

ORCID: 0000-0001-8234-7400

According to our database, Jinshan Yue authored at least 65 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
A Multichiplet Computing-in-Memory Architecture Exploration Framework Based on Various CIM Devices.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., December, 2024

Hardware Implementation of Next Generation Reservoir Computing with RRAM-Based Hybrid Digital-Analog System.
Adv. Intell. Syst., October, 2024

A 28-nm Energy-Efficient Sparse Neural Network Processor for Point Cloud Applications Using Block-Wise Online Neighbor Searching.
IEEE J. Solid State Circuits, September, 2024

A 28-nm Floating-Point Computing-in-Memory Processor Using Intensive-CIM Sparse-Digital Architecture.
IEEE J. Solid State Circuits, August, 2024

Fully Binarized Graph Convolutional Network Accelerator Based on In-Memory Computing with Resistive Random-Access Memory.
Adv. Intell. Syst., July, 2024

2T2R RRAM-Based In-Memory Hyperdimensional Computing Encoder for Spatio-Temporal Signal Processing.
IEEE Trans. Circuits Syst. II Express Briefs, May, 2024

An Energy-Efficient Computing-in-Memory NN Processor With Set-Associate Blockwise Sparsity and Ping-Pong Weight Update.
IEEE J. Solid State Circuits, May, 2024

Write-Verify-Free MLC RRAM Using Nonbinary Encoding for AI Weight Storage at the Edge.
IEEE Trans. Very Large Scale Integr. Syst., February, 2024

LSAC: A Low-Power Adder Tree for Digital Computing-in-Memory by Sparsity and Approximate Circuits Co-Design.
IEEE Trans. Circuits Syst. II Express Briefs, February, 2024

A 28-nm Computing-in-Memory-Based Super-Resolution Accelerator Incorporating Macro-Level Pipeline and Texture/Algebraic Sparsity.
IEEE Trans. Circuits Syst. I Regul. Pap., February, 2024

A 41.7TOPS/W@INT8 Computing-in-Memory Processor with Zig-Zag Backbone-Systolic CIM and Block/Self-Gating CAM for NN/Recommendation Applications.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits 2024, 2024

34.9 A Flash-SRAM-ADC-Fused Plastic Computing-in-Memory Macro for Learning in Neural Networks in a Standard 14nm FinFET Process.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

34.7 A 28nm 2.4Mb/mm² 6.9-16.3TOPS/mm² eDRAM-LUT-Based Digital-Computing-in-Memory Macro with In-Memory Encoding and Refreshing.
Proceedings of the IEEE International Solid-State Circuits Conference, 2024

A 2T P-Channel Logic Flash Cell for Reconfigurable Interconnection in Chiplet-Based Computing-In-Memory Accelerators.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2024

IG-CRM: Area/Energy-Efficient IGZO-Based Circuits and Architecture Design for Reconfigurable CIM/CAM Applications.
Proceedings of the 61st ACM/IEEE Design Automation Conference, 2024

A 28nm 8928Kb/mm²-Weight-Density Hybrid SRAM/ROM Compute-in-Memory Architecture Reducing >95% Weight Loading from DRAM.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2024

2023
A 40-nm SONOS Digital CIM Using Simplified LUT Multiplier and Continuous Sample-Hold Sense Amplifier for AI Edge Inference.
IEEE Trans. Very Large Scale Integr. Syst., December, 2023

P³ ViT: A CIM-Based High-Utilization Architecture With Dynamic Pruning and Two-Way Ping-Pong Macro for Vision Transformer.
IEEE Trans. Circuits Syst. I Regul. Pap., December, 2023

A 28-nm RRAM Computing-in-Memory Macro Using Weighted Hybrid 2T1R Cell Array and Reference Subtracting Sense Amplifier for AI Edge Inference.
IEEE J. Solid State Circuits, October, 2023

Recent progress in InGaZnO FETs for high-density 2T0C DRAM applications.
Sci. China Inf. Sci., October, 2023

A Scalable Small-Footprint Time-Space-Pipelined Architecture for Reservoir Computing.
IEEE Trans. Circuits Syst. II Express Briefs, August, 2023

SAMBA: Single-ADC Multi-Bit Accumulation Compute-in-Memory Using Nonlinearity-Compensated Fully Parallel Analog Adder Tree.
IEEE Trans. Circuits Syst. I Regul. Pap., July, 2023

A Weight-Reload-Eliminated Compute-in-Memory Accelerator for 60 fps 4K Super-Resolution.
IEEE Trans. Circuits Syst. II Express Briefs, March, 2023

An RRAM-Based Digital Computing-in-Memory Macro With Dynamic Voltage Sense Amplifier and Sparse-Aware Approximate Adder Tree.
IEEE Trans. Circuits Syst. II Express Briefs, February, 2023

A 5.6-89.9TOPS/W Heterogeneous Computing-in-Memory SoC with High-Utilization Producer-Consumer Architecture and High-Frequency Read-Free CIM Macro.
Proceedings of the 2023 IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits), 2023

A 28nm 16.9-300TOPS/W Computing-in-Memory Processor Supporting Floating-Point NN Inference/Training with Intensive-CIM Sparse-Digital Architecture.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

A 28nm 2D/3D Unified Sparse Convolution Accelerator with Block-Wise Neighbor Searcher for Large-Scaled Voxel-Based Point Cloud Network.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

A 28nm 38-to-102-TOPS/W 8b Multiply-Less Approximate Digital SRAM Compute-In-Memory Macro for Neural-Network Inference.
Proceedings of the IEEE International Solid-State Circuits Conference, 2023

A User-Friendly Fast and Accurate Simulation Framework for Non-Ideal Factors in Computing-in-Memory Architecture.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

A 28nm 1.07TFLOPS/mm² Dynamic-Precision Training Processor with Online Dynamic Execution and Multi-Level-Aligned Block-FP Processing.
Proceedings of the IEEE Custom Integrated Circuits Conference, 2023

A Demonstration Platform for Large-Scaled Point Cloud Network Based on 28nm 2D/3D Unified Sparse Convolution Accelerator.
Proceedings of the 5th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2023

2022
Bit-Aware Fault-Tolerant Hybrid Retraining and Remapping Schemes for RRAM-Based Computing-in-Memory Systems.
IEEE Trans. Circuits Syst. II Express Briefs, 2022

Accuracy Optimization With the Framework of Non-Volatile Computing-In-Memory Systems.
IEEE Trans. Circuits Syst. I Regul. Pap., 2022

PACA: A Pattern Pruning Algorithm and Channel-Fused High PE Utilization Accelerator for CNNs.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2022

STICKER-IM: A 65 nm Computing-in-Memory NN Processor Using Block-Wise Sparsity Optimization and Inter/Intra-Macro Data Reuse.
IEEE J. Solid State Circuits, 2022

A 65-nm Energy-Efficient Interframe Data Reuse Neural Network Accelerator for Video Applications.
IEEE J. Solid State Circuits, 2022

A 65nm 8b-Activation 8b-Weight SRAM-Based Charge-Domain Computing-in-Memory Macro Using A Fully-Parallel Analog Adder Network and A Single-ADC Interface.
CoRR, 2022

C-RRAM: A Fully Input Parallel Charge-Domain RRAM-based Computing-in-Memory Design with High Tolerance for RRAM Variations.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2022

Toward Low-Bit Neural Network Training Accelerator by Dynamic Group Accumulation.
Proceedings of the 27th Asia and South Pacific Design Automation Conference, 2022

Sparsity-Aware Non-Volatile Computing-In-Memory Macro with Analog Switch Array and Low-Resolution Current-Mode ADC.
Proceedings of the 27th Asia and South Pacific Design Automation Conference, 2022

RRAM Computing-in-Memory Using Transient Charge Transferring for Low-Power and Small-Latency AI Edge Inference.
Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems, 2022

2021
STICKER-T: An Energy-Efficient Neural Network Processor Using Block-Circulant Algorithm and Unified Frequency-Domain Acceleration.
IEEE J. Solid State Circuits, 2021

High Area/Energy Efficiency RRAM CNN Accelerator with Pattern-Pruning-Based Weight Mapping Scheme.
Proceedings of the 10th IEEE Non-Volatile Memory Systems and Applications Symposium, 2021

A 2.75-to-75.9TOPS/W Computing-in-Memory NN Processor Supporting Set-Associate Block-Wise Zero Skipping and Ping-Pong CIM with Simultaneous Computation and Weight Updating.
Proceedings of the IEEE International Solid-State Circuits Conference, 2021

Challenges and Opportunities of Energy-Efficient CIM SoC Design for Edge AI Devices.
Proceedings of the 18th International SoC Design Conference, 2021

A Sort-Less FPGA-Based Non-Maximum Suppression Accelerator using Multi-Thread Computing and Binary Max Engine for Object Detection.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2021

A Non-Volatile Computing-In-Memory Framework With Margin Enhancement Based CSA and Offset Reduction Based ADC.
Proceedings of the ASPDAC '21: 26th Asia and South Pacific Design Automation Conference, 2021

Block-Circulant Neural Network Accelerator Featuring Fine-Grained Frequency-Domain Quantization and Reconfigurable FFT Modules.
Proceedings of the ASPDAC '21: 26th Asia and South Pacific Design Automation Conference, 2021

2020
STICKER: An Energy-Efficient Multi-Sparsity Compatible Accelerator for Convolutional Neural Networks in 65-nm CMOS.
IEEE J. Solid State Circuits, 2020

High Area/Energy Efficiency RRAM CNN Accelerator with Kernel-Reordering Weight Mapping Scheme Based on Pattern Pruning.
CoRR, 2020

14.3 A 65nm Computing-in-Memory-Based CNN Processor with 2.9-to-35.8TOPS/W System Energy Efficiency Using Dynamic-Sparsity Performance-Scaling Architecture and Energy-Efficient Inter/Intra-Macro Data Reuse.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

14.2 A 65nm 24.7µJ/Frame 12.3mW Activation-Similarity-Aware Convolutional Neural Network Video Processor Using Hybrid Precision, Inter-Frame Data Reuse and Mixed-Bit-Width Difference-Frame Data Codec.
Proceedings of the 2020 IEEE International Solid-State Circuits Conference, 2020

RL Based Network Accelerator Compiler for Joint Compression Hyper-Parameter Search.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2020

High PE Utilization CNN Accelerator with Channel Fusion Supporting Pattern-Compressed Sparse Neural Networks.
Proceedings of the 57th ACM/IEEE Design Automation Conference, 2020

2019
A 3.77TOPS/W Convolutional Neural Network Processor With Priority-Driven Kernel Optimization.
IEEE Trans. Circuits Syst. II Express Briefs, 2019

A 65nm 0.39-to-140.3TOPS/W 1-to-12b Unified Neural Network Processor Using Block-Circulant-Enabled Transpose-Domain Acceleration with 8.1× Higher TOPS/mm² and 6T HBST-TRAM-Based 2D Data-Reuse Architecture.
Proceedings of the IEEE International Solid-State Circuits Conference, 2019

A Sparse-Adaptive CNN Processor with Area/Performance balanced N-Way Set-Associate PE Arrays Assisted by a Collision-Aware Scheduler.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2019

AERIS: area/energy-efficient 1T2R ReRAM based processing-in-memory neural network system-on-a-chip.
Proceedings of the 24th Asia and South Pacific Design Automation Conference, 2019

Accelerating CNN-RNN Based Machine Health Monitoring on FPGA.
Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems, 2019

2018
PATH: Performance-Aware Task Scheduling for Energy-Harvesting Nonvolatile Processors.
IEEE Trans. Very Large Scale Integr. Syst., 2018

Sticker: A 0.41-62.1 TOPS/W 8Bit Neural Network Processor with Multi-Sparsity Compatible Convolution Arrays and Online Tuning Acceleration for Fully Connected Layers.
Proceedings of the 2018 IEEE Symposium on VLSI Circuits, 2018

2017
Data Backup Optimization for Nonvolatile SRAM in Energy Harvesting Sensor Nodes.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2017

CORAL: Coarse-grained reconfigurable architecture for Convolutional Neural Networks.
Proceedings of the 2017 IEEE/ACM International Symposium on Low Power Electronics and Design, 2017

CNN-based pattern recognition on nonvolatile IoT platform for smart ultraviolet monitoring: (Invited paper).
Proceedings of the 2017 IEEE/ACM International Conference on Computer-Aided Design, 2017

2016
Performance-aware task scheduling for energy harvesting nonvolatile processors considering power switching overhead.
Proceedings of the 53rd Annual Design Automation Conference, 2016
