Jinyu Bai

ORCID: 0000-0001-9369-0327

According to our database, Jinyu Bai authored at least 20 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
CIM²PQ: An Arraywise and Hardware-Friendly Mixed Precision Quantization Method for Analog Computing-In-Memory.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., July, 2024

CiTST-AdderNets: Computing in Toggle Spin Torques MRAM for Energy-Efficient AdderNets.
IEEE Trans. Circuits Syst. I Regul. Pap., March, 2024

CIMQ: A Hardware-Efficient Quantization Framework for Computing-In-Memory-Based Neural Network Accelerators.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., January, 2024

MixMixQ: Quantization with Mixed Bit-Sparsity and Mixed Bit-Width for CIM Accelerators.
Proceedings of the Great Lakes Symposium on VLSI 2024, 2024

Series-Parallel Hybrid SOT-MRAM Computing-in-Memory Macro with Multi-Method Modulation for High Area and Energy Efficiency.
Proceedings of the 61st ACM/IEEE Design Automation Conference, 2024

2023
Partial Sum Quantization for Computing-In-Memory-Based Neural Network Accelerator.
IEEE Trans. Circuits Syst. II Express Briefs, August, 2023

ES-MPQ: Evolutionary Search Enabled Mixed Precision Quantization Framework for Computing-in-Memory.
Proceedings of the 12th Non-Volatile Memory Systems and Applications Symposium, 2023

Exploring Bit-Level Sparsity for Partial Sum Quantization in Computing-In-Memory Accelerator.
Proceedings of the 12th Non-Volatile Memory Systems and Applications Symposium, 2023

Hierarchical Non-Structured Pruning for Computing-In-Memory Accelerators with Reduced ADC Resolution Requirement.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2023

Searching Tiny Neural Networks for Deployment on Embedded FPGA.
Proceedings of the 5th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2023

2022
SpinCIM: spin orbit torque memory for ternary neural networks based on the computing-in-memory architecture.
CCF Trans. High Perform. Comput., December, 2022

HD-CIM: Hybrid-Device Computing-In-Memory Structure Based on MRAM and SRAM to Reduce Weight Loading Energy of Neural Networks.
IEEE Trans. Circuits Syst. I Regul. Pap., 2022

Searching for Energy-Efficient Hybrid Adder-Convolution Neural Networks.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

2021
HSC: A Hybrid Spin/CMOS Logic Based In-Memory Engine with Area-Efficient Mapping Strategy.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2021

A 40nm 33.6Tops/W 8T-SRAM Computing-in-Memory Macro with DAC-less Spike-Pulse-Truncation Input and ADC-less Charge-Reservoir-Integrate-Counter Output.
Proceedings of the 2021 IEEE International Conference on Integrated Circuits, 2021

SpinLiM: Spin Orbit Torque Memory for Ternary Neural Networks Based on the Logic-in-Memory Architecture.
Proceedings of the Design, Automation & Test in Europe Conference & Exhibition, 2021

Tiny neural network search and implementation for embedded FPGA: a software-hardware co-design approach.
Proceedings of the IEEE Asian Solid-State Circuits Conference, 2021

2019
SR-WTA: Skyrmion Racing Winner-Takes-All Module for Spiking Neural Computing.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2019

Magnetic Skyrmion-Based Neural Recording System Design for Brain Machine Interface.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2019

2018
Emerging Neuromorphic Computing Paradigms Exploring Magnetic Skyrmions.
Proceedings of the 2018 IEEE Computer Society Annual Symposium on VLSI, 2018
