Linyan Mei

Orcid: 0000-0001-8649-3923

According to our database, Linyan Mei authored at least 21 papers between 2019 and 2023.

Bibliography

2023
TinyVers: A Tiny Versatile System-on-Chip With State-Retentive eMRAM for ML Inference at the Extreme Edge.
IEEE J. Solid State Circuits, 2023

Stream: A Modeling Framework for Fine-grained Layer Fusion on Multi-core DNN Accelerators.
Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, 2023

ACCO: Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators.
Proceedings of the 41st IEEE International Conference on Computer Design, 2023

DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2023

SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators.
Proceedings of the 5th IEEE International Conference on Artificial Intelligence Circuits and Systems, 2023

2022
DeFiNES: A DSE Framework Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators.
Dataset, November 2022

Taxonomy and Benchmarking of Precision-Scalable MAC Arrays Under Enhanced DNN Dataflow Representation.
IEEE Trans. Circuits Syst. I Regul. Pap., 2022

Towards Heterogeneous Multi-core Accelerators Exploiting Fine-grained Scheduling of Layer-Fused Deep Neural Networks.
CoRR, 2022

CONVOLVE: Smart and seamless design of smart edge processors.
CoRR, 2022

TinyVers: A 0.8-17 TOPS/W, 1.7 μW-20 mW, Tiny Versatile System-on-chip with State-Retentive eMRAM for Machine Learning Inference at the Extreme Edge.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits 2022), 2022

A Uniform Latency Model for DNN Accelerators with Diverse Architectures and Dataflows.
Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, 2022

2021
ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators.
IEEE Trans. Computers, 2021

Hardware-Efficient Residual Neural Network Execution in Line-Buffer Depth-First Processing.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2021

Survey and Benchmarking of Precision-Scalable MAC Arrays for Embedded DNN Processing.
CoRR, 2021

Processor Architecture Optimization for Spatially Dynamic Neural Networks.
Proceedings of the 29th IFIP/IEEE International Conference on Very Large Scale Integration, 2021

LOMA: Fast Auto-Scheduling on DNN Accelerators through Loop-Order-based Memory Allocation.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021

Analyzing the Energy-Latency-Area-Accuracy Trade-off Across Contemporary Neural Networks.
Proceedings of the 3rd IEEE International Conference on Artificial Intelligence Circuits and Systems, 2021

2020
ZigZag: A Memory-Centric Rapid DNN Accelerator Design Space Exploration Framework.
CoRR, 2020

2019
Review and Benchmarking of Precision-Scalable Multiply-Accumulate Unit Architectures for Embedded Neural-Network Processing.
IEEE J. Emerg. Sel. Topics Circuits Syst., 2019

Sub-Word Parallel Precision-Scalable MAC Engines for Efficient Embedded DNN Inference.
Proceedings of the IEEE International Conference on Artificial Intelligence Circuits and Systems, 2019
