Yingyan (Celine) Lin

Orcid: 0000-0001-5946-203X

Affiliations:
  • Georgia Institute of Technology, Atlanta, GA, USA


According to our database, Yingyan (Celine) Lin authored at least 116 papers between 2012 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
AutoAI2C: An Automated Hardware Generator for DNN Acceleration on Both FPGA and ASIC.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., October, 2024

An Investigation on Hardware-Aware Vision Transformer Scaling.
ACM Trans. Embed. Comput. Syst., May, 2024

Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture.
CoRR, 2024

MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation.
CoRR, 2024

EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting.
CoRR, 2024

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization.
CoRR, 2024

Omni-Recon: Towards General-Purpose Neural Radiance Fields for Versatile 3D Applications.
CoRR, 2024

Towards Cognitive AI Systems: a Survey and Prospective on Neuro-Symbolic AI.
CoRR, 2024

Towards Cognitive AI Systems: Workload and Characterization of Neuro-Symbolic AI.
Proceedings of the IEEE International Symposium on Performance Analysis of Systems and Software, 2024

Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Omni-Recon: Harnessing Image-Based Rendering for General-Purpose Neural Radiance Fields.
Proceedings of the Computer Vision - ECCV 2024, 2024

3D-Carbon: An Analytical Carbon Modeling Tool for 3D and 2.5D Integrated Circuits.
Proceedings of the 61st ACM/IEEE Design Automation Conference, 2024

EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Unified Compression and Adaptive Layer Voting.
Proceedings of the 61st ACM/IEEE Design Automation Conference, 2024

MixRT: Mixed Neural Representations For Real-Time NeRF Rendering.
Proceedings of the International Conference on 3D Vision, 2024

2023
SmartDeal: Remodeling Deep Network Weights for Efficient Inference and Training.
IEEE Trans. Neural Networks Learn. Syst., October, 2023

NASA+: Neural Architecture Search and Acceleration for Multiplication-Reduced Hybrid Networks.
IEEE Trans. Circuits Syst. I Regul. Pap., 2023

NetDistiller: Empowering Tiny Deep Learning via In Situ Distillation.
IEEE Micro, 2023

EyeCoD: Eye Tracking System Acceleration via FlatCam-Based Algorithm and Hardware Co-Design.
IEEE Micro, 2023

Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning.
CoRR, 2023

A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware.
CoRR, 2023

Memory-Based Computing for Energy-Efficient AI: Grand Challenges.
Proceedings of the 31st IFIP/IEEE International Conference on Very Large Scale Integration, 2023

ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Instant-3D: Instant Neural Radiance Field Training Towards On-Device AR/VR 3D Reconstruction.
Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023

Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design.
Proceedings of the 50th Annual International Symposium on Computer Architecture, 2023

Master-ASR: Achieving Multilingual Scalability and Low-Resource Adaptation in ASR with Modular Learning.
Proceedings of the International Conference on Machine Learning, 2023

NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations.
Proceedings of the International Conference on Machine Learning, 2023

GPT4AIGChip: Towards Next-Generation AI Accelerator Design Automation via Large Language Models.
Proceedings of the IEEE/ACM International Conference on Computer Aided Design, 2023

ERSAM: Neural Architecture Search for Energy-Efficient and Real-Time Social Ambiance Measurement.
Proceedings of the IEEE International Conference on Acoustics, 2023

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2023

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2023

Instant-NeRF: Instant On-Device Neural Radiance Field Training via Algorithm-Accelerator Co-Designed Near-Memory Processing.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

NetBooster: Empowering Tiny Deep Learning By Standing on the Shoulders of Deep Giants.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

Hint-Aug: Drawing Hints from Foundation Vision Transformers towards Boosted Few-shot Parameter-Efficient Tuning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Auto-CARD: Efficient and Robust Codec Avatar Driving for Real-time Mobile Telepresence.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
DANCE: DAta-Network Co-optimization for Efficient Segmentation Model Training and Inference.
ACM Trans. Design Autom. Electr. Syst., 2022

Max-Affine Spline Insights Into Deep Network Pruning.
Trans. Mach. Learn. Res., 2022

RT-RCG: Neural Network and Accelerator Search Towards Effective and Real-time ECG Reconstruction from Intracardiac Electrograms.
ACM J. Emerg. Technol. Comput. Syst., 2022

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference.
CoRR, 2022

LDP: Learnable Dynamic Precision for Efficient Deep Neural Network Training and Inference.
CoRR, 2022

e-G2C: A 0.14-to-8.31 µJ/Inference NN-based Processor with Continuous On-chip Adaptation for Anomaly Detection and ECG Conversion from EGM.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits 2022), 2022

i-FlatCam: A 253 FPS, 91.49 µJ/Frame Ultra-Compact Intelligent Lensless Camera for Real-Time and Efficient Eye Tracking in VR/AR.
Proceedings of the IEEE Symposium on VLSI Technology and Circuits (VLSI Technology and Circuits 2022), 2022

Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling.
Proceedings of the Fifth Conference on Machine Learning and Systems, 2022

EyeCoD: eye tracking system acceleration via flatcam-based algorithm & accelerator co-design.
Proceedings of the ISCA '22: The 49th Annual International Symposium on Computer Architecture, New York, New York, USA, June 18, 2022

ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks.
Proceedings of the International Conference on Machine Learning, 2022

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks.
Proceedings of the International Conference on Machine Learning, 2022

PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Proceedings of the Tenth International Conference on Learning Representations, 2022

NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks.
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, 2022

RT-NeRF: Real-Time On-Device Neural Radiance Fields Towards Immersive AR/VR Rendering.
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, 2022

GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design.
Proceedings of the IEEE International Symposium on High-Performance Computer Architecture, 2022

A Framework for Neural Network Inference on FPGA-Centric SmartNICs.
Proceedings of the 32nd International Conference on Field-Programmable Logic and Applications, 2022

FCsN: A FPGA-Centric SmartNIC Framework for Neural Networks.
Proceedings of the 30th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, 2022

SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning.
Proceedings of the Computer Vision - ECCV 2022, 2022

INGeo: Accelerating Instant Neural Scene Reconstruction with Noisy Geometry Priors.
Proceedings of the Computer Vision - ECCV 2022 Workshops, 2022

Contrastive quant: quantization makes stronger contrastive learning.
Proceedings of the DAC '22: 59th ACM/IEEE Design Automation Conference, San Francisco, California, USA, July 10, 2022

MIA-Former: Efficient and Robust Vision Transformers via Multi-Grained Input-Adaptation.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

Early-Bird GCNs: Graph-Network Co-optimization towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles.
IEEE Trans. Mob. Comput., 2021

ASTRO: A System for Off-grid Networked Drone Sensing Missions.
ACM Trans. Internet Things, 2021

Practical Attacks on Deep Neural Networks by Memory Trojaning.
IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 2021

Introduction to the Special Issue on Hardware and Algorithms for Efficient Machine Learning - Part 2.
ACM J. Emerg. Technol. Comput. Syst., 2021

Introduction of Special Issue on Hardware and Algorithms for Efficient Machine Learning-Part 1.
ACM J. Emerg. Technol. Comput. Syst., 2021

FBNetV5: Neural Architecture Search for Multiple Tasks in One Run.
CoRR, 2021

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark.
CoRR, 2021

GEBT: Drawing Early-Bird Tickets in Graph Convolutional Network Training.
CoRR, 2021

Max-Affine Spline Insights Into Deep Network Pruning.
CoRR, 2021

SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training.
CoRR, 2021

Locality Sensitive Teaching.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization.
Proceedings of the MICRO '21: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency.
Proceedings of the MICRO '21: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021

DIAN: Differentiable Accelerator-Network Co-Search Towards Maximal DNN Efficiency.
Proceedings of the IEEE/ACM International Symposium on Low Power Electronics and Design, 2021

Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators.
Proceedings of the 38th International Conference on Machine Learning, 2021

Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inference.
Proceedings of the 38th International Conference on Machine Learning, 2021

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark.
Proceedings of the 9th International Conference on Learning Representations, 2021

CPT: Efficient Deep Neural Network Training via Cyclic Precision.
Proceedings of the 9th International Conference on Learning Representations, 2021

SACoD: Sensor Algorithm Co-Design Towards Efficient CNN-powered Intelligent PhlatCam.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency.
Proceedings of the IEEE/ACM International Conference On Computer Aided Design, 2021

O-HAS: Optical Hardware Accelerator Search for Boosting Both Acceleration Performance and Development Speed.
Proceedings of the IEEE/ACM International Conference On Computer Aided Design, 2021

Toward reconfigurable kernel datapaths with learned optimizations.
Proceedings of the HotOS '21: Workshop on Hot Topics in Operating Systems, 2021

A3C-S: Automated Agent Accelerator Co-Search towards Efficient Deep Reinforcement Learning.
Proceedings of the 58th ACM/IEEE Design Automation Conference, 2021

InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks.
Proceedings of the 58th ACM/IEEE Design Automation Conference, 2021

2020
Dual Dynamic Inference: Enabling More Efficient, Adaptive, and Controllable Deep Inference.
IEEE J. Sel. Top. Signal Process., 2020

Auto-Agent-Distiller: Towards Efficient Deep Reinforcement Learning Agents via Neural Architecture Search.
CoRR, 2020

DNA: Differentiable Network-Accelerator Co-Search.
CoRR, 2020

AdaDeep: A Usage-Driven, Automated Deep Model Compression Framework for Enabling Ubiquitous Intelligent Mobiles.
CoRR, 2020

Bringing Powerful Machine-Learning Systems to Daily-Life Devices via Algorithm-Hardware Co-Design.
Proceedings of the 2020 International Symposium on VLSI Design, Automation and Test, 2020

ShiftAddNet: A Hardware-Inspired Deep Network.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

A New MRAM-Based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2020

SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation.
Proceedings of the 47th ACM/IEEE Annual International Symposium on Computer Architecture, 2020

Timely: Pushing Data Movements And Interfaces In Pim Accelerators Towards Local And In Time Domain.
Proceedings of the 47th ACM/IEEE Annual International Symposium on Computer Architecture, 2020

AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks.
Proceedings of the 37th International Conference on Machine Learning, 2020

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks.
Proceedings of the 8th International Conference on Learning Representations, 2020

DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures.
Proceedings of the 2020 IEEE International Conference on Acoustics, 2020

AutoDNNchip: An Automated DNN Chip Predictor and Builder for Both FPGAs and ASICs.
Proceedings of the FPGA '20: The 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 2020

HALO: Hardware-Aware Learning to Optimize.
Proceedings of the Computer Vision - ECCV 2020, 2020

Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
E2-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Saving.
CoRR, 2019

Drawing early-bird tickets: Towards more efficient training of deep networks.
CoRR, 2019

E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Live Demonstration: Bringing Powerful Deep Learning into Daily-Life Devices (Mobiles and FPGAs) Via Deep k-Means.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2019

2018
A Rank Decomposed Statistical Error Compensation Technique for Robust Convolutional Neural Networks in the Near Threshold Voltage Regime.
J. Signal Process. Syst., 2018

ASTRO: Autonomous, Sensing, and Tetherless netwoRked drOnes.
Proceedings of the 4th ACM Workshop on Micro Aerial Vehicle Networks, 2018

On-Demand Deep Model Compression for Mobile Devices: A Usage-Driven Model Selection Framework.
Proceedings of the 16th Annual International Conference on Mobile Systems, 2018

Energy-efficient Convolutional Neural Networks via Statistical Error Compensated Near Threshold Computing.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2018

Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Energy-efficient systems for information transfer and processing
PhD thesis, 2017

PredictiveNet: An energy-efficient convolutional neural network via zero prediction.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2017

2016
A Study of BER-Optimal ADC-Based Receiver for Serial Links.
IEEE Trans. Circuits Syst. I Regul. Pap., 2016

Variation-Tolerant Architectures for Convolutional Neural Networks in the Near Threshold Voltage Regime.
Proceedings of the 2016 IEEE International Workshop on Signal Processing Systems, 2016

2012
A fully automated technique for constructing FSM abstractions of non-ideal latches in communication systems.
Proceedings of the 2012 IEEE International Conference on Acoustics, 2012

