Souvik Kundu

ORCID: 0000-0002-3533-9405

Affiliations:
  • University of Southern California, Los Angeles, CA, USA


According to our database, Souvik Kundu authored at least 56 papers between 2018 and 2024.

Bibliography

2024
Understanding the Performance and Estimating the Cost of LLM Fine-Tuning.
CoRR, 2024

MaskVD: Region Masking for Efficient Video Object Detection.
CoRR, 2024

CiMNet: Towards Joint Optimization for DNN Architecture and Configuration for Compute-In-Memory Hardware.
CoRR, 2024

Linearizing Models for Efficient yet Robust Private Inference.
CoRR, 2024

Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking Neural Networks: from Algorithms to Technology.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2024

Sensi-Bert: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient Language Model.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2024

LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2024, 2024

Block Selective Reprogramming for On-device Training of Vision Transformers.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

RLNet: Robust Linearized Networks for Efficient Private Inference.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

DIA: Diffusion based Inverse Network Attack on Collaborative Inference.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 2024

2023
Overcoming Resource Constraints in Federated Learning: Large Models Can Be Trained with only Weak Clients.
Trans. Mach. Learn. Res., 2023

Revisiting Sparsity Hunting in Federated Learning: Why does Sparsity Consensus Matter?
Trans. Mach. Learn. Res., 2023

Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT.
CoRR, 2023

C2PI: An Efficient Crypto-Clear Two-Party Neural Network Private Inference.
CoRR, 2023

FLOAT: Fast Learnable Once-for-All Adversarial Training for Tunable Trade-off between Accuracy and Robustness.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023

Self-Attentive Pooling for Efficient Deep Learning.
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2023

ViTA: A Vision Transformer Inference Accelerator for Edge Applications.
Proceedings of the IEEE International Symposium on Circuits and Systems, 2023

Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

InstaTune: Instantaneous Neural Architecture Search During Fine-Tuning.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

FireFly: A Synthetic Dataset for Ember Detection in Wildfire.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

SAL-ViT: Towards Latency Efficient Private Inference on ViT using Selective Attention Search with a Learnable Softmax Approximation.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

RNA-ViT: Reduced-Dimension Approximate Normalized Attention Vision Transformers for Latency Efficient Private Inference.
Proceedings of the IEEE/ACM International Conference on Computer Aided Design, 2023

Quantpipe: Applying Adaptive Post-Training Quantization For Distributed Transformer Pipelines In Dynamic Edge Environments.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

Sparse Mixture Once-for-all Adversarial Training for Efficient in-situ Trade-off between Accuracy and Robustness of DNNs.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

In-Sensor & Neuromorphic Computing Are all You Need for Energy Efficient Computer Vision.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

Technology-Circuit-Algorithm Tri-Design for Processing-in-Pixel-in-Memory (P2M).
Proceedings of the Great Lakes Symposium on VLSI 2023, 2023

C2PI: An Efficient Crypto-Clear Two-Party Neural Network Private Inference.
Proceedings of the 60th ACM/IEEE Design Automation Conference, 2023

Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Toward Adversary-aware Non-iterative Model Pruning through Dynamic Network Rewiring of DNNs.
ACM Trans. Embed. Comput. Syst., September, 2022

Federated Learning of Large Models at the Edge via Principal Sub-Model Training.
CoRR, 2022

Federated Sparse Training: Lottery Aware Model Compression for Resource Constrained Edge.
CoRR, 2022

A Fast and Efficient Conditional Learning for Tunable Trade-Off between Accuracy and Robustness.
CoRR, 2022

P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained TinyML Applications.
CoRR, 2022

P2M-DeTrack: Processing-in-Pixel-in-Memory for Energy-efficient and Real-Time Multi-Object Detection and Tracking.
Proceedings of the 30th IFIP/IEEE International Conference on Very Large Scale Integration, 2022

PipeEdge: Pipeline Parallelism for Large-Scale Model Inference on Heterogeneous Edge Devices.
Proceedings of the 25th Euromicro Conference on Digital System Design, 2022

BMPQ: Bit-Gradient Sensitivity-Driven Mixed-Precision Quantization of DNNs from Scratch.
Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, 2022

2021
Pipeline Parallelism for Inference on Heterogeneous Edge Computing.
CoRR, 2021

Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression.
CoRR, 2021

HYPER-SNN: Towards Energy-efficient Quantized Deep Spiking Neural Networks for Hyperspectral Image Classification.
CoRR, 2021

Spike-Thrift: Towards Energy-Efficient Deep Spiking Neural Networks by Limiting Spiking Activity via Attention-Guided Compression.
Proceedings of the IEEE Winter Conference on Applications of Computer Vision, 2021

Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding.
Proceedings of the International Joint Conference on Neural Networks, 2021

HIRE-SNN: Harnessing the Inherent Robustness of Energy-Efficient Deep Spiking Neural Networks by Training with Crafted Input Noise.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

AttentionLite: Towards Efficient Self-Attention Models for Vision.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2021

DNR: A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs.
Proceedings of the ASPDAC '21: 26th Asia and South Pacific Design Automation Conference, 2021

2020
Pre-Defined Sparsity for Low-Complexity Convolutional Neural Networks.
IEEE Trans. Computers, 2020

Attention-based Image Upsampling.
CoRR, 2020

A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs.
CoRR, 2020

qBSA: Logic Design of a 32-bit Block-Skewed RSFQ Arithmetic Logic Unit.
CoRR, 2020

2019
Metastability-Resilient Synchronization FIFO for SFQ Logic.
CoRR, 2019

A Pre-defined Sparse Kernel Based Convolution for Deep CNNs.
CoRR, 2019

CSrram: Area-Efficient Low-Power Ex-Situ Training Framework for Memristive Neuromorphic Circuits Based on Clustered Sparsity.
Proceedings of the 2019 IEEE Computer Society Annual Symposium on VLSI, 2019

pSConv: A Pre-defined Sparse Kernel Based Convolution for Deep CNNs.
Proceedings of the 57th Annual Allerton Conference on Communication, Control, and Computing, 2019

2018
SpRRAM: A Predefined Sparsity Based Memristive Neuromorphic Circuit for Low Power Application.
CoRR, 2018

A Highly Parallel FPGA Implementation of Sparse Neural Network Training.
Proceedings of the 2018 International Conference on ReConFigurable Computing and FPGAs, 2018
