Anirbit Mukherjee

Orcid: 0000-0001-5189-8939

According to our database, Anirbit Mukherjee authored at least 20 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2024
Size Lowerbounds for Deep Operator Networks.
Trans. Mach. Learn. Res., 2024

Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets.
Trans. Mach. Learn. Res., 2024

Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions.
CoRR, 2024

Regularized Gradient Clipping Provably Trains Wide and Deep Neural Networks.
CoRR, 2024

2023
Depth-2 neural networks under a data-poisoning attack.
Neurocomputing, May, 2023

Investigating the Ability of PINNs To Solve Burgers' PDE Near Finite-Time BlowUp.
CoRR, 2023

LIPEx - Locally Interpretable Probabilistic Explanations - To Look Beyond The True Class.
CoRR, 2023

2022
Provable training of a ReLU gate with an iterative non-gradient algorithm.
Neural Networks, 2022

Global Convergence of SGD On Two Layer Neural Nets.
CoRR, 2022

Capacity Bounds for the DeepONet Method of Solving Differential Equations.
CoRR, 2022

An Empirical Study of the Occurrence of Heavy-Tails in Training a ReLU Gate.
CoRR, 2022

2021
Investigating the locality of neural network training dynamics.
CoRR, 2021

A Study of the Mathematics of Deep Learning.
CoRR, 2021

2020
A Study of Neural Training with Non-Gradient and Noise Assisted Gradient Methods.
CoRR, 2020

Guarantees on learning depth-2 neural networks under a data-poisoning attack.
CoRR, 2020

2018
Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders.
CoRR, 2018

Sparse Coding and Autoencoders.
Proceedings of the 2018 IEEE International Symposium on Information Theory, 2018

2017
Lower bounds over Boolean inputs for deep neural networks with ReLU gates.
Electron. Colloquium Comput. Complex., 2017

Understanding Deep Neural Networks with Rectified Linear Units.
Electron. Colloquium Comput. Complex., 2017

Critical Points Of An Autoencoder Can Provably Recover Sparsely Used Overcomplete Dictionaries.
CoRR, 2017
