Yeonjong Shin

Orcid: 0000-0003-4577-1979

According to our database, Yeonjong Shin authored at least 25 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
On the Training and Generalization of Deep Operator Networks.
SIAM J. Sci. Comput., 2024

A Comprehensive Review of Latent Space Dynamics Identification Algorithms for Intrusive and Non-Intrusive Reduced-Order-Modeling.
CoRR, 2024

tLaSDI: Thermodynamics-informed latent space dynamics identification.
CoRR, 2024

2023
Accelerating gradient descent and Adam via fractional gradients.
Neural Networks, April, 2023

Randomized Forward Mode of Automatic Differentiation for Optimization Algorithms.
CoRR, 2023

2022
Active Neuron Least Squares: A Training Method for Multivariate Rectified Neural Networks.
SIAM J. Sci. Comput., August, 2022

Approximation rates of DeepONets for learning operators arising from advection-diffusion equations.
Neural Networks, 2022

Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions.
Neurocomputing, 2022

S-OPT: A Points Selection Algorithm for Hyper-Reduction in Reduced Order Models.
CoRR, 2022

2021
Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance.
SIAM J. Sci. Comput., 2021

GFINNs: GENERIC Formalism Informed Neural Networks for Deterministic and Stochastic Dynamical Systems.
CoRR, 2021

A Caputo fractional derivative-based algorithm for optimization.
CoRR, 2021

Convergence rate of DeepONets for learning operators arising from advection-diffusion equations.
CoRR, 2021

2020
Error estimates of residual minimization using neural networks for linear PDEs.
CoRR, 2020

On the Convergence and generalization of Physics Informed Neural Networks.
CoRR, 2020

2019
Effects of Depth, Width, and Initialization: A Convergence Analysis of Layer-wise Training for Deep Linear Neural Networks.
CoRR, 2019

Trainability and Data-dependent Initialization of Over-parameterized ReLU Neural Networks.
CoRR, 2019

Dying ReLU and Initialization: Theory and Numerical Examples.
CoRR, 2019

2018
Sequential function approximation with noisy data.
J. Comput. Phys., 2018

2017
Sparse Approximation using ℓ1-ℓ2 Minimization and Its Application to Stochastic Collocation.
SIAM J. Sci. Comput., 2017

A Randomized Tensor Quadrature Method for High Dimensional Polynomial Approximation.
SIAM J. Sci. Comput., 2017

A Randomized Algorithm for Multivariate Function Approximation.
SIAM J. Sci. Comput., 2017

2016
Correcting Data Corruption Errors for Multivariate Function Approximation.
SIAM J. Sci. Comput., 2016

Nonadaptive Quasi-Optimal Points Selection for Least Squares Linear Regression.
SIAM J. Sci. Comput., 2016

On a near optimal sampling strategy for least squares polynomial regression.
J. Comput. Phys., 2016
