Arnulf Jentzen
ORCID: 0000-0002-9840-3339
According to our database, Arnulf Jentzen authored at least 86 papers between 2009 and 2025.
Online presence:
- on zbmath.org
- on orcid.org
- on id.loc.gov
- on d-nb.info
- on ajentzen.de
On csauthors.net:
Bibliography
2025
Averaged Adam accelerates stochastic optimization in the training of deep neural network approximations for partial differential equation and optimal control problems.
CoRR, January, 2025
2024
Weak convergence rates for temporal numerical approximations of the semilinear stochastic wave equation with multiplicative noise.
Numerische Mathematik, December, 2024
Gradient Descent Provably Escapes Saddle Points in the Training of Shallow ReLU Networks.
J. Optim. Theory Appl., December, 2024
On the Existence of Minimizers in Shallow Residual ReLU Neural Network Optimization Landscapes.
SIAM J. Numer. Anal., 2024
Non-convergence to global minimizers in data driven supervised deep learning: Adam and stochastic gradient descent optimization provably fail to converge to global minimizers in the training of deep neural networks with ReLU activation.
CoRR, 2024
An Overview on Machine Learning Methods for Partial Differential Equations: from Physics Informed Neural Networks to Deep Operator Learning.
CoRR, 2024
Non-convergence of Adam and other adaptive stochastic gradient descent optimization methods for non-vanishing learning rates.
CoRR, 2024
Learning rate adaptive stochastic gradient descent optimization methods: numerical simulations for deep learning methods for partial differential equations and convergence analyses.
CoRR, 2024
Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for space-time solutions of semilinear partial differential equations.
CoRR, 2024
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks.
CoRR, 2024
2023
Commun. Nonlinear Sci. Numer. Simul., November, 2023
Lower bounds for artificial neural network approximations: A proof that shallow neural networks fail to overcome the curse of dimensionality.
J. Complex., August, 2023
Space-time error estimates for deep neural network approximations for differential equations.
Adv. Comput. Math., February, 2023
Overcoming the curse of dimensionality in the numerical approximation of backward stochastic differential equations.
J. Num. Math., 2023
CoRR, 2023
Deep neural networks with ReLU, leaky ReLU, and softplus activation provably overcome the curse of dimensionality for Kolmogorov partial differential equations with Lipschitz nonlinearities in the L^p-sense.
CoRR, 2023
Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations.
CoRR, 2023
The necessity of depth for artificial neural networks to approximate certain classes of smooth and bounded functions without the curse of dimensionality.
CoRR, 2023
Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation.
Appl. Math. Comput., 2023
2022
IEEE Trans. Neural Networks Learn. Syst., 2022
Landscape Analysis for Shallow Neural Networks: Complete Classification of Critical Points for Affine Target Functions.
J. Nonlinear Sci., 2022
A proof of convergence for the gradient descent optimization method with random initializations in the training of neural networks with ReLU activation for piecewise linear target functions.
J. Mach. Learn. Res., 2022
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions.
J. Complex., 2022
Overcoming the Curse of Dimensionality in the Numerical Approximation of Parabolic Partial Differential Equations with Gradient-Dependent Nonlinearities.
Found. Comput. Math., 2022
Normalized gradient flow optimization in the training of ReLU artificial neural networks.
CoRR, 2022
On bounds for norms of reparameterized ReLU artificial neural network parameters: sums of fractional powers of the Lipschitz norm control the network parameter vector.
CoRR, 2022
Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions.
CoRR, 2022
Learning the random variables in Monte Carlo simulations with stochastic gradient descent: Machine learning for parametric PDEs and financial derivative pricing.
CoRR, 2022
2021
Non-convergence of stochastic gradient descent in the training of deep neural networks.
J. Complex., 2021
Weak Convergence Rates for Euler-Type Approximations of Semilinear Stochastic Evolution Equations with Nonlinear Diffusion Coefficients.
Found. Comput. Math., 2021
On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks.
CoRR, 2021
Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions.
CoRR, 2021
Strong L^p-error analysis of nonlinear Monte Carlo approximations for high-dimensional semilinear partial differential equations.
CoRR, 2021
Existence, uniqueness, and convergence rates for gradient flows in the training of artificial neural networks with ReLU activation.
CoRR, 2021
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation.
CoRR, 2021
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions.
CoRR, 2021
Landscape analysis for shallow ReLU neural networks: complete classification of critical points for affine target functions.
CoRR, 2021
Full history recursive multilevel Picard approximations for ordinary differential equations with expectations.
CoRR, 2021
Convergence rates for gradient descent in the training of overparameterized artificial neural networks with biases.
CoRR, 2021
2020
Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black-Scholes Partial Differential Equations.
SIAM J. Math. Data Sci., 2020
Exponential moment bounds and strong convergence rates for tamed-truncated numerical approximations of stochastic convolutions.
Numer. Algorithms, 2020
Overcoming the curse of dimensionality in the numerical approximation of Allen-Cahn partial differential equations via truncated full-history recursive multilevel Picard approximations.
J. Num. Math., 2020
Convergence Rates for the Stochastic Gradient Descent Method for Non-Convex Objective Functions.
J. Mach. Learn. Res., 2020
Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates.
J. Complex., 2020
An overview on deep learning-based approximation methods for partial differential equations.
CoRR, 2020
Strong overall error analysis for the training of artificial neural networks via random initializations.
CoRR, 2020
High-dimensional approximation spaces of artificial neural networks and applications to partial differential equations.
CoRR, 2020
Deep learning based numerical approximation algorithms for stochastic partial differential equations and high-dimensional nonlinear filtering problems.
CoRR, 2020
Nonlinear Monte Carlo methods with polynomial runtime for high-dimensional iterated nested expectations.
CoRR, 2020
Multilevel Picard approximations for high-dimensional semilinear second-order PDEs with Lipschitz nonlinearities.
CoRR, 2020
Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning.
CoRR, 2020
CoRR, 2020
Space-time deep neural network approximations for high-dimensional partial differential equations.
CoRR, 2020
Numerical simulations for full history recursive multilevel Picard approximations for systems of high-dimensional partial differential equations.
CoRR, 2020
Overcoming the curse of dimensionality in the numerical approximation of high-dimensional semilinear elliptic partial differential equations.
CoRR, 2020
2019
On Multilevel Picard Numerical Approximations for High-Dimensional Nonlinear Parabolic Partial Differential Equations and High-Dimensional Nonlinear Backward Stochastic Differential Equations.
J. Sci. Comput., 2019
Machine Learning Approximation Algorithms for High-Dimensional Fully Nonlinear Partial Differential Equations and Second-order Backward Stochastic Differential Equations.
J. Nonlinear Sci., 2019
On arbitrarily slow convergence rates for strong numerical approximations of Cox-Ingersoll-Ross processes and squared Bessel processes.
Finance Stochastics, 2019
CoRR, 2019
Uniform error estimates for artificial neural network approximations for heat equations.
CoRR, 2019
Strong convergence rates on the whole probability space for space-time discrete numerical approximation schemes for stochastic Burgers equations.
CoRR, 2019
Towards a regularity theory for ReLU networks - chain rule and global error estimates.
CoRR, 2019
2018
Exponential integrability properties of numerical approximation processes for nonlinear stochastic differential equations.
Math. Comput., 2018
A proof that deep artificial neural networks overcome the curse of dimensionality in the numerical approximation of Kolmogorov partial differential equations with constant diffusion and nonlinear drift coefficients.
CoRR, 2018
A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations.
CoRR, 2018
Solving stochastic differential equations and Kolmogorov equations by means of deep learning.
CoRR, 2018
2017
Overcoming the curse of dimensionality: Solving high-dimensional partial differential equations using deep learning.
CoRR, 2017
Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations.
CoRR, 2017
2016
2015
2013
SIAM J. Numer. Anal., 2013
2011
SIAM J. Numer. Anal., 2011
Found. Comput. Math., 2011
2010
An Improved Maximum Allowable Transfer Interval for L^p-Stability of Networked Control Systems.
IEEE Trans. Autom. Control., 2010
2009
Pathwise approximation of stochastic differential equations on domains: higher order convergence rates without global Lipschitz coefficients.
Numerische Mathematik, 2009