Masahito Ueda

Orcid: 0000-0002-5367-1436

According to our database, Masahito Ueda authored at least 27 papers between 2003 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.



Bibliography

2025
Symbolic equation solving via reinforcement learning.
Neurocomputing, 2025

2023
Law of Balance and Stationary Distribution of Stochastic Gradient Descent.
CoRR, 2023

The Probabilistic Stability of Stochastic Gradient Descent.
CoRR, 2023

What shapes the loss landscape of self-supervised learning?
Proceedings of the Eleventh International Conference on Learning Representations, 2023

2022
Three Learning Stages and Accuracy-Efficiency Tradeoff of Restricted Boltzmann Machines.
CoRR, 2022

Exact Phase Transitions in Deep Learning.
CoRR, 2022

Stochastic Neural Networks with Infinite Width are Deterministic.
CoRR, 2022

Interplay between depth of neural networks and locality of target functions.
CoRR, 2022

Power-Law Escape Rate of SGD.
Proceedings of the International Conference on Machine Learning, 2022

Convergent and Efficient Deep Q Learning Algorithm.
Proceedings of the Tenth International Conference on Learning Representations, 2022

SGD Can Converge to Local Maxima.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Strength of Minibatch Noise in SGD.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
SGD May Never Escape Saddle Points.
CoRR, 2021

A Convergent and Efficient Deep Q Network Algorithm.
CoRR, 2021

Logarithmic landscape and power-law escape rate of SGD.
CoRR, 2021

On Minibatch Noise: Discrete-Time SGD, Overparametrization, and Bayes.
CoRR, 2021

Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Stochastic Gradient Descent with Large Learning Rate.
CoRR, 2020

Improved generalization by noise enhancement.
CoRR, 2020

Is deeper better? It depends on locality of relevant features.
CoRR, 2020

Volumization as a Natural Generalization of Weight Decay.
CoRR, 2020

Learning Not to Learn in the Presence of Noisy Labels.
CoRR, 2020

LaProp: a Better Way to Combine Momentum with Adaptive Gradient.
CoRR, 2020

Neural Networks Fail to Learn Periodic Functions and How to Fix It.
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020

2019
Deep Reinforcement Learning Control of Quantum Cartpoles.
CoRR, 2019

Deep Gamblers: Learning to Abstain with Portfolio Theory.
Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019

2003
Einstein-Podolsky-Rosen correlation seen from moving observers.
Quantum Inf. Comput., 2003
