Albert S. Berahas

ORCID: 0000-0002-2371-9398

According to our database, Albert S. Berahas authored at least 25 papers between 2014 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
First- and second-order high probability complexity bounds for trust-region methods with noisy oracles.
Math. Program., September, 2024

Non-Uniform Smoothness for Gradient Descent.
Trans. Mach. Learn. Res., 2024

Second-order Information Promotes Mini-Batch Robustness in Variance-Reduced Gradients.
CoRR, 2024

2023
Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction.
Comput. Optim. Appl., September, 2023

Full-low evaluation methods for derivative-free optimization.
Optim. Methods Softw., March, 2023

Collaborative and Distributed Bayesian Optimization via Consensus: Showcasing the Power of Collaboration for Optimal Design.
CoRR, 2023

2022
Quasi-Newton methods for machine learning: forget the past, just sample.
Optim. Methods Softw., 2022

Limited-memory BFGS with displacement aggregation.
Math. Program., 2022

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization.
Found. Comput. Math., 2022

2021
On the Convergence of Nested Decentralized Gradient Methods With Multiple Consensus and Gradient Steps.
IEEE Trans. Signal Process., 2021

Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise.
SIAM J. Optim., 2021

Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization.
SIAM J. Optim., 2021

SONIA: A Symmetric Blockwise Truncated Optimization Algorithm.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
A robust multi-batch L-BFGS method for machine learning.
Optim. Methods Softw., 2020

An investigation of Newton-Sketch and subsampled Newton methods.
Optim. Methods Softw., 2020

Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1.
Proceedings of Machine Learning, Optimization, and Data Science, 2020

Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations.
Proceedings of the 19th IEEE International Conference on Machine Learning and Applications, 2020

2019
Balancing Communication and Computation in Distributed Optimization.
IEEE Trans. Autom. Control., 2019

Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods.
SIAM J. Optim., 2019

Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample.
CoRR, 2019

Nested Distributed Gradient Methods with Adaptive Quantized Communication.
Proceedings of the 58th IEEE Conference on Decision and Control, 2019

2016
adaQN: An Adaptive Quasi-Newton Algorithm for Training RNNs.
Proceedings of Machine Learning and Knowledge Discovery in Databases, 2016

A Multi-Batch L-BFGS Method for Machine Learning.
Proceedings of Advances in Neural Information Processing Systems 29, 2016

Multi-model robust error correction for face recognition.
Proceedings of the 2016 IEEE International Conference on Image Processing, 2016

2014
Sparse representation and least squares-based classification in face recognition.
Proceedings of the 22nd European Signal Processing Conference, 2014

