Aryan Mokhtari

ORCID: 0000-0001-6603-0091

According to our database, Aryan Mokhtari authored at least 123 papers between 2013 and 2024.

Bibliography

2024
Statistical and Computational Complexities of BFGS Quasi-Newton Method for Generalized Linear Models.
Trans. Mach. Learn. Res., 2024

Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions.
CoRR, 2024

Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization.
CoRR, 2024

Stochastic Newton Proximal Extragradient Method.
CoRR, 2024

In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness.
CoRR, 2024

An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem.
CoRR, 2024

Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024

2023
Provably Private Distributed Averaging Consensus: An Information-Theoretic Approach.
IEEE Trans. Inf. Theory, November 2023

Straggler-Resilient Personalized Federated Learning.
Trans. Mach. Learn. Res., 2023

Non-asymptotic superlinear convergence of standard quasi-Newton methods.
Math. Program., 2023

Limited-Memory Greedy Quasi-Newton Method with Non-asymptotic Superlinear Convergence Rate.
CoRR, 2023

Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing and Neural Networks with Quadratic Activations.
CoRR, 2023

Greedy Pruning with Group Lasso Provably Generalizes for Matrix Sensing.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Accelerated Quasi-Newton Proximal Extragradient: Faster Rate for Smooth Convex Optimization.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Network Adaptive Federated Learning: Congestion and Lossy Compression.
Proceedings of the IEEE INFOCOM 2023, 2023

Meta-Learning for Image-Guided Millimeter-Wave Beam Selection in Unseen Environments.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2023

InfoNCE Loss Provably Learns Cluster-Preserving Representations.
Proceedings of the Thirty Sixth Annual Conference on Learning Theory, 2023

Online Learning Guided Curvature Approximation: A Quasi-Newton Method with Global Non-Asymptotic Superlinear Convergence.
Proceedings of the Thirty Sixth Annual Conference on Learning Theory, 2023

A Conditional Gradient-based Method for Simple Bilevel Optimization with Convex Lower-level Problem.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

2022
Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity.
IEEE J. Sel. Areas Inf. Theory, 2022

Generalized Frank-Wolfe Algorithm for Bilevel Optimization.
CoRR, 2022

Generalized Optimistic Methods for Convex-Concave Saddle Point Problems.
CoRR, 2022

Future gradient descent for adapting the temporal shifting data distribution in online recommendation systems.
Proceedings of the Uncertainty in Artificial Intelligence, 2022

FedAvg with Fine Tuning: Local Updates Lead to Representation Learning.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Sharpened Quasi-Newton Methods: Faster Superlinear Rate and Larger Local Convergence Neighborhood.
Proceedings of the International Conference on Machine Learning, 2022

MAML and ANIL Provably Learn Representations.
Proceedings of the International Conference on Machine Learning, 2022

Adaptive Node Participation for Straggler-Resilient Federated Learning.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2022

The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance.
Proceedings of the Conference on Learning Theory, 2022

How Does the Task Landscape Affect MAML Performance?
Proceedings of the Conference on Lifelong Learning Agents, 2022

Minimax Optimization: The Case of Convex-Submodular.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2022

2021
Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Exploiting Shared Representations for Personalized Federated Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

Federated Learning with Compression: Unified Analysis and Sharp Guarantees.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
High-Dimensional Nonconvex Stochastic Optimization by Doubly Stochastic Successive Convex Approximation.
IEEE Trans. Signal Process., 2020

Convergence Rate of O(1/k) for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems.
SIAM J. Optim., 2020

Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization.
SIAM J. Optim., 2020

Stochastic Quasi-Newton Methods.
Proc. IEEE, 2020

A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning.
J. Mach. Learn. Res., 2020

Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization.
J. Mach. Learn. Res., 2020

Why Does MAML Outperform ERM? An Optimization Perspective.
CoRR, 2020

Safe Learning under Uncertain Objectives and Constraints.
CoRR, 2020

Quantized Push-sum for Gossip and Decentralized Optimization over Directed Graphs.
CoRR, 2020

Personalized Federated Learning: A Meta-Learning Approach.
CoRR, 2020

Provably Convergent Policy Gradient Methods for Model-Agnostic Meta-Reinforcement Learning.
CoRR, 2020

Distribution-Agnostic Model-Agnostic Meta-Learning.
CoRR, 2020

Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Task-Robust Model-Agnostic Meta-Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Submodular Meta-Learning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Quantized Decentralized Stochastic Learning over Directed Graphs.
Proceedings of the 37th International Conference on Machine Learning, 2020

One Sample Stochastic Frank-Wolfe.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

Quantized Frank-Wolfe: Faster Optimization, Lower Communication, and Projection Free.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

DAve-QN: A Distributed Averaged Quasi-Newton Method with Local Superlinear Convergence Rate.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
An Exact Quantized Decentralized Gradient Descent Algorithm.
IEEE Trans. Signal Process., 2019

A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization.
IEEE Trans. Signal Process., 2019

A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points.
SIAM J. Optim., 2019

A Decentralized Proximal Point-type Method for Saddle Point Problems.
CoRR, 2019

Proximal Point Approximations Achieving a Convergence Rate of O(1/k) for Smooth Convex-Concave Saddle Point Problems: Optimistic Gradient and Extra-gradient Methods.
CoRR, 2019

Stochastic Conditional Gradient++.
CoRR, 2019

Quantized Frank-Wolfe: Communication-Efficient Distributed Optimization.
CoRR, 2019

Robust and Communication-Efficient Collaborative Learning.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Stochastic Continuous Greedy++: When Upper and Lower Bounds Match.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

Achieving Acceleration in Distributed Optimization via Direct Discretization of the Heavy-Ball ODE.
Proceedings of the 2019 American Control Conference, 2019

Efficient Nonconvex Empirical Risk Minimization via Adaptive Sample Size Methods.
Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, 2019

2018
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate.
SIAM J. Optim., 2018

IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate.
SIAM J. Optim., 2018

Direct Runge-Kutta Discretization Achieves Acceleration.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Escaping Saddle Points in Constrained Optimization.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication.
Proceedings of the 35th International Conference on Machine Learning, 2018

Decentralized Submodular Maximization: Bridging Discrete and Continuous Settings.
Proceedings of the 35th International Conference on Machine Learning, 2018

Parallel Stochastic Successive Convex Approximation Method for Large-Scale Dictionary Learning.
Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, 2018

Quantized Decentralized Consensus Optimization.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018

A Newton Method for Faster Navigation in Cluttered Environments.
Proceedings of the 57th IEEE Conference on Decision and Control, 2018

Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2018

2017
Network Newton Distributed Optimization Methods.
IEEE Trans. Signal Process., 2017

Decentralized Quasi-Newton Methods.
IEEE Trans. Signal Process., 2017

Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation.
IEEE Trans. Signal Process., 2017

Decentralized Prediction-Correction Methods for Networked Time-Varying Convex Optimization.
IEEE Trans. Autom. Control., 2017

First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

Large-scale nonconvex stochastic optimization by Doubly Stochastic Successive Convex Approximation.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017

A Diagonal-Augmented quasi-Newton method with application to factorization machines.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017

A double incremental aggregated gradient method with linear convergence rate for large-scale optimization.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017

An incremental quasi-Newton method with a local superlinear convergence rate.
Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, 2017

A primal-dual Quasi-Newton method for consensus optimization.
Proceedings of the 51st Asilomar Conference on Signals, Systems, and Computers, 2017

2016
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization.
IEEE Trans. Signal Process., 2016

DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers.
IEEE Trans. Signal Process., 2016

A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization.
IEEE Trans. Signal Inf. Process. over Networks, 2016

DSA: Decentralized Double Stochastic Averaging Gradient Algorithm.
J. Mach. Learn. Res., 2016

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy.
CoRR, 2016

A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning.
CoRR, 2016

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

Decentralized constrained consensus optimization with primal dual splitting projection.
Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing, 2016

An asynchronous Quasi-Newton method for consensus optimization.
Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing, 2016

A data-driven approach to stochastic network optimization.
Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing, 2016

A quasi-Newton prediction-correction method for decentralized dynamic convex optimization.
Proceedings of the 15th European Control Conference, 2016

A decentralized second-order method for dynamic optimization.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016

Online optimization in dynamic environments: Improved regret rates for strongly convex problems.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016

A decentralized quasi-Newton method for dual formulations of consensus optimization.
Proceedings of the 55th IEEE Conference on Decision and Control, 2016

Doubly random parallel stochastic methods for large scale learning.
Proceedings of the 2016 American Control Conference, 2016

ESOM: Exact second-order method for consensus optimization.
Proceedings of the 50th Asilomar Conference on Signals, Systems and Computers, 2016

Doubly stochastic algorithms for large-scale optimization.
Proceedings of the 50th Asilomar Conference on Signals, Systems and Computers, 2016

2015
Global convergence of online limited memory BFGS.
J. Mach. Learn. Res., 2015

An approximate Newton method for distributed optimization.
Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, 2015

Decentralized quadratically approximated alternating direction method of multipliers.
Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing, 2015

Target tracking with dynamic convex optimization.
Proceedings of the 2015 IEEE Global Conference on Signal and Information Processing, 2015

A decentralized prediction-correction method for networked time-varying convex optimization.
Proceedings of the 6th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2015

Prediction-correction methods for time-varying convex optimization.
Proceedings of the 49th Asilomar Conference on Signals, Systems and Computers, 2015

Decentralized double stochastic averaging gradient.
Proceedings of the 49th Asilomar Conference on Signals, Systems and Computers, 2015

2014
RES: Regularized Stochastic BFGS Algorithm.
IEEE Trans. Signal Process., 2014

A quasi-Newton method for large scale support vector machines.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2014

Network Newton.
Proceedings of the 48th Asilomar Conference on Signals, Systems and Computers, 2014

2013
A dual stochastic DFP algorithm for optimal resource allocation in wireless systems.
Proceedings of the 14th IEEE Workshop on Signal Processing Advances in Wireless Communications, 2013

Regularized stochastic BFGS algorithm.
Proceedings of the IEEE Global Conference on Signal and Information Processing, 2013

