Martin Takáč

ORCID: 0000-0001-7455-2025

Affiliations:
  • Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE
  • Lehigh University, Bethlehem, PA, USA (former)


According to our database, Martin Takáč authored at least 129 papers between 2011 and 2024.

Bibliography

2024
Preconditioning meets biased compression for efficient distributed optimization.
Comput. Manag. Sci., June, 2024

Stochastic Gradient Methods with Preconditioned Updates.
J. Optim. Theory Appl., May, 2024

Random-reshuffled SARAH does not need full gradient computations.
Optim. Lett., April, 2024

PaDPaF: Partial Disentanglement with Partially-Federated GANs.
Trans. Mach. Learn. Res., 2024

Inexact tensor methods and their application to stochastic convex optimization.
Optim. Methods Softw., 2024

ψDAG: Projected Stochastic Approximation Iteration for DAG Structure Learning.
CoRR, 2024

Enhance Hyperbolic Representation Learning via Second-order Pooling.
CoRR, 2024

Collaborative and Efficient Personalization with Mixtures of Adaptors.
CoRR, 2024

FedPeWS: Personalized Warmup via Subnetworks for Enhanced Heterogeneous Federated Learning.
CoRR, 2024

Methods for Convex (L0,L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity.
CoRR, 2024

MirrorCheck: Efficient Adversarial Defense for Vision-Language Models.
CoRR, 2024

Gradient Clipping Improves AdaGrad when the Noise Is Heavy-Tailed.
CoRR, 2024

Local Methods with Adaptivity via Scaling.
CoRR, 2024

Self-Guiding Exploration for Combinatorial Problems.
CoRR, 2024

Enhancing Policy Gradient with the Polyak Step-Size Adaption.
CoRR, 2024

Generalized Policy Learning for Smart Grids: FL TRPO Approach.
CoRR, 2024

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad.
CoRR, 2024

Reinforcement Learning for Solving Stochastic Vehicle Routing Problem with Time Windows.
CoRR, 2024

AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size.
CoRR, 2024

Federated Learning Can Find Friends That Are Beneficial.
CoRR, 2024

Dirichlet-based Uncertainty Quantification for Personalized Federated Learning with Improved Posterior Networks.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

Advancing the Lower Bounds: an Accelerated, Stochastic, Second-order Method with Optimal Adaptation to Inexactness.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Efficient Conformal Prediction under Data Heterogeneity.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2024

Robustly Train Normalizing Flows via KL Divergence Regularization.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods.
Trans. Mach. Learn. Res., 2023

SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms.
CoRR, 2023

Stochastic Gradient Descent with Preconditioned Polyak Step-size.
CoRR, 2023

MAHTM: A Multi-Agent Framework for Hierarchical Transactive Microgrids.
CoRR, 2023

In Quest of Ground Truth: Learning Confident Models and Estimating Uncertainty in the Presence of Annotator Noise.
CoRR, 2023

Reinforcement Learning Approach to Stochastic Vehicle Routing Problem With Correlated Demands.
IEEE Access, 2023

Regularization of the Policy Updates for Stabilizing Mean Field Games.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2023

Byzantine-Tolerant Methods for Distributed Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

On the Study of Curriculum Learning for Inferring Dispatching Policies on the Job Shop Scheduling.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023

FRESCO: Federated Reinforcement Energy System for Cooperative Optimization.
Proceedings of the First Tiny Papers Track at ICLR 2023, 2023

SP2: A Second Order Stochastic Polyak Method.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Algorithm for Constrained Markov Decision Process with Linear Convergence.
Proceedings of the International Conference on Artificial Intelligence and Statistics, 2023

Reinforcement Learning for Solving Stochastic Vehicle Routing Problem.
Proceedings of the Asian Conference on Machine Learning, 2023

2022
Quasi-Newton methods for machine learning: forget the past, just sample.
Optim. Methods Softw., 2022

A Deep Q-Network for the Beer Game: Deep Reinforcement Learning for Inventory Optimization.
Manuf. Serv. Oper. Manag., 2022

Distributed Learning With Sparsified Gradient Differences.
IEEE J. Sel. Top. Signal Process., 2022

Decentralized personalized federated learning: Lower bounds and optimal algorithm for all personalization modes.
EURO J. Comput. Optim., 2022

Partial Disentanglement with Partially-Federated GANs (PaDPaF).
CoRR, 2022

Optimal Power Flow Pursuit in the Alternating Current Model.
CoRR, 2022

Gradient Descent and the Power Method: Exploiting their connection to find the leftmost eigen-pair and escape saddle points.
CoRR, 2022

FLECS-CGD: A Federated Learning Second-Order Framework via Compression and Sketching with Compressed Gradient Differences.
CoRR, 2022

On Scaled Methods for Saddle Point Problems.
CoRR, 2022

Learning to generalize Dispatching rules on the Job Shop Scheduling.
CoRR, 2022

Learning to Control under Time-Varying Environment.
CoRR, 2022

Robustness Analysis of Classification Using Recurrent Neural Networks with Perturbed Sequential Input.
CoRR, 2022

A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Suppressing Poisoning Attacks on Federated Learning for Medical Imaging.
Proceedings of the Medical Image Computing and Computer Assisted Intervention - MICCAI 2022, 2022

The power of first-order smooth optimization for black-box non-smooth problems.
Proceedings of the International Conference on Machine Learning, 2022

Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Towards Practical Large Scale Non-Linear Semi-Supervised Learning with Balancing Constraints.
Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022

2021
Inexact SARAH algorithm for stochastic optimization.
Optim. Methods Softw., 2021

An accelerated communication-efficient primal-dual optimization framework for structured machine learning.
Optim. Methods Softw., 2021

AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods.
CoRR, 2021

Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences.
Comput. Optim. Appl., 2021

Active metric learning for supervised classification.
Comput. Chem. Eng., 2021

Classification-Aware Path Planning of Network of Robots.
Proceedings of the Distributed Autonomous Robotic Systems - 15th International Symposium, 2021

Improving Text-to-Image Synthesis Using Contrastive Learning.
Proceedings of the 32nd British Machine Vision Conference 2021, 2021

SONIA: A Symmetric Blockwise Truncated Optimization Algorithm.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

2020
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory.
SIAM J. Matrix Anal. Appl., 2020

A robust multi-batch L-BFGS method for machine learning.
Optim. Methods Softw., 2020

A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning.
J. Mach. Learn. Res., 2020

Applying deep learning to the newsvendor problem.
IISE Trans., 2020

Reinforcement Learning based Multi-Robot Classification via Scalable Communication Structure.
CoRR, 2020

DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction.
CoRR, 2020

Constrained Combinatorial Optimization with Reinforcement Learning.
CoRR, 2020

Structural sensing with deep learning: Strain estimation from acceleration data for fatigue assessment.
Comput. Aided Civ. Infrastructure Eng., 2020

Scaling Up Quasi-Newton Algorithms: Communication Efficient Distributed SR1.
Proceedings of the Machine Learning, Optimization, and Data Science, 2020

Finite Difference Neural Networks: Fast Prediction of Partial Differential Equations.
Proceedings of the 19th IEEE International Conference on Machine Learning and Applications, 2020

Efficient Distributed Hessian Free Algorithm for Large-scale Empirical Risk Minimization via Accumulating Sample Strategy.
Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, 2020

2019
New Convergence Aspects of Stochastic Gradient Algorithms.
J. Mach. Learn. Res., 2019

Convolutional Neural Network Approach for Robust Structural Damage Detection and Localization.
J. Comput. Civ. Eng., 2019

Distributed Fixed Point Methods with Compressed Iterates.
CoRR, 2019

FD-Net with Auxiliary Time Steps: Fast Prediction of PDEs using Hessian-Free Trust-Region Methods.
CoRR, 2019

A Layered Architecture for Active Perception: Image Classification using Deep Reinforcement Learning.
CoRR, 2019

Don't Forget Your Teacher: A Corrective Reinforcement Learning Framework.
CoRR, 2019

Quasi-Newton Methods for Deep Learning: Forget the Past, Just Sample.
CoRR, 2019

Distributed Learning with Compressed Gradient Differences.
CoRR, 2019

Multi-Agent Image Classification via Reinforcement Learning.
Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019

Entropy-Penalized Semidefinite Programming.
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019

2018
On the complexity of parallel coordinate descent.
Optim. Methods Softw., 2018

Dual Free Adaptive Minibatch SDCA for Empirical Risk Minimization.
Frontiers Appl. Math. Stat., 2018

On the Acceleration of L-BFGS with Second-Order Information and Stochastic Batches.
CoRR, 2018

Active Metric Learning for Supervised Classification.
CoRR, 2018

Deep Reinforcement Learning for Solving the Vehicle Routing Problem.
CoRR, 2018

Matrix Completion Under Interval Uncertainty: Highlights.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2018

Reinforcement Learning for Solving the Vehicle Routing Problem.
Proceedings of the Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, 2018

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
Hybrid Methods in Solving Alternating-Current Optimal Power Flows.
IEEE Trans. Smart Grid, 2017

A low-rank coordinate-descent algorithm for semidefinite programming relaxations of optimal power flow.
Optim. Methods Softw., 2017

Distributed optimization with arbitrary local solvers.
Optim. Methods Softw., 2017

CoCoA: A General Framework for Communication-Efficient Distributed Optimization.
J. Mach. Learn. Res., 2017

Matrix completion under interval uncertainty.
Eur. J. Oper. Res., 2017

Underestimate Sequences via Quadratic Averaging.
CoRR, 2017

Stock-out Prediction in Multi-echelon Networks.
CoRR, 2017

A Deep Q-Network for the Beer Game with Partial Information.
CoRR, 2017

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization.
CoRR, 2017

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient.
Proceedings of the 34th International Conference on Machine Learning, 2017

Distributed Inexact Damped Newton Method: Data Partitioning and Work-Balancing.
Proceedings of the Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

Distributed Hessian-Free Optimization for Deep Neural Network.
Proceedings of the Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

2016
On optimal probabilities in stochastic coordinate descent methods.
Optim. Lett., 2016

Parallel coordinate descent methods for big data optimization.
Math. Program., 2016

Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting.
IEEE J. Sel. Top. Signal Process., 2016

Distributed Coordinate Descent Method for Learning with Big Data.
J. Mach. Learn. Res., 2016

Linear Convergence of Randomized Feasible Descent Methods Under the Weak Strong Convexity Assumption.
J. Mach. Learn. Res., 2016

Distributed Inexact Damped Newton Method: Data Partitioning and Load-Balancing.
CoRR, 2016

Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption.
CoRR, 2016

Large Scale Distributed Hessian-Free Optimization for Deep Neural Network.
CoRR, 2016

A Multi-Batch L-BFGS Method for Machine Learning.
Proceedings of the Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 2016

SDNA: Stochastic Dual Newton Ascent for Empirical Risk Minimization.
Proceedings of the 33rd International Conference on Machine Learning, 2016

Primal-Dual Rates and Certificates.
Proceedings of the 33rd International Conference on Machine Learning, 2016

2015
Distributed Mini-Batch SDCA.
CoRR, 2015

Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption.
CoRR, 2015

Partitioning Data on Features or Samples in Communication-Efficient Distributed Optimization?
CoRR, 2015

Dual Free SDCA for Empirical Risk Minimization with Adaptive Probabilities.
CoRR, 2015

Adding vs. Averaging in Distributed Primal-Dual Optimization.
Proceedings of the 32nd International Conference on Machine Learning, 2015

2014
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function.
Math. Program., 2014

Inequality-Constrained Matrix Completion: Adding the Obvious Helps!
CoRR, 2014

mS2GD: Mini-Batch Semi-Stochastic Gradient Descent in the Proximal Setting.
CoRR, 2014

Communication-Efficient Distributed Dual Coordinate Ascent.
Proceedings of the Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, 2014

Fast distributed coordinate descent for non-strongly convex losses.
Proceedings of the IEEE International Workshop on Machine Learning for Signal Processing, 2014

2013
TOP-SPIN: TOPic discovery via Sparse Principal component INterference.
CoRR, 2013

Mini-Batch Primal and Dual Methods for SVMs.
Proceedings of the 30th International Conference on Machine Learning, 2013

2012
Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes.
CoRR, 2012

2011
Efficient Serial and Parallel Coordinate Descent Methods for Huge-Scale Truss Topology Design.
Operations Research Proceedings 2011, Selected Papers of the International Conference on Operations Research (OR 2011), August 30, 2011
