Katya Scheinberg

ORCID: 0000-0003-3547-1841

According to our database, Katya Scheinberg authored at least 65 papers between 1996 and 2023.


Bibliography

2023
Stochastic Adaptive Regularization Method with Cubics: a High Probability Complexity Bound.
Proceedings of the Winter Simulation Conference, 2023

2022
Finite Difference Gradient Approximation: To Randomize or Not?
INFORMS J. Comput., 2022

A Theoretical and Empirical Comparison of Gradient Approximations in Derivative-Free Optimization.
Found. Comput. Math., 2022

Finding Optimal Policy for Queueing Models: New Parameterization.
CoRR, 2022

Nesterov Accelerated Shuffling Gradient Method for Convex Optimization.
Proceedings of the International Conference on Machine Learning, 2022

2021
Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise.
SIAM J. Optim., 2021

Inexact SARAH algorithm for stochastic optimization.
Optim. Methods Softw., 2021

Optimal decision trees for categorical data via integer programming.
J. Glob. Optim., 2021

High Probability Complexity Bounds for Line Search Based on Stochastic Oracles.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems, 2021

2020
Adaptive Stochastic Optimization: A Framework for Analyzing Stochastic Optimization Algorithms.
IEEE Signal Process. Mag., 2020

A Stochastic Line Search Method with Expected Complexity Analysis.
SIAM J. Optim., 2020

Adaptive Stochastic Optimization.
CoRR, 2020

2019
A Stochastic Trust Region Algorithm Based on Careful Step Normalization.
INFORMS J. Optim., July 2019

Convergence Rate Analysis of a Stochastic Trust-Region Method via Supermartingales.
INFORMS J. Optim., April 2019

New Convergence Aspects of Stochastic Gradient Algorithms.
J. Mach. Learn. Res., 2019

A Novel Smoothed Loss and Penalty Function for Noncrossing Composite Quantile Estimation via Deep Neural Networks.
CoRR, 2019

Feature Engineering and Forecasting via Integration of Derivative-free Optimization and Ensemble of Sequence-to-sequence Networks: Renewable Energy Case Studies.
CoRR, 2019

Novel and Efficient Approximations for Zero-One Loss of Linear Classifiers.
CoRR, 2019

2018
Stochastic optimization using a trust-region method and random models.
Math. Program., 2018

Global convergence rate analysis of unconstrained optimization methods based on probabilistic models.
Math. Program., 2018

Directly and Efficiently Optimizing Prediction Error and AUC of Linear Classifiers.
CoRR, 2018

When Does Stochastic Gradient Algorithm Work Well?
CoRR, 2018

Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates.
Comput. Optim. Appl., 2018

SGD and Hogwild! Convergence Without the Bounded Gradients Assumption.
Proceedings of the 35th International Conference on Machine Learning, 2018

2017
On the construction of quadratic models for derivative-free trust-region algorithms.
EURO J. Comput. Optim., 2017

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization.
CoRR, 2017

Black-Box Optimization in Machine Learning with Trust Region Based Derivative Free Algorithm.
CoRR, 2017

Optimization Methods for Supervised Machine Learning: From Linear Models to Deep Learning.
CoRR, 2017

SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient.
Proceedings of the 34th International Conference on Machine Learning, 2017

A Novel l0-Constrained Gaussian Graphical Model for Anomaly Localization.
Proceedings of the 2017 IEEE International Conference on Data Mining Workshops, 2017

An Empirical Analysis of Constrained Support Vector Quantile Regression for Nonparametric Probabilistic Forecasting of Wind Power.
Proceedings of the Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, 2017

2016
Practical inexact proximal quasi-Newton method with global complexity analysis.
Math. Program., 2016

Optimal Generalized Decision Trees via Integer Programming.
CoRR, 2016

Proximal Quasi-Newton Methods for Convex Optimization.
CoRR, 2016

2015
A scalable solution for group feature selection.
Proceedings of the 2015 IEEE International Conference on Big Data (IEEE BigData 2015), 2015

Superposition of protein structures using electrostatic isopotentials.
Proceedings of the 2015 IEEE International Conference on Bioinformatics and Biomedicine, 2015

2014
Convergence of Trust-Region Methods Based on Probabilistic Models.
SIAM J. Optim., 2014

Fast First-Order Methods for Composite Convex Optimization with Backtracking.
Found. Comput. Math., 2014

2013
Efficient block-coordinate descent algorithms for the Group Lasso.
Math. Program. Comput., 2013

Fast alternating linearization methods for minimizing the sum of two convex functions.
Math. Program., 2013

On partial sparse recovery.
CoRR, 2013

Efficiently Using Second Order Information in Large l1 Regularization Problems.
CoRR, 2013

Complexity of Inexact Proximal Newton methods.
CoRR, 2013

2012
Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization.
Math. Program., 2012

Aligning ligand binding cavities by optimizing superposed volume.
Proceedings of the 2012 IEEE International Conference on Bioinformatics and Biomedicine, 2012

2010
A Derivative-Free Algorithm for Least-Squares Minimization.
SIAM J. Optim., 2010

Self-Correcting Geometry in Model-Based Algorithms for Derivative-Free Unconstrained Optimization.
SIAM J. Optim., 2010

Learning Sparse Gaussian Markov Networks Using a Greedy Coordinate Ascent Approach.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2010

Sparse Inverse Covariance Selection via Alternating Linearization Methods.
Proceedings of the Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems, 2010

Sparse Markov net learning with priors on regularization parameters.
Proceedings of the International Symposium on Artificial Intelligence and Mathematics, 2010

2009
Global Convergence of General Derivative-Free Trust-Region Algorithms to First- and Second-Order Critical Points.
SIAM J. Optim., 2009

MAP approach to learning sparse Gaussian Markov networks.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2009

Introduction to Derivative-Free Optimization.
MPS-SIAM Series on Optimization 8, SIAM, ISBN: 978-0-89871-668-9, 2009

2008
Preface: Special section on mathematical programming in data mining and machine learning.
Optim. Methods Softw., 2008

Geometry of interpolation sets in derivative free optimization.
Math. Program., 2008

2006
An Efficient Implementation of an Active Set Method for SVMs.
J. Mach. Learn. Res., 2006

IBM Research TRECVID-2006 Video Retrieval System.
Proceedings of the 2006 TREC Video Retrieval Evaluation, 2006

2005
Product-form Cholesky factorization in interior point methods for second-order cone programming.
Math. Program., 2005

2004
A product-form Cholesky factorization method for handling dense columns in interior point methods for linear programming.
Math. Program., 2004

2001
Efficient SVM Training Using Low-Rank Kernel Representations.
J. Mach. Learn. Res., 2001

Incremental Learning and Selective Sampling via Parametric Optimization Framework for SVM.
Proceedings of the Advances in Neural Information Processing Systems 14 (Neural Information Processing Systems: Natural and Synthetic), 2001

1999
A Modified Barrier-Augmented Lagrangian Method for Constrained Minimization.
Comput. Optim. Appl., 1999

1998
Interior Point Trajectories in Semidefinite Programming.
SIAM J. Optim., 1998

1997
Recent progress in unconstrained nonlinear optimization without derivatives.
Math. Program., 1997

1996
Extension of Karmarkar's algorithm onto convex quadratically constrained quadratic problems.
Math. Program., 1996
