Yuting Wei

ORCID: 0000-0003-1488-4647

Affiliations:
  • University of Pennsylvania, Wharton School, Department of Statistics and Data Science, Philadelphia, PA, USA
  • Carnegie Mellon University, Department of Statistics and Data Science, Pittsburgh, PA, USA (2019 - 2021)
  • Stanford University, Department of Statistics, Stanford, CA, USA (2018 - 2019)
  • University of California at Berkeley, Department of Statistics, Berkeley, CA, USA (PhD 2018)


According to our database, Yuting Wei authored at least 44 papers between 2016 and 2024.

Bibliography

2024
High-Probability Sample Complexities for Policy Evaluation With Linear Function Approximation.
IEEE Trans. Inf. Theory, August, 2024

Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization.
J. Mach. Learn. Res., 2024

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model.
Oper. Res., 2024

Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis.
Oper. Res., 2024

Hybrid Reinforcement Learning Breaks Sample Size Barriers in Linear MDPs.
CoRR, 2024

A Sharp Convergence Theory for The Probability Flow ODEs of Diffusion Models.
CoRR, 2024

Towards a mathematical theory for consistency training in diffusion models.
CoRR, 2024

A non-asymptotic distributional theory of approximate message passing for sparse and robust regression.
CoRR, 2024

Theoretical insights for diffusion guidance: A case study for Gaussian mixture models.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Accelerating Convergence of Score-Based Diffusion Models, Provably.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Towards Non-Asymptotic Convergence for Diffusion-Based Generative Models.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

2023
Softmax policy gradient methods can take exponential time to converge.
Math. Program., 2023

Federated Natural Policy Gradient Methods for Multi-task Reinforcement Learning.
CoRR, 2023

Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models.
CoRR, 2023

Sharp high-probability sample complexities for policy evaluation with linear function approximation.
CoRR, 2023

Approximate message passing from random initialization with applications to ℤ₂ synchronization.
CoRR, 2023

The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction.
IEEE Trans. Inf. Theory, 2022

Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization.
Oper. Res., 2022

Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model.
CoRR, 2022

A Non-Asymptotic Framework for Approximate Message Passing in Spiked Models.
CoRR, 2022

Mitigating multiple descents: A model-agnostic framework for risk monotonization.
CoRR, 2022

Settling the Sample Complexity of Model-Based Offline Reinforcement Learning.
CoRR, 2022

Minimax-Optimal Multi-Agent RL in Markov Games With a Generative Model.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity.
Proceedings of the International Conference on Machine Learning, 2022

2021
Tackling Small Eigen-Gaps: Fine-Grained Eigenvector Estimation and Inference Under Heteroscedastic Noise.
IEEE Trans. Inf. Theory, 2021

Minimum 𝓁₁-norm interpolators: Precise asymptotics and multiple descent.
CoRR, 2021

Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Tightening the Dependence on Horizon in the Sample Complexity of Q-Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

Softmax Policy Gradient Methods Can Take Exponential Time to Converge.
Proceedings of the Conference on Learning Theory, 2021

Uniform Consistency of Cross-Validation Estimators for High-Dimensional Ridge Regression.
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, 2021

Debiasing Evaluations That Are Biased by Evaluations.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
The Local Geometry of Testing in Ellipses: Tight Control via Localized Kolmogorov Widths.
IEEE Trans. Inf. Theory, 2020

The Lasso with general Gaussian designs with applications to hypothesis testing.
CoRR, 2020

Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification.
CoRR, 2020

Inference for linear forms of eigenvectors under minimal eigenvalue separation: Asymmetry and heteroscedasticity.
CoRR, 2020

Randomized tests for high-dimensional regression: A more efficient and powerful solution.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities.
IEEE Trans. Inf. Theory, 2019

2018
From Gauss to Kolmogorov: Localized Measures of Complexity for Ellipses.
CoRR, 2018

2017
The local geometry of testing in ellipses: Tight control via localized Kolmogorov widths.
CoRR, 2017

The geometry of hypothesis testing over convex cones: Generalized likelihood tests and minimax radii.
CoRR, 2017

2016
Sharp minimax bounds for testing discrete monotone distributions.
Proceedings of the IEEE International Symposium on Information Theory, 2016
