Yaqi Duan

Orcid: 0000-0002-2392-5642

According to our database, Yaqi Duan authored at least 23 papers between 2019 and 2024.


Bibliography

2024
Proteomic Stratification of Prognosis and Treatment Options for Small Cell Lung Cancer.
Genom. Proteom. Bioinform., 2024

Taming "data-hungry" reinforcement learning? Stability in continuous state-action spaces.
CoRR, 2024

2023
PU-Flow: A Point Cloud Upsampling Network With Normalizing Flows.
IEEE Trans. Vis. Comput. Graph., December, 2023

Learning Good State and Action Representations for Markov Decision Process via Tensor Decomposition.
J. Mach. Learn. Res., 2023

A finite-sample analysis of multi-step temporal difference estimates.
Proceedings of the Learning for Dynamics and Control Conference, 2023

Invertible Residual Neural Networks with Conditional Injector and Interpolator for Point Cloud Upsampling.
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023

2022
Policy Evaluation in Batch Reinforcement Learning
PhD thesis, 2022

High-temperature augmented neighborhood metric learning for cross-domain fault diagnosis with imbalanced data.
Knowl. Based Syst., 2022

Policy evaluation from a single path: Multi-step methods, mixing and mis-specification.
CoRR, 2022

Adaptive and Robust Multi-task Learning.
CoRR, 2022

Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
Optimal policy evaluation using kernel-based temporal difference methods.
CoRR, 2021

PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows.
CoRR, 2021

Bootstrapping Statistical Inference for Off-Policy Evaluation.
CoRR, 2021

Learning Good State and Action Representations via Tensor Decomposition.
Proceedings of the IEEE International Symposium on Information Theory, 2021

Bootstrapping Fitted Q-Evaluation for Off-Policy Inference.
Proceedings of the 38th International Conference on Machine Learning, 2021

Sparse Feature Selection Makes Batch Reinforcement Learning More Sample Efficient.
Proceedings of the 38th International Conference on Machine Learning, 2021

Risk Bounds and Rademacher Complexity in Batch Reinforcement Learning.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Adaptive Low-Nonnegative-Rank Approximation for State Aggregation of Markov Chains.
SIAM J. Matrix Anal. Appl., 2020

Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation.
CoRR, 2020

Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation.
Proceedings of the 37th International Conference on Machine Learning, 2020

2019
Learning low-dimensional state embeddings and metastable clusters from time series data.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019

State Aggregation Learning from Markov Transition Data.
Proceedings of the Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, 2019
