Zeke Xie
ORCID: 0000-0003-4766-435X
According to our database, Zeke Xie authored at least 25 papers between 2017 and 2024.
Bibliography
2024
CoRR, 2024
CoRR, 2024
CoRR, 2024
Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents.
CoRR, 2024
CoRR, 2024
HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models.
CoRR, 2024
Variance-enlarged Poisson Learning for Graph-based Semi-Supervised Learning with Extremely Sparse Labeled Data.
Proceedings of the Twelfth International Conference on Learning Representations, 2024
Proceedings of the Twelfth International Conference on Learning Representations, 2024
2023
On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023
Proceedings of the Eleventh International Conference on Learning Representations, 2023
S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023
2022
Rethinking the Structure of Stochastic Gradients: Empirical and Statistical Evidence.
CoRR, 2022
Proceedings of the International Conference on Machine Learning, 2022
Proceedings of the International Conference on Machine Learning, 2022
2021
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting.
Neural Comput., 2021
Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization.
Proceedings of the 38th International Conference on Machine Learning, 2021
A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima.
Proceedings of the 9th International Conference on Learning Representations, 2021
2020
CoRR, 2020
A Diffusion Theory for Deep Learning Dynamics: Stochastic Gradient Descent Escapes From Sharp Minima Exponentially Fast.
CoRR, 2020
2017
Proceedings of The 9th Asian Conference on Machine Learning, 2017